Researcher: AI and Genomics Futures (Ada Lovelace Institute)
- Closing: 11:59am, 25th Aug 2021 BST
The Ada Lovelace Institute is hiring a researcher on a 24-month contract for a horizon scanning project focused on the ethics of AI and genomics.
The project is a collaboration with the Nuffield Council on Bioethics and possibly further partners. Working in tandem with our Associate Director of Research Partnerships, this researcher will manage a multi-phase project that has three aims:
1. To facilitate conversations and foster links between researchers and practitioners working across different sectors at the intersection of AI and genomics research and innovation (R&I).
2. To identify emerging and future trends in AI genomics R&I and its application, and clarify the potential ethical and societal issues raised.
3. To explore the horizons and potential opportunities of AI genomics research and development, and their socio-ethical implications, with the intention of informing and shaping future policymaking and industry practice in this area.
Among other responsibilities, you will be responsible for:
- Carrying out a literature review of AI and genomics issues.
- Leading qualitative interviews with experts in AI and genomics applications and research.
- Organising and facilitating futures workshops with experts on genomics and AI, including scenario-building exercises and analysis.
- Turning the results of these workshops into recommendations for policymakers and other key stakeholders.
- Preparing documents for publication to a high standard on behalf of the partner organisations.
We expect this project to run from October 2021 to December 2022, with additional follow-on projects possible after that period.
This role is an excellent opportunity for a junior-to-mid-career scholar interested in exploring the emerging intersections of AI and genomics research, including the kinds of complex ethical, social, and legal issues that will arise and salient commercial trends that may develop.
Context for the project
The convergence of AI and genomics technologies is poised to have a significant impact on medical research, healthcare and societies across the globe. Together these technologies could, in future, create novel research techniques, enable more accurate predictions to be made about a person’s health, perhaps from birth, and create personalised treatments and therapies. In its 10-year genomics healthcare strategy, the UK Government has committed to understanding, through the use of machine learning and AI, how genomically informed healthcare and prevention could be improved and how these could be implemented in the NHS. The implications of research in this area are likely to extend beyond healthcare, for example to education and criminal justice.
Research and collaboration on AI and genomics is taking place in and across the broad fields of health research and AI technology development. However, those working in these sectors often operate within different working cultures and, in some cases, under different governance and ethical frameworks. They have yet to meaningfully connect on the opportunities and challenges posed by AI and genomics and may have different endpoints in mind.
In addition, there are gaps in knowledge about the actors that are undertaking research and innovation (R&I) at the intersection of AI and genomics, where this research is taking place, and the topics that are under investigation. An understanding of this research could help identify possible future trends and the applications it could lead to (both health and non-health).
With a clearer picture of possible future trends, the potential ethical and social issues raised could be considered. Such insights could inform decision makers (such as policy makers, regulators, ethics review boards, journal editors, research funders, and biotech leaders) as they prepare for different future scenarios of AI genomics. Issues to consider are likely to include: the use of AI genomics research for non-health purposes and linkage with non-health data sets, algorithmic fairness and representation in genomic datasets, consent and ‘right to know’ issues, misuse of genomic information, shifts in responsibility for individuals’ health, and the responsibilities of different actors operating in this field.
Ideal candidates will have some combination of the following skills:
- Excellent project management skills, with the ability to manage multiple stakeholders across several organisations.
- Experience in futures methodologies, including scenario building, workshop facilitation, and qualitative interviewing and analysis.
- Experience in public engagement methodologies, such as citizens' juries and participatory futures methodologies.
- A deep understanding of the ethical challenges and emerging applications of AI and/or genomics.
How to apply
The closing date for applications is 11:59am (BST) on Wednesday 25th August 2021, with interviews taking place via video the following week.
You will be required to respond to some questions as part of your online application, as well as to upload an up-to-date copy of your CV. The Applied platform lets you save a draft application and return to it at any point before the application deadline.
We strongly encourage applicants from backgrounds that are underrepresented in the research, policy and technology sectors (for example, those from a marginalised community, those who did not go to university, or those who received free school meals as a child). We are committed to tackling societal injustice and inequality through our work, and believe that all kinds of experiences and backgrounds can contribute to this mission.
About the Ada Lovelace Institute
The Ada Lovelace Institute is an independent research institute and deliberative body funded and incubated by the Nuffield Foundation in 2018. Our mission is to ensure data and artificial intelligence work for people and society. We do this by building evidence and fostering rigorous debate on how data and AI affect people and society. We recognise the power asymmetries that exist in ethical and legal debates around the development of data-driven technologies and seek to level those asymmetries by convening diverse voices and creating a shared understanding of the ethical issues arising from data and AI. Finally, we seek to define and inform good practice in the design and deployment of AI technologies.
After little more than a year of operation, the Institute has emerged as a leading independent voice on the ethical and societal impacts of data and AI. We have built relationships in the public, private and civil society sectors in the UK and internationally. Some of our most impactful work to date includes our rapid evidence review on contact tracing apps, Exit Through the App Store?, and our public attitudes and engagement work on biometrics, including our Beyond Face Value survey and Citizens’ Biometrics Council.
We aim to be a collaborative, welcoming and informal place to work. Before Covid-19 the team worked flexibly, with some working from home regularly or on an ad hoc basis. We now operate fully remotely, using collaborative working tools such as Microsoft Teams with regular video calls. We are currently a 15-person team and have returned to some in-person working (in a shiny new office in Farringdon), but we are open to staff working remotely for part of the week and will consider requests to work remotely from other parts of the UK. Find out more about us at https://www.adalovelaceinstitute.org/
Removing bias from the hiring process
- Your application will be anonymously reviewed by our hiring team to ensure fairness
- You’ll need a CV/résumé, but it’ll only be considered if you score well on the anonymous review
Applications closed Wed 25th Aug 2021