Researcher - Algorithm Accountability and Research Ethics

Nuffield Foundation

Location: We're London-based but open to remote working from within the UK
Salary: £34,433 per annum FTE (dependent upon experience). 1-year FTC, full time (35 hrs per week); part-time arrangements considered.
Team: Ada Lovelace Institute
Closing: 5:00pm, 5th Feb 2021 GMT

Job Description

The role 

The Ada Lovelace Institute is hiring a full-time researcher on a 12-month contract for our Algorithm accountability research pillar. Working in tandem with our Senior Researcher on Algorithm accountability, you will initially work on a six-month project exploring institutional ethical review practices for AI research, and will lead or assist on other Algorithm accountability projects throughout the year. This role is an excellent opportunity for a junior-to-mid-career researcher interested in how we can make AI more accountable, including exploring how to turn ethical principles for AI research into actionable practices for universities and private firms.

Ideal candidates will have: 

  • Personal experience or awareness of how AI research is conducted at public and private institutions

  • Qualitative interviewing and analysis skills

  • Excellent project management skills

Among other outputs, you will be responsible for:

  • Creating a literature review of research ethics review processes, including how universities and corporate research firms conduct reviews of the ethical implications, broader societal impacts, and methodologies of AI research

  • Co-leading qualitative interviews with experts in the public and private sector on research ethics and impacts

  • Organising convenings, including workshops and roundtables, with experts on algorithm accountability

  • Developing recommendations for how AI research labs should evaluate the impacts of their research

  • Co-authoring a public report via the Ada Lovelace Institute website

We expect this project to run from March to July 2021. Additional projects may relate to auditing and impact assessments of AI products and algorithms, regulatory inspection of algorithmic systems, and other projects falling under the Algorithm accountability research pillar.

About you

You may have a background working in the tech industry, or researching and co-ordinating for an academic organisation, research institute or community charity. You may have a university degree, or have gained experience through an apprenticeship, trainee programme, bootcamp or on the job. You are curious and passionate about the issues that arise at the intersection of technology and society, and are committed to bringing an interdisciplinary and intersectional lens to understanding them. You’ll be comfortable taking the initiative, working independently and, at times, to short deadlines. You’ll enjoy working in a team environment, be willing to jump into projects, and be keen to explore areas of policy, technology and practice that you don’t already understand. You’ll appreciate the importance of exceptionally high standards of rigour in research, but also want to think creatively about communicating and influencing in novel ways.

How to apply

The closing date for applications is 12:00 midday GMT on Friday 5th February 2021, with interviews taking place via video the week of the 15th February.

If you are from a background that is underrepresented in the sector (for example, you are from a community of colour, did not go to university, or had free school meals as a child) and you would like to discuss how your experience may be transferable to this role, you can book time with a member of our team, who will be pleased to have a chat with you. Please note that this person will not be involved in the recruitment process. You can request this by emailing hello@adalovelaceinstitute.org (and we will not ask you to disclose your background).

About the Ada Lovelace Institute

The Ada Lovelace Institute is an independent research institute and deliberative body funded and incubated by the Nuffield Foundation in 2018. Our mission is to ensure data and artificial intelligence work for people and society. We do this by building evidence and fostering rigorous debate on how data and AI affect people and society.  We recognise the power asymmetries that exist in ethical and legal debates around the development of data-driven technologies and seek to level those asymmetries by convening diverse voices and creating a shared understanding of the ethical issues arising from data and AI. Finally, we seek to define and inform good practice in the design and deployment of AI technologies.

After little more than a year of operation, the Institute has emerged as a leading independent voice on the ethical and societal impacts of data and AI. We have built relationships in the public, private and civil society sectors in the UK and internationally. Some of our most impactful work to date includes our rapid evidence review on contact tracing apps, Exit Through the App Store?, and our public attitudes survey on facial recognition, Beyond Face Value. Our research broadly focuses on four pillars:

  • Data for the public good: evaluating and understanding the social value of data; promoting data stewardship; advocating for data rights and regulation; and addressing issues of data injustice.

  • Algorithm accountability: understanding how algorithmic systems are changing the delivery of public services; exploring mechanisms for auditing and assessing algorithmic systems; developing new methods to ensure algorithms are transparent and accountable to those affected by them.

  • Justice and equity: understanding how data and AI interact with identity, race and ethnicity; identifying mechanisms for preventing the inequitable and discriminatory impact of data-driven technologies in domains such as health, education, and criminal justice.

  • COVID-19 technologies: exploring AI, data, and healthcare, particularly digital and technical responses to the COVID pandemic such as contact tracing apps and vaccine certification schemes.

Our research takes an interconnected approach to issues such as power, social justice, distributional impact and climate change (read our strategy to find out more), and our team have a wide range of expertise that cuts across policy, technology, academia, industry, law and human rights.  We value diversity in backgrounds, skills, perspectives, and life experiences. Because we are part of the Nuffield Foundation, we are a small team with the practical support of an established organisation that cares for its employees.

The Ada Lovelace Institute values diversity, equity and inclusion in our workplace. Our work and culture are strengthened by our differences in experience, national origin, religion, culture, sexual orientation, and other backgrounds. We welcome applications from people of colour, women, the LGBTQIA community, people with disabilities, and people who identify with other traditionally underrepresented and minoritised backgrounds.

We aim to be a collaborative, welcoming and informal place to work. Before COVID-19 the team worked flexibly, with some working from home regularly or on an ad hoc basis. We now operate fully remotely, using collaborative working tools such as Microsoft Teams, with regular video calls. We are currently a 15-person team and expect to return to some in-person working in 2021 (and will have a shiny new office in Farringdon in early 2021), but we are open to staff working remotely for the foreseeable future, including in UK locations outside of London. Find out more about us at https://www.adalovelaceinstitute.org/

Removing bias from the hiring process

  • Your application will be anonymously reviewed by our hiring team to ensure fairness
  • You’ll need a CV/résumé, but it’ll only be considered if you score well on the anonymous review

Applications closed Fri 5th Feb 2021