Research Engineer - Scalable Alignment

London, UK

Full Time Senior-level / Expert


Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe artificial intelligence systems. We're committed to solving intelligence, to advance science and benefit humanity.


At DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, maternity or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.


At DeepMind, we've built a unique culture and work environment where long-term, ambitious research can flourish. Our interdisciplinary team combines the best techniques from deep learning, reinforcement learning and systems neuroscience to build general-purpose learning algorithms. We have already made a number of high-profile breakthroughs towards building artificial general intelligence, and we have all the ingredients in place to make further significant progress over the coming year!

About us

We’re a dedicated scientific community, committed to “solving intelligence” and ensuring our technology is used for widespread public benefit.

We’ve built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don’t set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible.

Our list of benefits is extensive, and we’re happy to discuss this further throughout the interview process.

The role

Alignment Research Engineers at DeepMind work directly on a wide range of conceptual, theoretical and empirical research projects, typically in collaboration with Research Scientists. You will apply your engineering and research skills to accelerate research progress through developing prototypes, designing and scaling up algorithms, overcoming technical obstacles, and designing, running, and analysing experiments.

The team

The goal of the Scalable Alignment Team (SAT) is to make highly capable agents do what humans want, even when it is difficult for humans to know what that is. This means we want to remove subtle biases, factual errors, and deceptive behaviour even when they would normally go unnoticed by humans, whether because of human reasoning failures or biases, or because of very capable behaviour by the agents.

To achieve this, we ask humans what they want and train agents to do that, assisting humans in judgements by providing evidence, outlining arguments, and pointing out subtleties. As language is a key medium for human communication, much of SAT's work revolves around large language models (LLMs) such as Chinchilla, which we fine-tune using techniques such as human preference RL, debate, evidence citation, and LM red teaming. We view LLMs both as a tool for safety, by enabling human-machine communication, and as examples of ML models that may cause both near-term and long-term harms. Since our goal is to do what humans want, the uncertainties involved are about humans, not just ML; we need to carefully design the interaction between humans and machines to achieve answers humans would endorse after careful reflection.
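At the core of human preference RL is a reward model trained from pairwise human comparisons. A minimal sketch of the standard Bradley–Terry pairwise loss is below; the function name and setup are illustrative, not any internal DeepMind API, and in practice the rewards come from a learned head on top of an LLM rather than raw floats.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss: -log P(chosen preferred over rejected).

    A reward model is trained to assign higher scores to the responses
    humans preferred, by minimising this loss over comparison pairs.
    """
    margin = reward_chosen - reward_rejected
    # -log(sigmoid(margin)): log(2) when the scores tie, shrinking
    # toward zero as the preferred response is scored higher.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The fitted reward model then supplies the training signal for RL fine-tuning of the LLM, typically with a penalty that keeps the policy close to the base model.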

We view human interaction as only one component of safety, and work with many other teams at DeepMind, including Alignment, Ethics and Society, and Strategy and Governance, to build a unified overall strategy.

Key responsibilities

  • Design and run experiments with large language models to further the team’s research agenda, including collecting human evaluation data and fine-tuning LLMs with reinforcement learning.
  • Architect and design code for fast iteration and ease of use, promoting code reuse and model reuse.
  • Develop testing methodologies that allow rapid experimentation without breaking existing code.
  • Optimise performance, both on single machines and across many machines.
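One common way to allow rapid experimentation without breaking the code is to run every change through a cheap smoke test first: train a tiny stub model on synthetic data and assert that the loss still falls, catching broken training logic in seconds rather than after an expensive LLM run. The sketch below is purely illustrative, not the team's actual harness.

```python
def train_step(param: float, batch, lr: float = 0.1) -> float:
    """One gradient step for a toy linear model y = param * x (squared error)."""
    grad = sum((param * x - y) * x for x, y in batch) / len(batch)
    return param - lr * grad

def smoke_test() -> float:
    """Train the stub briefly and check that the loss actually drops.

    Cheap enough to run on every commit, before launching any
    large-scale experiment.
    """
    param, batch = 0.0, [(1.0, 2.0), (2.0, 4.0)]  # true relationship: y = 2x
    for _ in range(50):
        param = train_step(param, batch)
    loss = sum((param * x - y) ** 2 for x, y in batch) / len(batch)
    assert loss < 1e-3, "training regressed"
    return param
```

The same pattern scales up: swap the stub for a miniature version of the real model and a handful of real batches, and keep the assertion on a loss threshold.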

About you


  • Bachelor's degree in a technical subject (e.g. machine learning, AI, computer science, mathematics, physics, statistics), or equivalent experience.
  • Ability to write code in at least one programming language, preferably Python or C++.
  • Knowledge of mathematics, statistics and machine learning concepts needed to understand research papers in the field.
  • Ability to communicate technical ideas effectively, e.g. through discussions, whiteboard sessions, written documentation.

Nice to have:

  • Knowledge of ML/scientific libraries such as TensorFlow, JAX, PyTorch, Numpy and Pandas.
  • Machine learning and research experience from industry, academia, or personal projects.
  • Experience pre-training or fine-tuning large language models.
  • Familiarity with distributed scientific computation, whether CPU, GPU, TPU, or heterogeneous.
  • Experience with large-scale system design.
  • A passion for making AGI go well.



Job perks/benefits: Career development
