Research Scientist Intern 2023 (Safety)

London, UK

Applications have closed

DeepMind

Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe artificial intelligence systems, and we’re committed to solving intelligence to advance science and benefit humanity.

At DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

About us

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at DeepMind investigates questions related to objective specification, robustness, interpretability, and trust in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of DeepMind Research: to build safe and socially beneficial AI systems.

Research on technical AI safety draws on expertise in deep learning, reinforcement learning, statistics, and the foundations of agent models. As an intern, you’ll work at the forefront of technical approaches to designing systems that reliably function as intended, while discovering and mitigating possible long-term risks, in close collaboration with other AI research groups within and outside of DeepMind.

Snapshot

DeepMind is active within the wider research community through publications and partnerships with many of the world’s top academics and academic institutions. We have built a hardworking and engaging culture, combining the best of academia with product-led environments and providing a balance of structure and flexibility. Our approach encourages collaboration across all groups within the Research team, creating scope for creative breakthroughs at the forefront of research.

The role

Interns work with our Research Scientists to:

  • Identify and investigate possible failure modes for current and future AI systems, and proactively develop solutions to address them
  • Conduct empirical or theoretical research into technical safety mechanisms for AI systems in coordination with the team’s broader technical agenda
  • Collaborate with research teams within and outside of DeepMind to ensure that AI capabilities research is informed by and adheres to the most advanced safety research and protocols
  • Report and present research findings and developments to internal and external collaborators with effective written and verbal communication
  • Suggest and engage in team collaborations to meet research goals

See our blog: DeepMind Safety Research.

About you

To apply, you should be:

  • Studying towards a PhD in a technical field
  • Available for 16-20 weeks during 2023 (please note: there is no fixed start month)
  • Excited about analysing and improving the safety of AI systems

 
