Research Scientist, Security

London, UK

Applications have closed

DeepMind

Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe artificial intelligence systems. We're committed to solving intelligence, to advance science and benefit humanity.

Applications close at 5pm on Friday 10th March and will be reviewed shortly thereafter.

At DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and secure development and deployment at every step. AI Security research at DeepMind investigates questions related to security, privacy, robustness, interpretability, and trust in machine learning systems, as well as their intersection with traditional information systems. Proactive research in these areas is essential to DeepMind's long-term goal: to build safe, secure and socially beneficial AI systems.

Research on AI security draws on expertise in computer security, vulnerability research, deep learning, reinforcement learning, statistics, and the foundations of agent models. Research Scientists work at the forefront of technical approaches to designing systems that are secure by design and function as intended, while discovering and mitigating emerging risks, in close collaboration with other AI research groups within and outside of DeepMind.

DeepMind is active within the wider research community through publications and partnerships with many of the world’s top academics and academic institutions. We have built a hardworking and engaging culture, combining the best of academia with product-led environments, providing an ambitious balance of structure and flexibility.

Our approach encourages collaboration across all groups within the Research and Engineering Team, creating scope for ambitious, creative breakthroughs at the forefront of research.

About Us

We’ve built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don’t set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible.

Our list of benefits is extensive, and we’re happy to discuss this further throughout the interview process.

The role

Key responsibilities:

  • Identify and investigate failure modes for current and future AI systems, and partner with the broader engineering group to build solutions to address them.
  • Conduct empirical or theoretical research into technical safety and security mechanisms for AI systems, in coordination with the broader technical research agenda at DeepMind.
  • Collaborate with research teams internally and externally to ensure that AI capabilities research is informed by and adheres to the most advanced safety and security research and protocols.
  • Report and present research findings and developments to internal and external collaborators with effective written and verbal communication.

About you

To set you up for success as a Research Scientist on this team, we look for the following skills and experience:

  • PhD in a technical field or equivalent practical experience.
  • Strong engineering track record including experience with distributed systems and one or more machine learning frameworks.

In addition, the following would be an advantage:

  • PhD in machine learning, computer science, statistics, computational neuroscience, mathematics, or physics.
  • Relevant research experience in deep learning, machine learning, reinforcement learning, statistics, or computational neuroscience.
  • A real passion for AI/ML and security.

Competitive salary applies.


Tags: Computer Science Deep Learning Distributed Systems Engineering Machine Learning Mathematics PhD Physics Privacy Reinforcement Learning Research Security Statistics

Perks/benefits: Career development Competitive pay

Region: Europe
Country: United Kingdom
