Data Scientist, Model Alignment and Safety

San Francisco

Applications have closed

OpenAI


The OpenAI Alignment team fine-tunes ML models to exhibit better and safer behavior. We want our models to follow the user’s intentions while avoiding harms such as causing emotional distress, perpetuating systemic biases, and spreading misinformation. Our goal is for increasingly intelligent models to benefit humanity, and for our models to benefit humans, we need to think deeply about the data we train them on. One approach we’re currently pursuing is fine-tuning our pre-trained models on data that captures the behavior we care about (see our previous work).

We’re looking for a Data Scientist to lead the data collection process for fine-tuning our models. You will play a central role in ensuring that OpenAI’s models are actually doing what we want them to do before (and after) we deploy them as products, including the OpenAI API!

This role lies at the intersection of two areas of expertise: (1) thinking deeply about how our models should behave and what data we need to collect to achieve this, collaborating with relevant partners at OpenAI and with experts in fields related to applied ethics and the harms of technology, and (2) writing code to collect data from human labelers and perform detailed data analysis.

In this role you will:

  • Collect and manage high-quality data for fine-tuning models at OpenAI before they get deployed as products.
  • Perform experiments to understand the impact of collected data on model behavior, including statistical analysis and data visualization.
  • Work with human labeling services to refine the procedures and guidelines for collecting high-quality human annotation data, and implement quality control processes (see the illustrative sketch after this list).
  • Collaborate with our Policy team: help make decisions on how our models should behave and what data we need to collect to achieve this; engage with the literature on preference elicitation and participatory design and distill the findings into something we can engineer; and advise on the limits of what our models can and cannot do.
  • Collaborate with our Applied team: ensure the data we collect is useful for our deployed products.
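As a concrete, purely hypothetical example of the quality control work mentioned above: the minimal Python sketch below computes Cohen's kappa between two labelers as a chance-corrected agreement check on the same set of items. The labeler names and the binary "safe"/"unsafe" annotation scheme are invented for illustration; this is not OpenAI's actual tooling or labeling pipeline.

    from collections import Counter

    def cohen_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two annotators on the same items."""
        assert len(labels_a) == len(labels_b) and labels_a
        n = len(labels_a)
        # Observed agreement: fraction of items where both annotators agree.
        p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected agreement: probability both pick the same label by chance,
        # estimated from each annotator's label frequencies.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_expected = sum(
            (freq_a[c] / n) * (freq_b[c] / n)
            for c in set(labels_a) | set(labels_b)
        )
        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical example: two labelers rating the same six model responses.
    annotator_1 = ["safe", "safe", "unsafe", "safe", "unsafe", "safe"]
    annotator_2 = ["safe", "unsafe", "unsafe", "safe", "unsafe", "safe"]
    print(f"Cohen's kappa: {cohen_kappa(annotator_1, annotator_2):.2f}")  # ~0.67

In practice, low agreement on a check like this would typically prompt a revision of the annotation guidelines or additional calibration with the labeling service.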

This role may be a fit if you:

  • Have strong data analysis skills.
  • Care deeply about the data we use to train ML models and about reducing the harms of AI systems.
  • Have strong communication skills and are excited to collaborate across multiple teams.

Nice to have:

  • Industry experience collecting data for training machine learning models in settings where the alignment, safety, or social impact of the system was a central concern.
  • Interest or experience in training machine learning models.
  • Interest and familiarity with sociology, critical data studies, mechanism design for social good, or other relevant fields.
  • Experience with managing human labeling processes and working with labelers.

About OpenAI

We’re building safe Artificial General Intelligence (AGI), and ensuring it leads to a good outcome for humans. We believe that unreasonably great results are best delivered by a highly creative group working in concert. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

This position is subject to a background check for any convictions directly related to its duties and responsibilities. Only job-related convictions will be considered and will not automatically disqualify the candidate. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodations via accommodation@openai.com.
Benefits

  • Health, dental, and vision insurance for you and your family
  • Unlimited time off (we encourage 4+ weeks per year)
  • Parental leave
  • Flexible work hours
  • Lunch and dinner each day
  • 401(k) plan with matching

Tags: AGI, APIs, Data analysis, Data visualization, Machine Learning, ML models, OpenAI

Perks/benefits: Career development, Flex hours, Flex vacation, Health care, Insurance, Parental leave, Unlimited paid time off

Region: North America
Country: United States
Category: Data Science Jobs
