ML Research Scientist — LLM Safety

San Francisco, CA

Dynamo AI

Dynamo AI offers end-to-end AI Performance, Security, and Compliance solutions for delivering Enterprise-grade Generative AI.


At Dynamo AI, we believe that LLMs must be developed with safety, privacy, and real-world responsibility in mind. Our ML team comes from a culture of academic research driven to democratize AI advancements responsibly. By operating at the intersection of ML research and industry applications, our team empowers Fortune 500 companies’ adoption of frontier research for their next generation of LLM products. Join us if you:

  • Care about pushing the frontier of novel research on performant, responsible, and unbiased LLMs and don’t accept the status quo of helpfulness-safety tradeoffs.
  • Are excited by the idea of democratizing state-of-the-art research on safe and responsible AI, and are motivated to work at a 2023 CB Insights Top 100 AI Startup where you will see your impact on end customers in weeks, not years.
  • Wish to work on the premier platform for compliant LLMs. We provide the fastest end-to-end solution for deploying research in the real world with our fast-paced team of ML Ph.D.s and builders, free of Big Tech and academic bureaucracy and constraints.

Responsibilities

  • Spearhead an LLM research domain with a focus on safety, quality, explainability, evaluation, and/or adversarial attacks.
  • Push the envelope by developing novel techniques and research that deliver the world’s most harmless and helpful models. Your research will directly empower our customers to deploy safe and responsible LLMs more feasibly.
  • Co-author papers, patents, and presentations with our research team. Collaborate with our engineering, product, and sales teams to deliver state-of-the-art techniques to end customers.
  • Lead research projects end-to-end, including generating high quality synthetic data, training LLMs, and conducting rigorous benchmarking.
  • Deliver generalizable, scalable, and reproducible algorithms.

Qualifications

  • Deep research domain knowledge in LLM safety techniques.
  • Extensive research experience in aligning, training, and/or attacking a variety of LLM architectures in real-world settings. Comfort with leading end-to-end projects.
  • Adaptability and flexibility. In both the academic and startup world, a new finding in the community may necessitate an abrupt shift in focus. You must be able to learn, implement, and extend state-of-the-art research.
Dynamo AI is committed to maintaining compliance with all applicable local and state laws regarding job listings and salary transparency. This includes adhering to specific regulations that mandate the disclosure of salary ranges in job postings or upon request during the hiring process. We strive to ensure our practices promote fairness, equity, and transparency for all candidates.
Salary for this position may vary based on several factors, including the candidate's experience, expertise, and the geographic location of the role. Compensation is determined to ensure competitiveness and equity, reflecting the cost of living in different regions and the specific skills and qualifications of the candidate.