Privacy Engineer III, Machine Learning

Bengaluru, Karnataka, India

Google

Google’s mission is to organize the world's information and make it universally accessible and useful.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience designing solutions that maintain or enhance the privacy posture of the organization by analyzing and assessing proposed engineering designs (e.g., product features, infrastructure systems) and influencing stakeholders.
  • 2 years of experience applying privacy technologies (e.g., differential privacy, automated access management solutions, etc.), and customizing existing solutions and frameworks to meet organizational needs.

Preferred qualifications:

  • Experience managing multiple high-priority requests while determining resource allocation (e.g., time, prioritization) to solve problems in a fast-paced, changing organization.
  • Experience in end-to-end development of ML models and applications.
  • Knowledge of common regulatory frameworks (e.g., GDPR, CCPA).
  • Understanding of privacy principles, and a passion for keeping people and their data safe.

About the job

Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.

The Governance team manages risk and compliance objectives, focusing on risks related to data, products, and software systems within Google. Our aim is to ensure that systems, products, and data are managed responsibly to keep our users, employees, and partners safe.

Google's innovations in AI, especially Generative AI, have created a new and exciting domain with immense potential. As innovation moves forward, Google and the broader industry need increased privacy, safety, and security standards for building and deploying AI responsibly.

To help meet this need, the Generative AI Assessments team's mission is to build up Google's assessment capabilities for generative AI applications.

The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

Responsibilities

  • Conduct privacy impact assessments and drive privacy outcomes for artificial intelligence datasets, models, products, and features.
  • Escalate critical and novel artificial intelligence risks to central and product leadership forums, as needed.
  • Design and develop technical documentation across teams to drive consistent privacy decisions within the artificial intelligence domain.
  • Work with internal tools and systems for understanding and assessing machine learning data and model lineage, properties, and risks.
