Privacy Engineer III, Machine Learning
Bengaluru, Karnataka, India
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience designing solutions that maintain or enhance the privacy posture of the organization by analyzing and assessing proposed engineering designs (e.g., product features, infrastructure systems) and influencing stakeholders.
- 2 years of experience applying privacy technologies (e.g., differential privacy, automated access management solutions) and customizing existing solutions and frameworks to meet organizational needs.
Preferred qualifications:
- Experience managing multiple high-priority requests while determining resource allocation (e.g., time, prioritization) to solve problems in a fast-paced, changing organization.
- Experience in end-to-end development of ML models and applications.
- Knowledge of common regulatory frameworks (e.g., GDPR, CCPA).
- Understanding of privacy principles and a passion for keeping people and their data safe.
About the job
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
The Governance team manages risk and compliance objectives, specifically risks about data, products, and software systems within Google. Our aim is to ensure that systems, products, and data are managed responsibly to keep our users, employees, and partners safe.
Google's innovations in AI, especially Generative AI, have created a new and exciting domain with immense potential. As innovation moves forward, Google and the broader industry need increased privacy, safety, and security standards for building and deploying AI responsibly.
To help meet this need, the Generative AI Assessments team's mission is to build up Google's assessment capabilities for generative AI applications.
The Core team builds the technical foundation behind Google's flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google's products to build central solutions, break down technical barriers, and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.
Responsibilities
- Conduct privacy impact assessments and drive privacy outcomes for artificial intelligence datasets, models, products, and features.
- Escalate critical and novel artificial intelligence risks to central and product leadership forums, as needed.
- Design and develop technical documentation across teams to drive consistent privacy decisions within the artificial intelligence domain.
- Work with internal tools and systems for understanding and assessing machine learning data and model lineage, properties, and risks.