Machine Learning Engineer, Baseline (Remote)
Remote
Applications have closed
The Role: We are looking for a Machine Learning Engineer to help build cutting-edge systems for our mission to map and monitor the planet's forests. The Baseline team develops models to evaluate the performance of conservation and reforestation projects and compute the number of carbon credits they should be issued. As a member of the Baseline team, you will research, design, implement, and deploy models to ensure that forest carbon credits represent real emissions reductions.
A typical day includes implementing new machine learning models with remote sensing data, designing experiments to validate their performance, pair coding with other engineers, and discussing results and experiment plans with scientists. The quality of model outputs directly impacts the quality of forest carbon projects. Model validation and uncertainty quantification are core values for our team.
We're looking for engineers who find joy in the craft of building and want to make an impact. Engineers who push forward initiatives by asking great questions, cutting through ambiguity, and organizing to win. Engineers who are relentlessly detail-oriented, methodical in their approach to understanding trade-offs, and who place the highest emphasis on building, and building quickly.
Location: This role is fully remote.
About Pachama: Pachama is a mission-driven company looking to restore nature to help address climate change. Pachama brings the latest technology in remote sensing and AI to the world of forest carbon in order to enable forest conservation and restoration to scale. Pachama's core technology harnesses satellite imaging with artificial intelligence to measure carbon captured in forests. Through the Pachama marketplace, responsible companies and individuals can connect with carbon credits from projects that are protecting and restoring forests worldwide.
What You Will Help Us With:
- Training machine learning models to evaluate the performance of conservation and reforestation projects.
- Designing statistical frameworks and experiments to assess the quality of these models on real-world data.
- Optimizing these models to run efficiently on large amounts of geospatial and remote sensing data.
- Helping to construct tools that enable research and operations to produce high-quality performance metrics for forest carbon projects.
- Advocating for scientific and engineering best practices applied to our machine learning work.
Experience & Skills We're Looking For:
- Machine learning and statistics fundamentals with an ability to apply these skills to new domains like forest science and remote sensing.
- Strong software engineering practices and a background in Python programming, debugging/profiling, and version control. Our tech stack includes Kubernetes, Dask, and Flyte, along with open-source geospatial tools such as Rasterio, GeoPandas, and Xarray.
- Experience working in cluster environments and an understanding of the related distributed systems concepts (CPU/GPU interactions and transfers, latency/throughput bottlenecks, pipelining/multiprocessing, etc.).
- Ability to find and synthesize related academic literature to apply these learnings to model and experiment design.
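To give a flavor of the pipelining/multiprocessing concepts mentioned above, here is a minimal sketch of overlapping I/O-bound tile loading with downstream per-tile inference using only the Python standard library. All function names and data are illustrative stand-ins, not Pachama's actual pipeline; a genuinely CPU-bound inference stage would typically use a process pool or GPU batching instead of threads.

```python
from concurrent.futures import ThreadPoolExecutor


def load_tile(tile_id):
    """Stand-in for fetching one remote-sensing raster tile (I/O-bound)."""
    return [tile_id * 10 + i for i in range(4)]


def predict(pixels):
    """Stand-in for per-tile model inference (CPU/GPU-bound in practice)."""
    return sum(pixels) / len(pixels)


def run_pipeline(tile_ids, workers=4):
    # Overlap tile fetches with a thread pool so the inference stage is
    # not starved waiting on I/O; pool.map preserves input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [predict(tile) for tile in pool.map(load_tile, tile_ids)]
```

Calling `run_pipeline([1, 2, 3])` returns one mean "prediction" per tile, computed as tiles stream out of the pool.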