Machine Learning Engineer, Baseline (Remote)

Remote

Pachama

Harnessing AI to drive carbon capture and protect global forests

The Role: We are looking for a Machine Learning Engineer to help build cutting-edge systems for our mission to map and monitor the planet's forests. The Baseline team develops models to evaluate the performance of conservation and reforestation projects and compute the number of carbon credits they should be issued. As a member of the Baseline team, you will research, design, implement, and deploy models to ensure that forest carbon credits represent real emissions reductions.
A typical day includes implementing new machine learning models with remote sensing data, designing experiments to validate their performance, pair coding with other engineers, and discussing results and experiment plans with scientists. The quality of model outputs directly impacts the quality of forest carbon projects. Model validation and uncertainty quantification are core values for our team.
We're looking for engineers who find joy in the craft of building and want to make an impact. Engineers who push forward initiatives by asking great questions, cutting through ambiguity, and organizing to win. Engineers who are relentlessly detail-oriented, methodical in their approach to understanding trade-offs, and who place the highest emphasis on building, and building quickly.
Location: This role is fully remote.
About Pachama: Pachama is a mission-driven company looking to restore nature to help address climate change. Pachama brings the latest technology in remote sensing and AI to the world of forest carbon in order to enable forest conservation and restoration to scale. Pachama’s core technology harnesses satellite imaging with artificial intelligence to measure carbon captured in forests. Through the Pachama marketplace, responsible companies and individuals can connect with carbon credits from projects that are protecting and restoring forests worldwide.

What You Will Help Us With:

  • Training machine learning models to evaluate the performance of conservation and reforestation projects.
  • Designing statistical frameworks and experiments to assess the quality of these models on real-world data.
  • Optimizing these models to run efficiently on large amounts of geospatial and remote sensing data.
  • Helping to construct tools that enable research and operations to produce high-quality performance metrics for forest carbon projects.
  • Advocating for scientific and engineering best practices applied to our machine learning work.

Experience & Skills We're Looking For:

  • Machine learning and statistics fundamentals with an ability to apply these skills to new domains like forest science and remote sensing.
  • Strong software engineering practices and a background in Python programming, debugging/profiling, and version control. Our tech stack includes Kubernetes, Dask, and Flyte, along with open-source geospatial tools such as Rasterio, Geopandas, and Xarray.
  • Experience working in cluster environments and an understanding of related distributed systems concepts (CPU/GPU interactions and transfers, latency/throughput bottlenecks, pipelining/multiprocessing, etc.).
  • Ability to find and synthesize relevant academic literature and apply those findings to model and experiment design.
Even if you don’t meet all these requirements, we encourage you to apply if this job description excites you. We are looking for ambitious people to help make an impact on climate change. That purpose requires us to bring together a diverse set of people with different backgrounds, perspectives, and skills to create solutions that work for all.
