Machine Learning Engineer (Remote)

San Francisco, California

Applications have closed

Pachama

Harnessing AI to drive carbon capture and protect global forests



We are looking for a Machine Learning Engineer to help build cutting-edge systems for our mission to map and monitor the planet's forests. As a member of the Verification team, you will research, design, implement, and deploy deep learning models that advance the state of the art in carbon mapping. A typical day includes reading deep learning papers and code, implementing the models and algorithms they describe, adapting them to our setting, collaborating with other engineers, and incrementally tracking and improving performance. We're looking for engineers who find joy in the craft of building and want to make an impact: engineers who push initiatives forward by asking great questions, cutting through ambiguity, and organizing to win; who are relentlessly detail-oriented and methodical in weighing trade-offs; and who place the highest emphasis on building, and building quickly.
Who we are:
Pachama is a mission-driven company working to restore nature and help address climate change. We bring the latest technology in remote sensing and AI to the world of forest carbon, enabling forest conservation and restoration to scale. Our core technology combines satellite imagery with artificial intelligence to measure the carbon captured in forests. Through the Pachama marketplace, responsible companies and individuals can connect with carbon credits from projects that are protecting and restoring forests worldwide.
We are backed by mission-aligned investors including Breakthrough Energy Ventures, Amazon Climate Fund, Chris Sacca, Saltwater Ventures, and Paul Graham.

Responsibilities:

  • Develop state-of-the-art algorithms in one or more of the following areas: deep learning (convolutional neural networks), object detection and classification, multi-task learning, large-scale distributed training, and multi-sensor fusion.
  • Train machine learning and deep learning models on a computing cluster to perform carbon mapping and anomaly detection.
  • Optimize models and the associated pre- and post-processing code to run efficiently on large volumes of geospatial data.
  • Help develop a research roadmap to deliver on open questions and advance the performance of best-in-class models.
  • Advocate for scientific and engineering best practices.

You will:

  • Have strong software engineering practices and be very comfortable with Python programming, debugging/profiling, and version control.
  • Be very comfortable in cluster environments and understand the related computer systems concepts (CPU/GPU interactions and transfers, latency/throughput bottlenecks during neural network training, CUDA, pipelining/multiprocessing, etc.).
  • Have a strong understanding of the under-the-hood fundamentals of deep learning (layer details, backpropagation, etc.).
  • Be able to read and implement related academic literature, with experience applying state-of-the-art deep learning models to remote-sensing data or a closely related area.
  • Be familiar with remote-sensing data such as satellite imagery, lidar, and radar.



Regions: Remote/Anywhere North America
Country: United States
