Machine Learning Engineer (Remote)
San Francisco, California
Applications have closed
We are looking for a Machine Learning Engineer to help build cutting-edge systems for our mission to map and monitor the planet's forests. As a member of the Verification team, you will research, design, implement, and deploy deep learning models that advance the state of the art in carbon mapping. A typical day includes reading deep learning code and papers, implementing the described models and algorithms, adapting them to our setting, collaborating with other engineers, and incrementally tracking and improving performance. We're looking for engineers who find joy in the craft of building and want to make an impact: engineers who push initiatives forward by asking great questions, cutting through ambiguity, and organizing to win; engineers who are relentlessly detail-oriented, methodical in their approach to understanding trade-offs, and who place the highest emphasis on building, and building quickly.
Who we are:
Pachama is a mission-driven company looking to restore nature to help address climate change. Pachama brings the latest technology in remote sensing and AI to the world of forest carbon in order to enable forest conservation and restoration to scale. Pachama’s core technology harnesses satellite imaging with artificial intelligence to measure carbon captured in forests. Through the Pachama marketplace, responsible companies and individuals can connect with carbon credits from projects that are protecting and restoring forests worldwide.
We are backed by mission-aligned investors including Breakthrough Energy Ventures, Amazon Climate Fund, Chris Sacca, Saltwater Ventures, and Paul Graham.
Responsibilities:
- Develop state-of-the-art algorithms in one or more of the following areas: deep learning (convolutional neural networks), object detection/classification, multi-task learning, large-scale distributed training, multi-sensor fusion, etc.
- Train machine learning and deep learning models on a computing cluster to perform carbon mapping and anomaly detection.
- Optimize models and the associated preprocessing/post-processing code to run efficiently on large amounts of geospatial data.
- Help develop a research roadmap to deliver on open questions and advance the performance of best-in-class models.
- Advocate for scientific and engineering best practices.
You will:
- Have strong software engineering practices and be very comfortable with Python programming, debugging/profiling, and version control.
- Be very comfortable in cluster environments and understand the related computer-systems concepts (CPU/GPU interactions and transfers, latency/throughput bottlenecks during neural network training, CUDA, pipelining/multiprocessing, etc.).
- Have a strong understanding of the under-the-hood fundamentals of deep learning (layer details, backpropagation, etc.).
- Be able to read and implement related academic literature, with experience applying state-of-the-art deep learning models to remote-sensing data or a closely related area.
- Have familiarity with remote-sensing data such as satellite imagery, LIDAR, and radar.
Regions: Remote/Anywhere; North America
Country: United States