Senior Data Pipeline Engineer II

South San Francisco, CA

Applications have closed


This role is open to remote within the US or onsite at our headquarters in South San Francisco.

Why join Freenome?

Freenome is a high-growth biotech company that has been on a mission since 2014 to create tools that empower everyone to prevent, detect, and treat their disease.

To achieve this mission, Freenome is developing next-generation blood tests to detect cancer in its earliest, most treatable stages using our multiomics platform and machine learning techniques. Our first blood test will detect early-stage colorectal cancer and advanced adenomas.

To fight the war on cancer, Freenome has raised more than $1.1B from leading investors, including a16z, GV (formerly Google Ventures), T. Rowe Price, Bain Capital, Perceptive Advisors, RA Capital Management, Roche, Kaiser Permanente Ventures, and the American Cancer Society’s BrightEdge Ventures.

Are you ready for the fight? A ‘Freenomer’ is a mission-driven employee who is fueled by the opportunity to make a positive impact on patients' lives, who thrives in a culture of respect and cross-functional collaboration, and whose work makes a significant impact on the company and their career. Freenomers are determined, patient-centric, and outcomes-driven. We build teams around divergent expertise, allowing us to solve problems and identify opportunities in unique ways. We are dedicated to advancing healthcare, one breakthrough at a time.

About this opportunity:

At Freenome, we are seeking a Senior Data Pipeline Engineer to develop software, data systems, and pipelines to combat cancer. You'll be responsible for building a business intelligence platform that sheds light on all internally generated data to help improve and refine our processes, including handling heterogeneous data through data warehousing and ETL pipelines. Our systems are built using the latest web software development technologies and methodologies, including those you will help choose. The ideal candidate is excited to take the lead on major projects and collaborate actively with our world-class team of engineers, scientists, designers, and product managers. You are passionate about building reliable, maintainable, scalable, and fault-tolerant data pipelines, and you will have a significant impact on the continued growth of a high-profile technology organization that is changing the landscape of early cancer detection.

The role reports to our engineering management team.

What you’ll do:

  • Design, develop, and deploy reliable, maintainable, scalable, and fault-tolerant data pipelines and services that power our internal experiments and analyses
  • Work with scientists, product managers, and other engineers to solve complex problems in a dynamic, uncertain environment
  • Collaborate with team members for code and design review
  • Mentor junior engineers and grow our team’s technical expertise
  • Lead and champion data engineering best practices and team culture as a core part of the engineering backbone

Must haves:

  • 5+ years of experience as part of a software engineering team, successfully shipping one or more data pipelines used by multiple people or groups
  • Expertise with a scripting language: Python, JavaScript, Ruby, Scala, Go, etc.
  • Extensive knowledge of Redshift, BigQuery, or similar technologies
  • Expertise with a variety of data stores: SQL, NoSQL, columnar, time series, etc.
  • Demonstrated experience with handling and transforming large multivariate datasets via ETL pipelines
  • Experience designing and implementing scalable data systems for multiple applications
  • Prior experience with mentoring more junior coworkers
  • Excellent written and verbal communication skills
  • The ability to thrive in an environment where collaboration, communication, and compromise are an expected part of your day-to-day work
  • A mindful, transparent, and humane approach to your work and your interactions with others

Nice to haves:

  • Experience with Python, GCP, Kubernetes, Docker
  • Previous experience with managing projects or technical leadership of teams
  • Understanding of, and practical experience with, statistical and machine learning methods
  • Domain-specific experience in computational biology, genomics, or a related field

Benefits and additional information:

The US target range of our base salary for new hires is $157,250 - $215,000. You will also be eligible to receive pre-IPO equity, cash bonuses, and a full range of medical, financial, and other benefits dependent on the position offered. Please note that individual total compensation for this position will be determined at the Company’s sole discretion and may vary based on several factors, including, but not limited to, location, skill level, years and depth of relevant experience, and education. We invite you to check out our career page at https://careers.freenome.com/ for additional company information.

Freenome is proud to be an equal opportunity employer and we value diversity. Freenome does not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.

Applicants have rights under Federal Employment Laws.  

#LI-Remote

Tags: BigQuery Biology Business Intelligence Data pipelines Data Warehousing Docker Engineering ETL GCP JavaScript Kubernetes Machine Learning NoSQL Pipelines Python Redshift Ruby Scala SQL Statistics

Perks/benefits: Career development Equity Medical leave Startup environment

Regions: Remote/Anywhere North America
Country: United States
Category: Engineering Jobs
