Data Engineer

Palo Alto, California, United States

Applications have closed

Jerry

Jerry AllCar™ App: Get Car Insurance, Loan Refinance, Repair Estimates, Driving Score and more.


We’d love to hear from you if you like:

  • Start-up energy, working with a brilliant and passionate team
  • Exponential growth (5 straight quarters of 50-100%+ quarter-over-quarter growth)
  • Flat structure and access to senior leadership for continuous mentorship
  • Meritocracy: we promote based on performance, not tenure
  • Rockstar teammates: you will work with a strong team with prior experience at Amazon, Microsoft, NVIDIA, Alibaba, and more

About jerry.ai:

Jerry.ai is an AI-powered personal concierge for your car and home. Our mission is to make all aspects of car and home ownership hassle-free and effortless, and we are starting with car insurance. Enabled by disruptive technologies, jerry.ai has built a one-click experience for saving money on car insurance. We have grown rapidly in the 15 months since our product launch, and our users love the product (4.5 rating in the App Store).

Jerry.ai was founded by serial entrepreneurs who previously built and scaled YourMechanic ("Uber for car repair," the nation's largest on-demand car repair company). We are backed by Y Combinator, SV Angel, FundersClub, and many other prominent Silicon Valley investors.

About the role:

We are looking for a Data Engineer who is passionate and motivated to make an impact by creating a robust and scalable data platform. In this role, you will own the company's core data pipeline, which powers our top-line metrics, and you will apply your data expertise to help evolve the data models across the data stack. You will architect, build, and launch highly scalable and reliable data pipelines to support the company's growing data processing and analytics needs. Your work will unlock business and user-behavior insights that fuel other functions such as Analytics, Data Science, and Operations.

Responsibilities:

  • Own the core company data pipeline, scaling data processing to keep pace with rapid data growth
  • Continuously evolve the data model and data schema to meet business and engineering needs
  • Implement systems that track data quality and consistency
  • Develop tools supporting self-service data pipeline management (ETL)
  • Tune SQL and MapReduce jobs to improve data processing performance
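To give a flavor of the data-quality tracking described above, here is a minimal sketch. It uses Python's built-in sqlite3 as a stand-in for a production Postgres warehouse, and all table and column names (`policies`, `policy_id`, `premium`) are hypothetical:

```python
import sqlite3

# Hypothetical example table with two deliberate quality issues:
# a NULL in a required column and a duplicate primary key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE policies (policy_id INTEGER, user_id INTEGER, premium REAL);
    INSERT INTO policies VALUES
        (1, 101, 120.0),
        (2, 102, NULL),
        (3, 103, 95.5),
        (3, 103, 95.5);
""")

def quality_report(conn, table, key, required_cols):
    """Count duplicate keys and NULLs in required columns."""
    dupes = conn.execute(
        f"SELECT COUNT(*) FROM (SELECT {key} FROM {table} "
        f"GROUP BY {key} HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    nulls = {
        col: conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        for col in required_cols
    }
    return {"duplicate_keys": dupes, "null_counts": nulls}

report = quality_report(conn, "policies", "policy_id", ["premium"])
print(report)  # {'duplicate_keys': 1, 'null_counts': {'premium': 1}}
```

In practice, checks like these would run as a scheduled job after each pipeline stage, with failures surfaced to the team rather than printed.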

Requirements:

  • 2+ years of data engineering experience within a rigorous engineering environment
  • Proficiency in SQL, especially the Postgres dialect
  • Expertise in Python for developing and maintaining data pipeline code
  • Experience with Apache Spark and the PySpark library (experience with AWS extensions of PySpark is a plus)
  • Experience with BI software (preferably Metabase or Tableau)
  • Experience with the Hadoop ecosystem (or similar)
  • Experience deploying and maintaining data infrastructure in the cloud (AWS preferred)
  • Comfort working directly with data analytics teams to bridge business requirements and data engineering

Locations:

  • Toronto
  • Boston
  • Palo Alto

Tags: AWS Data Analytics Data pipelines Engineering ETL Hadoop Metabase Pipelines PostgreSQL PySpark Python Spark SQL Tableau

Perks/benefits: Career development Startup environment

Region: North America
Country: United States
Category: Engineering Jobs
