Staff Data Engineer

Remote, Americas or EMEA


About us:

RevenueCat makes building, analyzing, and growing mobile subscriptions easy. We launched as part of Y Combinator's summer 2018 batch and today handle more than $1.2B of in-app purchases annually across thousands of apps.

We are a mission-driven, remote-first company building the standard for mobile subscription infrastructure. Top apps like VSCO, Notion, and ClassDojo count on RevenueCat to power their subscriptions at scale.

Our 50 team members (and growing!) are located all over the world, from San Francisco to Madrid to Taipei. We're a close-knit, product-driven team, and we strive to live our core values: Customer Obsession, Always Be Shipping, Own It, and Balance.

We’re looking for a Staff Data Engineer to join our newly formed data engineering team. As a Staff Engineer, you will lead the effort to design, architect, and support our entire data platform, and you will play a key role in defining how our systems evolve as we scale.

About you:

  • You have 8+ years of software engineering experience.
  • You have 5+ years of experience working with and building enterprise-scale data platforms.
  • You have an excellent command of at least one mainstream programming language, plus some experience with Python.
  • You have helped define the architecture, data modeling, tooling, and strategy for large-scale data processing systems, data lakes, or warehouses.
  • You have used workflow management tools (e.g., Airflow, AWS Glue) and have experience maintaining the infrastructure that supports them.
  • You have hands-on experience building CDC-based (Change Data Capture) ingestion pipelines for highly transactional databases; experience with Postgres and logical replication is a plus (a minimal sketch follows this list).
  • You have a strong understanding of modern data processing paradigms and tooling, and of OLTP and OLAP database fundamentals.
  • Experience with dimensional modeling and reporting tools like Looker is a plus, but not required.
  • You have experience evolving batch architectures into streaming/real-time data pipelines.
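
For a sense of the CDC work referenced above, here is a minimal sketch of tailing a Postgres logical-replication slot with psycopg2; the DSN, slot name, and wal2json output plugin are illustrative assumptions, not a description of RevenueCat's actual pipeline:

    import psycopg2
    import psycopg2.extras

    # Replication-capable connection; the DSN is a placeholder.
    conn = psycopg2.connect(
        "dbname=app user=replicator",
        connection_factory=psycopg2.extras.LogicalReplicationConnection,
    )
    cur = conn.cursor()

    # Stream decoded changes from an existing slot (assumed to use wal2json).
    cur.start_replication(slot_name="cdc_example", decode=True)

    def consume(msg):
        # Each message carries one JSON change set; a real pipeline would
        # publish it to a stream or queue rather than print it.
        print(msg.payload)
        # Acknowledge progress so Postgres can recycle WAL segments.
        msg.cursor.send_feedback(flush_lsn=msg.data_start)

    cur.consume_stream(consume)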

Responsibilities:

  • Help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time
  • Help the team apply software engineering best practices to our data pipelines (testing, data quality, etc.)
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, using SQL and AWS technologies (see the workflow sketch after this list)
  • Clearly define data ownership & responsibility, audit and compliance framework, and general security of the data lake
  • Partner with product managers, data scientists, and engineers across teams to solve problems that require data 
  • Drive the evolution of our data platform to support our data processing needs and provide frameworks and services for operating on the data 
  • Analyze, debug and maintain critical data pipelines
  • Work with our core infrastructure team to create and improve frameworks that allow derived data to be used in production environments
  • Contribute to standards that improve developer workflows, recommend best practices, and help mentor junior engineers on the team to grow their technical expertise
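
As a rough sketch of the workflow-management side of that ETL work, here is a daily Airflow DAG with a single extract-and-load task; the DAG id, schedule, and callable are hypothetical placeholders:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load(**context):
        # Placeholder body: pull one day's worth of events from the source
        # and load it into the warehouse; "ds" is the run's logical date.
        print(f"Loading partition {context['ds']}")

    with DAG(
        dag_id="daily_purchases_example",  # hypothetical name
        start_date=datetime(2022, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ):
        PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)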

In the first month, you'll:

  • Get up to speed on our architecture and learn the problem domain
  • Understand our current data requirements and where things stand today
  • Gain understanding of our current data pipelines

Within the first 3 months, you'll:

  • Work with your team to help design and architect our data platform
  • Work with product managers, engineers, and data scientists to come up with a plan and build consensus on the approach
  • Analyze, debug and maintain critical data pipelines

Within the first 6 months, you'll:

  • Develop a thorough understanding of our data platform
  • Know all the major components of our system and be able to debug complex issues
  • Be able to detect bottlenecks, profile the system, and come up with enhancements
  • Start participating in hiring for the company

Within the first 12 months, you'll:

  • Thoroughly understand our data processing needs and be able to spec, architect, and build solutions accordingly
  • Mentor other engineers joining the team

What we offer:

  • $218,000 to $245,000 USD salary regardless of your location
  • Competitive equity in a fast-growing, Series B startup backed by top tier investors including Y Combinator
  • 10-year window to exercise vested equity options
  • Fully remote work environment that promotes autonomy and flexibility
  • Suggested 4 to 5 weeks of time off to recharge and focus on mental, physical, and emotional health
  • $2,000 USD to build your personal workspace 
  • $1,000 USD annual stipend for your continuous learning and growth

 
