Senior Data Engineer (E-commerce)

New York, NY

Peloton Interactive, Inc.

Posted 4 weeks ago

Peloton is looking for a Senior Data Engineer to build our e-commerce data pipelines and strengthen the data integrity of our e-commerce data models. You will work with multiple teams of passionate and skilled data engineers, architects, and analysts responsible for building batch and streaming data pipelines that process data daily and support all of the e-commerce reporting and ERP integration needs across the organization.

Peloton is a cloud-first engineering organization, with all of our data infrastructure in AWS leveraging EMR, AWS Glue, Redshift, S3, and Spark. You will interact with many business teams, including finance, analytics, and enterprise systems, and partner with them to scale Peloton’s e-commerce data infrastructure for future strategic needs.

RESPONSIBILITIES

Help build a culture of quality

  • Assume technical responsibility for new services and functionality, look out for opportunities for platform improvement, and work with engineers to scale our production systems.
  • Identify and lead technical initiatives to build clean, robust, and performant data applications.
  • Contribute to the adoption of software architecture and new technologies.

Mentorship

  • Lead, coach, pair with, and mentor e-commerce data software engineers.
  • Mentor data engineers from diverse backgrounds to nurture a culture of ownership, learning, automation, re-use, and engineering efficiency through the use of software design patterns and industry best practices.
  • Engage in code reviews helping maintain our coding standards.
  • Be a leader within your team and the organization.

Facilitate the on-time completion of large projects

  • Understand the data needs of stakeholders across multiple business verticals, including Business Intelligence, Finance, and Enterprise Systems.
  • Develop the vision and map the strategy to provide proactive solutions and enable stakeholders to extract insights and value from data.
  • Understand end-to-end data interactions and dependencies across complex data pipelines and data transformations, and how they impact business decisions.
  • Design best practices for big data processing and data modeling.
  • Lead architecture meetings and technical discussions with a focus on reaching consensus and best-practice solutions.
  • Break down tasks for other engineers and offer guidance to other engineers on the team when they are blocked.
  • Achieve on-time delivery without compromising quality.

QUALIFICATIONS

  • 8+ years of relevant experience, including e-commerce and data engineering.
  • Good active listening skills and the ability to empathize with stakeholders and other engineers.
  • Experience in a fast-paced, high-growth environment working with deadlines and milestones.
  • Comfortable with ambiguity; you enjoy figuring out what needs to be done.
  • Senior-level proficiency in at least one modern programming language, and the ability to learn anything you don't already know to get the job done.
  • Excellent time management skills.
  • A solid understanding of clean data design principles.
  • Experience mentoring engineers with a team-focused mentality.
  • Excellent knowledge of databases such as PostgreSQL and Redshift.
  • Experience with Git, GitHub, Jira, and Scrum.
  • 2+ years building data warehouses and data pipelines, or 3+ years in data-intensive engineering roles.
  • Experience with big data architectures and data modeling to efficiently process large volumes of data.
  • Background in ETL and data processing; you know how to transform data to meet business goals.
  • Experience developing large data processing pipelines on Apache Spark.
  • Strong understanding of SQL and working knowledge of using it (preferably PostgreSQL and Redshift) for various reporting and transformation needs.
  • Experience with distributed systems, CI/CD tools (e.g., Jenkins), and containerized applications (e.g., Kubernetes).
  • Familiarity with at least one of the following programming languages: Python, Java.
  • Comfortable with the Linux operating system and command-line tools such as Bash.
  • Familiarity with REST APIs for accessing cloud-based services.
  • Excellent communication, adaptability, and collaboration skills.
  • Experience working in an Agile methodology and applying Agile practices to data engineering.

BONUS POINTS

  • Familiarity with the AWS ecosystem, including RDS, Redshift, Glue, Athena, etc.
  • Experience with Apache Hadoop, Hive, Spark, and PySpark.

ABOUT PELOTON:

Founded in 2012, Peloton is a global interactive fitness platform that brings the energy and benefits of studio-style workouts to the convenience and comfort of home. We use technology and design to bring our Members immersive content through the Peloton Bike, the Peloton Tread, and Peloton Digital, which provide comprehensive, socially-connected fitness offerings anytime, anywhere. We believe in taking risks and challenging the status quo by continuously innovating and improving. Our team is made up of passionate brand ambassadors, and we know that together, we go far.

Headquartered in New York City, with offices, warehouses and retail showrooms in the US, UK and Canada, Peloton is changing the way people get fit. Peloton has been named to many prestigious industry lists, including Fast Company's Most Innovative Companies, CNBC's Disruptor 50, Crain's New York Business' Tech25 and Fast50, as well as TIME's Genius Companies. Visit www.onepeloton.com/careers to learn more about joining our team.

Job tags: AWS Big Data Business Intelligence Distributed Systems Engineering ETL Finance Hadoop Java Kubernetes Linux PySpark Python Redshift Scrum Spark SQL