Senior Data Engineer (Remote)

Boston, MA

Applications have closed
Who we are
Drizly is the world’s largest alcohol marketplace and the best way to shop for beer, wine, and spirits. Our customers trust us to be part of their lives – their celebrations, parties, dinners, and quiet nights at home. We are there when it matters – committed to life’s moments and the people who create them. We partner with the best retail stores in over 1,650 cities across North America to serve up the best buying experience. Drizly offers a huge selection and competitive pricing with a side of personalized content. That is what we do. Who we are is a different story.
We are more than just another tech company. There is an intellectual curiosity at Drizly – a desire to question, to understand, to figure it out. Bottom line, we solve it. We value not just the truth but the process of getting to the truth: we deliberate, decide, and then act. Most importantly, we care. We care about our customers. We care about our company. We care about our team. There will be long days and incredible challenges.
We are blazing a trail in an industry that hasn’t changed in nearly a century, and that doesn’t scare us (well, not all the time). And even when it does, it doesn’t stop us – it energizes us.
Do you see yourself here?  Read on.
Who you are
You thrive on making analysts’ lives better by implementing reliable data pipelines, automating processes, and providing technical support. You understand the importance of DataOps and DevOps principles and how they can be applied to a data team. You see yourself as a cross between a data engineer and a DevOps engineer. You work collaboratively, enjoy learning cutting-edge technologies, and keep your finger on the pulse of data engineering and the modern data stack.
You are comfortable working with data at the terabyte (or greater) scale and have a consistent track record of implementing batch and streaming solutions. You enjoy building and optimizing data pipelines with a focus on continuous improvement and software development best practices. You love working with data and have experience with data modeling, data access, and data storage techniques. Your toolkit includes SQL for transformation; Python for ETL and scripting; Kafka and/or Kinesis for streaming jobs; and the ability to troubleshoot distributed systems, optimize queries, and tune tables. You have experience working with REST APIs, SDKs, HTTP, FTP, and OAuth to facilitate the flow of data.
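To make the toolkit concrete, here is a minimal, illustrative ETL sketch in that spirit: Python for extract and transform, SQL for the final shaping. All names (`orders`, the city/total fields) are hypothetical examples, not Drizly’s actual schema or systems.

```python
import json
import sqlite3

# Extract: in production this payload would come from a REST API;
# it is inlined here so the sketch is self-contained.
payload = json.loads(
    '[{"id": 1, "city": "Boston", "total": "24.99"},'
    ' {"id": 2, "city": "Boston", "total": "10.50"},'
    ' {"id": 3, "city": "Denver", "total": "7.25"}]'
)

# Transform: normalize types before loading.
rows = [(r["id"], r["city"], float(r["total"])) for r in payload]

# Load: an in-memory SQLite database stands in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, city TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# SQL does the final aggregation, as it would in a warehouse model.
summary = conn.execute(
    "SELECT city, ROUND(SUM(total), 2) FROM orders GROUP BY city ORDER BY city"
).fetchall()
print(summary)
```

The same extract–transform–load shape scales up when SQLite is swapped for a cloud warehouse and the inline payload for a paginated API client.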
What the role is
The Senior Data Engineer on the Data Platform team will accelerate the rest of the analytics team and Drizly. From a data processing framework that supports billions of events to new data pipeline integrations, Drizly’s most critical decisions will rely on a scalable, well-managed foundation of data. This tech-savvy individual will be the account admin for the tools across our data stack, enabling analytics engineers to do their jobs more efficiently.
The Senior Data Engineer is a critical and foundational member of the Data Platform team, and will work closely with analytics engineering, data science, infrastructure and platform engineering, and security. In short, the Senior Data Engineer is a tech-savvy individual who will help define the building blocks for Drizly’s Data Platform capabilities, both internal and external to our organization.

In this role you will:

  • Administer tools across our data platform (Snowflake, Looker, Fivetran, Census, etc.)
  • Maintain and improve CI/CD pipelines for dbt and Looker
  • Lead DevEx/DevOps initiatives within the analytics organization
  • Use Terraform to stand up infrastructure (Snowflake, AWS resources)
  • Be the expert in our tools and go-to person for technical support on analytics (dbt, Snowflake, Fivetran, Dagster, Looker, Spectacles, Census, etc.)
  • Onboard new analytics engineers to our data platform, and create asynchronous training materials to scale yourself
  • Partner with the Analytics Engineering team to ensure the performance and reliability of our data warehouse
  • Create data frameworks to help with data quality, data integrity, data integration, and self-service
  • Build and maintain production data pipelines in Dagster
  • Be responsible for data quality, data monitoring, and pipeline health, ensuring incidents and outages are mitigated appropriately
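
The data-quality and monitoring responsibilities above can be sketched as a lightweight check framework. This is a hedged illustration, not Drizly’s actual tooling; the check names and columns (`user_id`, `order_id`) are hypothetical, mirroring common warehouse tests (not-null, unique) applied to plain Python dicts.

```python
def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

def run_checks(rows, checks):
    """Run (name, fn, column) checks; collect failures for alerting."""
    failures = {}
    for name, fn, column in checks:
        bad = fn(rows, column)
        if bad:
            failures[name] = bad
    return failures

rows = [
    {"order_id": 1, "user_id": 10},
    {"order_id": 2, "user_id": None},
    {"order_id": 2, "user_id": 11},
]
failures = run_checks(rows, [
    ("user_id_not_null", check_not_null, "user_id"),
    ("order_id_unique", check_unique, "order_id"),
])
print(failures)  # {'user_id_not_null': [1], 'order_id_unique': [2]}
```

In practice an orchestrator would run such checks after each pipeline step and page the on-call engineer when `failures` is non-empty.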

The Other Stuff:

  • Competitive Salary
  • One-on-one professional coaching with an external expert
  • Health, Dental and Vision Insurance
  • Flexible vacation policy
  • 401(K) Plan with Employer Match
  • Added perks
You do you.
Drizly is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
For Colorado-based roles: The salary range for this role is $110,500 per year – $130,000 per year. You will be eligible to participate in Drizly's bonus program, and may be offered an equity award, commissions, and other types of compensation. You will also be eligible for various benefits. More details about our company benefits can be found at the following link: https://drizly.com/careers.
BEFORE YOU APPLY...
In an effort to remove unconscious bias from our resume review process, we ask that you please remove all identifying information from your resume before you upload it on the next page. Identifying information includes your name, photos, LinkedIn URL, email address, and more. Drizly is committed to cultivating an inclusive environment where a diverse group of people can and want to do their best work, and that starts with our hiring practices. You must reside in the United States to be considered for this position. Additionally, at time of hire we will use E-Verify to confirm your eligibility to work in the United States.

