Senior Data Engineer

Remote, USA

Applications have closed

Density

Density shows you how people use your space and helps you improve your workplace ROI.

At Density, we build the infrastructure necessary to measure how people use space. The result of distributing this platform is lower emissions, less waste, better access, safer buildings, and better-designed cities. It is a long-term pursuit and one we could use your help achieving.
What is this about?
We’re looking for a savvy Data Engineer, or as DBT Labs calls the role an Analytics Engineer, to join a growing team of data heads. The platform is already established: our stack consists of Fivetran, Snowflake, and Tableau. We containerize our data pipelines, run them on AWS ECS, and continuously deploy them using GitHub Actions. We operate mainly under ELT. This hire will be responsible for building out the “T” in Snowflake, with DBT, SQL, and Python as their bread and butter, to support ongoing analytics initiatives from other parts of the company such as Customer Success, Product, Marketing, and Data Science. The candidate should be adept at working with cross-functional team members and leaders, recommending appropriate data solutions in a self-directed manner. The right candidate shouldn’t just keep up with the pace of growth but should get ahead of the curve to take Data & Analytics to the next level at Density.
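
To make that “T” concrete, here is a minimal sketch of the kind of DBT model this role would own in Snowflake. The model name, source, and columns are hypothetical (it assumes a `product_events` source declared in a DBT sources file), so treat it as an illustration rather than an excerpt from our project.

```sql
-- models/staging/stg_product_events.sql (hypothetical model name)
-- Raw rows loaded by Fivetran into Snowflake get lightly cleaned here;
-- DBT compiles this file and creates the view in the warehouse.
{{ config(materialized='view') }}

select
    event_id,
    user_id,
    lower(event_name)      as event_name,
    cast(event_ts as date) as event_date
from {{ source('product_events', 'events') }}  -- hypothetical Fivetran-loaded source
where event_id is not null
```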

The ideal candidate will have:

  • 3 to 5 years of work experience or the equivalent
  • Strong SQL is required - this is the main bread and butter, and a solid hire should know how to write window functions in standard ANSI SQL (see the first sketch after this list)
  • Strong DBT skill is required - the candidate should know how to use the DBT CLI to create a DBT project, run DBT models, materialize results in different ways, create table dependencies between models, and have an understanding of how DBT’s Python API works (see the second sketch after this list)
  • Ability to use version control like `git` is required - the candidate should be comfortable with the basics of `git` and everyday GitHub workflows
  • Reasonably solid Python coding ability is highly desirable - knowing what context managers and decorators are can only help, even if magic methods aren’t second nature
  • Reasonably solid Tableau skill is highly desirable
  • Familiarity with data warehouses such as Snowflake or BigQuery is helpful - we use Snowflake, so hands-on Snowflake experience is definitely preferred
  • Familiarity with Docker is helpful 
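
The SQL bullet above mentions window functions; here is a small sketch in standard ANSI SQL of the sort of thing we mean (it runs unmodified on Snowflake). The `orders` table and its columns are hypothetical and exist only for illustration.

```sql
-- Rank each customer's orders by recency and keep a per-customer running total.
select
    customer_id,
    order_id,
    order_date,
    amount,
    row_number() over (
        partition by customer_id
        order by order_date desc
    ) as order_recency_rank,
    sum(amount) over (
        partition by customer_id
        order by order_date
        rows between unbounded preceding and current row
    ) as running_revenue
from orders
```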
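
For the DBT bullet, a companion sketch of how model dependencies and materializations fit together: `ref()` is what builds DBT’s dependency graph, and the `config()` block controls how the result is materialized in Snowflake. Model names and columns are again hypothetical.

```sql
-- models/marts/fct_customer_orders.sql (hypothetical downstream model)
-- ref() declares a dependency on a staging model, so DBT builds that model first;
-- materializing as a table persists the aggregate for BI tools like Tableau.
{{ config(materialized='table') }}

select
    customer_id,
    min(order_date) as first_order_date,
    count(*)        as lifetime_orders,
    sum(amount)     as lifetime_revenue
from {{ ref('stg_orders') }}  -- hypothetical upstream staging model
group by customer_id
```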

What we bring:

  • A team hailing from places like Apple, Amazon, LinkedIn, Stripe, Cisco Meraki, HashiCorp, WeWork, NASA, & beyond.
  • $227 million raised from investors like Kleiner Perkins, Founders Fund, and Upfront Ventures.
  • A work environment full of fun, smart, talented, dedicated, and truly kind teammates.
  • Our values are to be humble, seek feedback, and always solve the fundamental problem. These values are hard to live up to, but we aspire to them daily.
  • Excellent benefits including medical, dental, vision, mental and reproductive health, 401(k), equity, Flex Fridays, remote-friendly work, unlimited and mandatory PTO, and more.

Tags: APIs AWS BigQuery Data pipelines Docker ECS ELT Fivetran Git GitHub Pipelines Python Snowflake SQL Tableau

Perks/benefits: Health care Unlimited paid time off

Regions: Remote/Anywhere North America
Country: United States
Category: Engineering Jobs
