Senior Data Engineer
New York City (Remote)
Applications have closed
Narrativ
About Narrativ
Narrativ is building the commerce and payments infrastructure for the Creator Economy. We make it simple for creators to earn a full-time income selling products from brands they love.
50 million people work in the Creator Economy today, but 95% don’t make enough to pay their monthly rent. Creators feel more pressure than ever to keep adding new revenue streams, from fan subscriptions to tips, and churn out a constant stream of new content just to get by. Narrativ gives creators the opportunity to earn meaningful, reliable income in commerce without needing to have millions of fans.
Narrativ is changing this status quo: our platform gives creators commerce tools, not affiliate tools, to manage their businesses. Creators who were making less than $25K a year through affiliate programs are now earning $100K+ through commerce on Narrativ. These creators don't need millions of followers; the majority of creators earning a full-time income on Narrativ have between 50K and 300K subscribers.
Narrativ has been recognized as a World Economic Forum Technology Pioneer and one of Fast Company's World's Most Innovative Companies alongside giants like Google, Microsoft, and Slack. Narrativ is committed to building a diverse team in a technology ecosystem that is anything but: the team is 42% women and 60% people of color today.
Do you share our vision for making the creator economy dream real for millions of people? Come join our team!
The Position
You will be a software engineer on the Data Engineering team. You will work with stakeholders, including engineers from other teams, product managers, and solutions engineers, to design, build, and improve the data pipelines responsible for ingesting, transforming, and exporting data to and from both internal and external systems. In addition to internal customers such as sales, product, machine learning, and other application teams, we also send data to external customers. You will build and maintain the APIs for data access and validation used by other teams. You will work to improve data quality, strengthen trust in our data, and help our organization become more data-driven.
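As a rough illustration of the ingest-transform-export work described above, here is a minimal sketch in Python (the team's primary language). Every name here — the event fields, the functions — is hypothetical, invented for illustration rather than taken from Narrativ's actual systems:

```python
import json
from datetime import datetime, timezone


def transform_event(raw: dict) -> dict:
    """Normalize a raw click event into the shape downstream consumers expect."""
    return {
        "event_id": raw["id"],
        "creator_id": raw["creator"],
        # Raw events carry a Unix timestamp; export ISO 8601 in UTC.
        "occurred_at": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        # Normalize revenue to a float rounded to cents.
        "revenue_usd": round(float(raw.get("revenue", 0.0)), 2),
    }


def run_pipeline(raw_events: list[dict]) -> list[str]:
    """Ingest raw events, transform each, and export them as JSON lines."""
    return [json.dumps(transform_event(e), sort_keys=True) for e in raw_events]
```

In production this step would sit behind a scheduler and read from and write to real systems, but the shape — ingest, normalize, export — is the same.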
Requirements
- 5+ years of software development experience
- 3+ years of experience designing, building, and maintaining enterprise data pipelines and/or warehouses
- Demonstrable knowledge of big data databases (e.g., Snowflake or Bigtable) as well as SQL
- Experience with message processing (e.g., Kafka, RabbitMQ)
- Experience working closely with product teams to prioritize the best solutions to the largest problems
- Reliable organization and communication skills, with follow-through on verbal and written commitments
- A persistent approach to problem-solving and the ability to see solutions through to completion, even in the face of complexity or unknowns, with a proactive mindset that drives you to pursue answers rather than wait for them to come to you
- Attention to detail and the ability to identify ambiguities in specifications
- Exceptional written and verbal communication skills, especially when explaining the trade-offs behind technical decisions to non-technical colleagues
- Flexibility to work and maintain focus in an evolving environment.
- A collaborative personality and a commitment to helping others.
Narrativ Technology and Data Stack
Our data is centralized in a Snowflake warehouse. We replicate data, mainly using Fivetran, from our transactional databases, Google Analytics, Salesforce, and Neptune (a graph database). Our events are also streamed into Snowflake after processing through an Apache Storm cluster that uses DynamoDB for persistence. We use dbt to build our ELT DAGs and publish dashboards in Looker. We use libraries like Marshmallow and Great Expectations for data validation.
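The stack above leans on Marshmallow and Great Expectations for data validation. As a dependency-free sketch of the same idea — checking field presence, types, and ranges on a record before loading it — one might write something like the following; the record shape and field names are hypothetical:

```python
def validate_order(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is clean."""
    errors = []
    # Presence checks: every order must carry these fields.
    for field in ("order_id", "creator_id", "amount_usd"):
        if field not in record:
            errors.append(f"missing field: {field}")
    # Type and range check on the monetary amount.
    amount = record.get("amount_usd")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        errors.append("amount_usd must be a non-negative number")
    return errors
```

A library like Marshmallow expresses the same checks declaratively as a `Schema` of typed fields, and Great Expectations applies analogous expectations to whole tables rather than single records.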
Narrativ’s systems are implemented as a modern microservices architecture running on Linux servers hosted in AWS. We use Kubernetes to manage our containers and Flask to build our web services, with interactivity in our web interfaces built in React. We run Linux, in particular the Debian, Ubuntu, and Alpine distros.
Our favorite programming languages are Python 3, Scala, Go, Elixir, and, of course, TypeScript and JavaScript. We stash our code in GitHub.
We test each language with an appropriate unit testing tool - JUnit, PyTest, ScalaTest, ExUnit, and Jasmine. We use Jenkins to run our builds and tests.
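Since PyTest drives the Python test suites mentioned above, a representative test file might look like the sketch below; PyTest collects plain functions named `test_*` and uses bare `assert` statements. The payout helper is a hypothetical example, not actual Narrativ code:

```python
# test_payout.py -- run with `pytest test_payout.py`


def creator_payout(gross_usd: float, platform_fee_pct: float) -> float:
    """Hypothetical helper: the creator's share after the platform fee, rounded to cents."""
    return round(gross_usd * (1 - platform_fee_pct / 100), 2)


def test_payout_takes_fee():
    # A 10% fee on $100 leaves the creator $90.
    assert creator_payout(100.0, 10.0) == 90.0


def test_zero_fee_is_identity():
    # With no fee, the creator keeps the full gross amount.
    assert creator_payout(55.5, 0.0) == 55.5
```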
We also use Airflow, DataDog, Fivetran, Jira, LaunchDarkly, and StoryBook.
Tags: Airflow APIs AWS Big Data Bigtable Data pipelines Elixir ELT Engineering FiveTran Flask GitHub JavaScript Jira Kafka Kubernetes Linux Looker Machine Learning Microservices Pipelines Python React Scala Snowflake SQL Testing TypeScript
Perks/benefits: Team events