Senior Data Engineer, Data Platform (CartaX)
Seattle, Washington, United States
The Company You’ll Join
At Carta we create owners and make private markets liquid.
We live in a world where some people live on the equity stack and enjoy exponential wealth growth and preferential tax treatment; others live on the debt stack and may work their entire lives for a company and retire only with the cash they’ve managed to save from their paychecks. Our contribution to solving the wealth inequality problem is moving people from the debt stack (payroll) to the equity stack. By making it as easy to issue equity to employees as it is to put them on payroll, we can create more owners.
At Carta, we are helpful, transparent, fair, and kind. We are relentless executors, unconventional thinkers, and masters of our craft.
To learn more, here is what one of our investors wrote about leading our Series F.
The Team You’ll Work With
The Data team at Carta is becoming central to decision-making across the company. We believe our unique data sets set us apart and help us succeed as a data-driven company. Team members work to understand and make sense of data while partnering with product and business teams to drive direction. The team is composed of professionals in Data Science/Machine Learning, advanced Data Analytics, and Data Engineering. We partner with Cartans across the company to get our work done, we constantly think about how we can improve, and we like to develop new product ideas based on data.
The Problems You’ll Solve
As a member of this team, you will support CartaX, our private equity trading platform, by building and scaling pipelines that make data accurate, accessible, and secure. You will build systems that let CartaX team members answer important questions in a self-service manner, and let analysts and data scientists quickly analyze and prototype new ideas. You will be primarily responsible for ensuring the CartaX team has the data they need when they need it. Responsibilities include:
- Build resilient data pipelines based on internal and external data sources
- Implement data security practices to support CartaX and meet the requirements of Compliance & Legal and InfoSec
- Manage Airflow, dbt, and Looker instances and other tooling dedicated to CartaX
- Own all data ingestion pipelines from OLTP, Kafka, and NoSQL data sources
- Partner with the rest of the team to onboard/offboard users and continue building out security infrastructure to keep our data safe
- Partner with the rest of the team on prototyping and building scalable products driven by the data team
- Constantly identify opportunities for providing self-service tooling to our internal partners
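To make the "resilient data pipelines" bullet above concrete, here is a minimal sketch of one common building block: retrying a flaky extract step with exponential backoff before loading results into the warehouse. This is an illustrative assumption, not Carta's actual code; the function names (`extract_orders`, `load`) and the sample data are hypothetical.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error to the scheduler
            time.sleep(base_delay * 2 ** attempt)

def extract_orders():
    # Hypothetical extract from an OLTP source or external API.
    return [{"order_id": 1, "amount": 100}, {"order_id": 2, "amount": 250}]

def load(rows):
    # Hypothetical load into the warehouse (e.g. a Redshift COPY step).
    return len(rows)

rows = with_retries(extract_orders)
print(f"loaded {load(rows)} rows")
```

In practice, an orchestrator like Airflow provides retries and scheduling around steps like these, but the same idea (isolate each step, retry transient failures, fail loudly otherwise) is what keeps a pipeline resilient.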
The Impact You’ll Have
The CartaX equity trading platform is an industry-changing product launched in January 2021. You will have the opportunity to define and build the pipelines and infrastructure that support a high-visibility product. In addition, you will work across multiple teams to provide important data sets that drive product development and business decisions.
About You
We are looking for candidates with a minimum of 7 years of work experience. Successful candidates in this role balance fast delivery against building for scale. You don't settle for the status quo; you look for ways to improve how we do things. You can explain your work to both technical and business users, and you are a good partner to your team and your customers. Building relationships is a priority. While our tool stack (Airflow, dbt, Redshift, Looker) is a good start, you stay current on the latest technology we could use. You focus on automation and self-service. You are also excited to build new products, from idea all the way to execution. Your previous experience includes:
- Experience building data pipelines and API integrations using Python, Java or Scala
- Experience with Data Lake and Data Warehouse concepts and associated technologies (S3, Redshift, etc.)
- Experience with distributed data systems such as Airflow, Spark, and related technologies
- Strong SQL skills
- Familiarity with data visualization tools (Looker, Tableau, etc.)
- Ownership of data pipelines from internal and external sources ending in the data warehouse
- Management of user roles and access to the data, including PII
- Implementing solutions such as Amplitude to allow for faster and more accurate reporting
We are an equal opportunity employer and are committed to providing a positive interview experience for every candidate. If accommodations due to a disability or medical condition are needed, connect with us via email at recruiting@carta.com. As a company, we value fairness, helpfulness, transparency, and leadership, and we build our teams around these values. Check out our careers page to get to know us better as you think about your next step at Carta.