ETL Developer
Canada
StackAdapt
StackAdapt is a top-ranking programmatic advertising platform used by the most exceptional digital marketers. We have an exciting opportunity in the newly formed Enterprise Data Office (EDO), whose mandate is to serve the business leaders and data stakeholders at StackAdapt with trusted official reporting and governed self-service analytics. Reporting to the Director of the Enterprise Data Office, the ETL Developer will be responsible for end-to-end data pipeline development: ingesting raw data into the centralized data lake and transforming it into business-friendly data models in the Enterprise Data Warehouse that can be easily consumed by the business through Business Intelligence applications and other downstream processes.
This role will assess the data requirements provided by the Manager of Business Data Analysis and collaborate with other team members and the source data owners to design and build automated batch or near-real-time ingestion pipelines that bring the necessary data from the source into the centralized data lake; sources may span a variety of formats and mediums, such as relational and non-relational databases, flat files, and applications. Subsequently, under the guidance of the Data Architect, who will provide the target data models, the ETL Developer will build the transformation pipelines necessary to materialize various types of data models in the Enterprise Data Warehouse. The ETL Developer will work collaboratively with the Data Architect on the deployment, automation, and orchestration of these newly developed pipelines, and with the BI Engineers to ensure these data models connect smoothly into the BI semantic layer so they can be leveraged easily by business users. Finally, the ETL Developer will also assist the Data Architect and the Manager of Business Data Analysis in maintaining and supporting the data pipeline operations as well as the overall EDO environments.
StackAdapt is a Remote-First company. We are open to candidates located anywhere in Canada for this position.
What you'll be doing:
- Build reliable data ingestion pipelines to extract data from a variety of sources, including databases (e.g., RDBMS, NoSQL, file stores), applications (via API), and flat files, into the Data Lake with appropriate metadata tagging
- Build data transformation pipelines to transform the raw data and materialize the data models designed by the Data Architect into the Enterprise Data Warehouse
- Deploy developed pipelines into production in adherence with deployment best practices to ensure a seamless rollout
- Orchestrate data pipelines via batch, near-real-time, or real-time operations depending on requirements to ensure a seamless and predictable execution
- Support the day-to-day operation of the EDO pipelines as well as the EDO environment by monitoring alerts and investigating, troubleshooting, and remediating production issues
- Work with members of the Enterprise Data Office to ensure the stability and optimization of the data pipelines to meet the required SLA
What you’ll bring to the table:
- Minimum 2 years of experience building and deploying data pipelines
- Hands-on experience with at least one cloud-based data warehouse (e.g., Snowflake, BigQuery); experience with big data formats (e.g., Delta Lake, Parquet, Avro) would be an asset
- Good knowledge of relational and dimensional data models; able to interpret and understand physical data models and apply data rules and constraints as required to create data pipelines; prior data warehousing architecture knowledge would be an asset
- Hands-on experience building ETL/ELT data pipelines via custom-coded scripts (e.g., Spark, Python, Java, SQL stored procedures) or via integration platforms (e.g., PowerCenter, DataStage, Talend), following standards and best practices such as coding and naming conventions, version control, code promotion, testing, and deployment
- Strong verbal and written communication skills as well as excellent collaboration skills are required to participate and engage in highly technical discussions regarding data solutions
- Demonstrated ability to self-learn and master new data tools, platforms, and technologies within a short ramp-up period under conditions of limited formal training and coaching
- Experience with data orchestration in Apache Airflow, Cron, or other schedulers is a strong asset
- Knowledge of microservices architecture and container technology such as Kubernetes and Docker would be a definite asset
- Experience managing data platforms via infrastructure-as-code (e.g., Terraform) would be a strong asset
StackAdapters Enjoy:
- Competitive salary
- 401k/RRSP matching
- 3 weeks vacation + 3 personal care days + 1 Culture & Belief day + birthdays off
- Access to a comprehensive mental health care platform
- Health benefits from day one of employment
- Work-from-home reimbursements
- Optional global WeWork membership for those who want a change from their home office
- Robust training and onboarding program
- Coverage and support of personal development initiatives (conferences, courses, etc)
- Access to StackAdapt programmatic courses and certifications to support continuous learning
- Mentorship opportunities with industry leaders
- An awesome parental leave policy
- A friendly, welcoming, and supportive culture
- Our social and team events!
#LI-KR1
StackAdapt is a diverse and inclusive team of collaborative, hardworking individuals trying to make a dent in the universe. No matter who you are, where you are from, who you love, follow in faith, disability (or superpower) status, ethnicity, or the gender you identify with (if you’re comfortable, let us know your pronouns), you are welcome at StackAdapt. If you have any requests or requirements to support you throughout any part of the interview process, please let our Talent team know.
About StackAdapt
We've been recognized for our diverse and supportive workplace, high-performing campaigns, award-winning customer service, and innovation. We've been awarded:
- Ad Age Best Places to Work 2024
- G2 Top Software and Top Marketing and Advertising Product for 2024
- Campaign's Best Places to Work 2023 for the UK
- 2024 Best Workplaces for Women and in Canada by Great Place to Work®
- #1 DSP on G2 and leader in a number of categories including Cross-Channel Advertising
#LI-Remote