Marketing Technology Data Engineer
Novato, California, United States
Take-Two Interactive Software, Inc.
Take-Two Interactive Software is a leading game publisher, creating games through its labels: Rockstar Games, 2K, Private Division, and Social Point.
Marketing Technology Data Engineer - Ingestion
Location: Take-Two (Novato, CA)
Job Category: Marketing Technology
Reporting into: Director, Marketing Technology
Job Summary
The Marketing Technology team works alongside the Labels (2K, Private Division, Rockstar Games, Social Point) Marketing, Community and PR teams, managing and supporting the underlying technology stack used for Paid, Owned and Earned Marketing activity.
The primary purpose of the role is to enable and further empower these teams to do what they do best – produce outstanding content. This means leveraging technology to help streamline their processes, alleviate bottlenecks, and provide data for meaningful metrics.
Working as part of the MarTech team, the Marketing Technology Data Engineer (MTDE) must have a broad range of skills. Responsibilities include developing robust end-to-end data pipelines that turn disparate data into consistent, usable data. The role requires an understanding of big data pipelines and experience operating them without the need for manual intervention.
The role requires a fast-paced and agile approach, with a strong focus on delivery and enabling change within a creative and rapidly changing environment. It’s demanding, but incredibly rewarding: you can directly see the results of your efforts reflected in the teams you work with, while using leading-edge technology and marketing strategies.
A successful MTDE will:
Within the first 90 days: Learn the inventory of current data pipelines and develop familiarity with the purpose and audience each serves. Take ownership of the operational management of several pipelines across the portfolio.
Within 180 days: Take full ownership of several projects, driving them toward completion, including implementing a standard approach to inbound data pipelines that delivers reliable data to the end systems.
Essential Functions
- Develop and manage stable, scalable data pipelines that cleanse, structure and integrate disparate big data sets into a readable and accessible format for end-user analyses and targeting, using stream and batch processing architectures (see the sketch after this list).
- Develop and improve the current data architecture, data quality, monitoring and data availability.
- Collaborate with the labels to incorporate new data sources into the growing data model.
- Develop a data quality framework to ensure delivery of high-quality data and analyses to stakeholders.
- Take overall responsibility for maintaining the enterprise-wide data dictionary.
- Define and implement monitoring and alerting policies for data solutions.
- Work with business customers to gather requirements and gain a deep understanding of varied datasets.
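As a rough illustration of the kind of work these functions describe, the sketch below shows a minimal batch ingestion step in Python that cleanses a raw export and applies a simple quality gate before loading. The file names, column names, and threshold are hypothetical and are not part of the posting; writing Parquet assumes a library such as pyarrow is available.

```python
# Hypothetical sketch of a batch ingestion step with a basic quality gate.
# Source file, columns, and threshold are illustrative only.
import pandas as pd


def load_raw(path: str) -> pd.DataFrame:
    """Read a raw CSV export from an upstream marketing source."""
    return pd.read_csv(path)


def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names and drop rows missing a user identifier."""
    df = df.rename(columns=str.lower)
    return df.dropna(subset=["user_id"])


def quality_check(df: pd.DataFrame, max_null_ratio: float = 0.05) -> None:
    """Fail the run if too many values are missing, so bad data never reaches end systems."""
    null_ratio = df.isna().mean().max()
    if null_ratio > max_null_ratio:
        raise ValueError(f"Null ratio {null_ratio:.2%} exceeds {max_null_ratio:.0%}")


if __name__ == "__main__":
    frame = cleanse(load_raw("campaign_export.csv"))
    quality_check(frame)
    # Hand off to the warehouse load step (requires pyarrow or fastparquet).
    frame.to_parquet("campaign_clean.parquet", index=False)
```

In practice a step like this would be scheduled and monitored by an orchestrator such as Digdag, so failures surface through alerting rather than manual checks.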
Desired Skills and Experience
- Experience with large data sets and distributed computing (Hive/Hadoop)
- 4+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
- 4+ years of hands-on experience with the AWS technology stack, including Redshift, RDS, S3, EMR, Treasure Data, Snowflake, or similar solutions built around Hive/Spark.
- 4+ years of hands-on experience with advanced SQL (analytical functions), including writing and optimizing highly efficient queries (an illustrative example follows this list).
- Proven track record of delivering big data solutions – batch and real-time.
- Ability to design, develop and automate scalable ETL and reporting solutions that transform data into accurate and actionable business information.
- Experienced in testing and monitoring data for anomalies and rectifying them.
- Knowledge of software coding practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
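To make the SQL expectation above concrete, here is a small example of the kind of analytical (window-function) query the posting refers to, written as Presto-style SQL embedded in Python. The schema, table, and host names are invented for illustration, and running the query via PyHive is an assumption rather than part of the role's stated stack.

```python
# Illustrative Presto-style SQL using analytical (window) functions.
# The marketing.daily_channel_spend table and its columns are hypothetical.
DAILY_CHANNEL_SPEND = """
SELECT
    event_date,
    channel,
    spend,
    RANK() OVER (PARTITION BY event_date ORDER BY spend DESC) AS spend_rank,
    SUM(spend) OVER (
        PARTITION BY channel
        ORDER BY event_date
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
    ) AS rolling_7d_spend
FROM marketing.daily_channel_spend
"""

# One possible way to run it, assuming a reachable Presto endpoint and the PyHive client:
# from pyhive import presto
# cur = presto.connect(host="presto.internal", port=8080).cursor()
# cur.execute(DAILY_CHANNEL_SPEND)
# rows = cur.fetchall()
```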
Technologies you will work with
- Presto SQL
- Hive SQL
- Digdag
- Embulk
- Python
Advantageous experience
- Hands on experience with Treasure Data
- Developing solutions using Docker
- Experience with CDPs or DMPs
- GDPR / CCPA compliance experience
- Experience with CRM platforms such as Salesforce
- Familiarity with Marketing Technologies
- Experience working in an agile environment
- Previous gaming industry experience
Perks/benefits: Flex vacation