Marketing Technology Data Engineer

Novato, California, United States

Applications have closed

Take-Two Interactive Software, Inc.

Take-Two Interactive Software is a leading game publisher, creating games through its labels: Rockstar Games, 2K, Private Division, and Social Point.

Location:  Take-Two (Novato, CA) 

Job Category:  Marketing Technology 

Reporting into:   Director, Marketing Technology 

Job Summary 

The Marketing Technology team works alongside the Labels' (2K, Private Division, Rockstar Games, Social Point) Marketing, Community, and PR teams, managing and supporting the underlying technology stack used for Paid, Owned, and Earned Marketing activity. 

The primary purpose of the role is to enable and further empower these teams to do what they do best – produce outstanding content. This means leveraging technology to help streamline their processes, alleviate bottlenecks, and provide data for meaningful metrics. 

Working as part of the MarTech team, the Marketing Technology Data Engineer (MTDE) must have a broad range of skills. Responsibilities include developing robust, end-to-end data pipelines that turn disparate data into consistent, usable data. This role requires an understanding of big data pipelines as well as the activation endpoints commonly used by marketing technology.

The role requires a fast-paced and agile approach, with a strong focus on delivery and on enabling change within a creative, rapidly changing environment. It’s demanding but incredibly rewarding: you can directly see the results of your efforts reflected in the teams you’re working with, while working with leading-edge technology and marketing strategies. 

A successful MTDE will: 

Within the first 90 days: Learn the inventory of current outbound data pipelines and develop familiarity with the purpose and audience each serves. Take ownership of the operational management of several outbound pipelines across the portfolio. 

Within 180 days: Take full ownership of several projects, driving them toward completion: implementing a standard approach to outbound data pipelines that deliver reliable data to the end systems.

Essential Functions 

Develop and manage stable, scalable data pipelines that cleanse, structure, and integrate disparate big data sets into a readable and accessible format for end-user analyses and targeting, using stream and batch processing architectures. 
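
As a rough sketch of what one batch step of such a pipeline could look like (all names and sample data below are hypothetical), the following normalizes two disparate user sources into one consistent, de-duplicated shape:

```python
# Minimal batch-cleansing sketch. All record shapes, field names, and sample
# data are invented for illustration only.
from datetime import datetime, timezone

def cleanse(record: dict) -> dict:
    """Normalize a raw record: stringify IDs, lower-case emails, parse timestamps to UTC."""
    return {
        "user_id": str(record["user_id"]).strip(),
        "email": record.get("email", "").strip().lower(),
        "signup_ts": datetime.fromisoformat(record["signup_ts"]).astimezone(timezone.utc),
    }

def integrate(*sources):
    """Merge records from disparate sources, de-duplicating on user_id."""
    merged = {}
    for source in sources:
        for raw in source:
            row = cleanse(raw)
            merged[row["user_id"]] = row  # later sources win on conflict
    return list(merged.values())

# Two hypothetical sources describing overlapping users with inconsistent formats.
crm = [{"user_id": 1, "email": " Alice@Example.com ", "signup_ts": "2023-01-05T10:00:00+00:00"}]
web = [{"user_id": "1", "email": "alice@example.com", "signup_ts": "2023-01-05T10:00:00+00:00"},
       {"user_id": "2", "email": "Bob@example.com", "signup_ts": "2023-02-01T09:30:00+00:00"}]

rows = integrate(crm, web)
```

In production, logic like this would typically run inside a scheduled workflow against far larger datasets, with the same cleanse/integrate separation applied per source.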

Develop and improve the current data architecture, data quality, monitoring and data availability. 

Develop a data quality framework to ensure delivery of high-quality data and analyses to stakeholders. 
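
One minimal illustration of such a framework (with entirely hypothetical rules and data) is a set of named predicates applied row by row, so that failures are collected for reporting rather than silently dropped:

```python
# Hypothetical data-quality checks: each rule is a (name, predicate) pair.
# A predicate returns True for a healthy row; failures are accumulated.
RULES = [
    ("non_null_user_id", lambda row: row.get("user_id") not in (None, "")),
    ("valid_email", lambda row: "@" in row.get("email", "")),
    ("non_negative_spend", lambda row: row.get("spend", 0) >= 0),
]

def run_quality_checks(rows):
    """Apply every rule to every row; return a list of (row_index, rule_name) failures."""
    failures = []
    for i, row in enumerate(rows):
        for name, predicate in RULES:
            if not predicate(row):
                failures.append((i, name))
    return failures

# Invented sample: the second row violates all three rules.
sample = [
    {"user_id": "1", "email": "a@example.com", "spend": 10.0},
    {"user_id": "", "email": "bad-email", "spend": -5.0},
]
issues = run_quality_checks(sample)
```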

Develop and implement robust outbound data pipelines that interact with the marketing technology stack. 

Define and implement monitoring and alerting policies for data solutions. 
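
A simple, hypothetical example of such a policy is a volume check that alerts when a pipeline's daily row count drops sharply below its trailing average (thresholds and numbers here are illustrative):

```python
# Hypothetical monitoring policy: flag a pipeline run whose daily row count
# falls more than 50% below the trailing average of recent runs.
from statistics import mean

def check_row_count(history, today_count, drop_threshold=0.5):
    """Return an alert string if today's volume fell sharply, else None."""
    baseline = mean(history)
    if today_count < baseline * (1 - drop_threshold):
        return f"ALERT: row count {today_count} is below 50% of baseline {baseline:.0f}"
    return None

# A normal day produces no alert; a sharp drop does.
assert check_row_count([1000, 1100, 950], 980) is None
alert = check_row_count([1000, 1100, 950], 300)
```

In practice a check like this would feed whatever alerting channel the team uses (paging, chat, email) rather than returning a string.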

Work with business customers to gather requirements and gain a deep understanding of varied datasets. 

Desired Skills and Experience  

Experience with large data sets and distributed computing (Hive/Hadoop) 

5+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytics tools. 

4+ years of hands-on experience with the AWS technology stack (Redshift, RDS, S3, EMR) or similar solutions such as Treasure Data and Snowflake built around Hive/Spark. 

4+ years of hands-on experience with advanced SQL (analytical functions), including writing and optimizing highly efficient SQL queries.
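
As a small illustration of an analytical (window) function, the query below ranks customers by spend within each region. It runs against SQLite (3.25+, via Python's standard library) purely so the example is self-contained; the table and column names are invented:

```python
# Hedged sketch of an analytical-function query: RANK() OVER a partition.
# Table, columns, and data are hypothetical; SQLite is used only for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, customer TEXT, spend REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("NA", "alice", 120.0),
    ("NA", "bob", 80.0),
    ("EU", "carol", 200.0),
])

# Rank customers by spend, restarting the ranking within each region.
rows = conn.execute(
    "SELECT region, customer, "
    "RANK() OVER (PARTITION BY region ORDER BY spend DESC) AS spend_rank "
    "FROM orders"
).fetchall()

ranks = {(region, customer): r for region, customer, r in rows}
```

The same `RANK() OVER (PARTITION BY ... ORDER BY ...)` shape carries over to warehouse engines such as Presto/Hive SQL.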

Experience working with the marketing technology data stack (SFMC, Google, Facebook). 

Proven track record of delivering big data solutions – batch and real-time. 

Ability to design, develop, and automate scalable ETL and reporting solutions that transform data into accurate and actionable business information. 

Experienced in testing and monitoring data for anomalies and rectifying them. 

Knowledge of software coding practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.  

Technologies you will work with 

Presto SQL 

Hive SQL 

Digdag 

Embulk 

Python 

Advantageous experience 

Hands on experience with Treasure Data 

Developing solutions using Docker 

Experience with CDPs or DMPs 

GDPR / CCPA compliance experience 

Experience with CRM platforms such as Salesforce 

Familiarity with Marketing Technologies 

Experience working in an agile environment 

Previous gaming industry experience 

Tags: Agile AWS Big Data Data pipelines Docker ELT ETL Hadoop Pipelines Python Redshift Snowflake Spark SQL Testing

Region: North America
Country: United States
Category: Engineering Jobs
