Sr. Data Engineer

New York City, United States

Applications have closed

Take-Two Interactive Software, Inc.

Take-Two Interactive Software is a leading game publisher, creating games through its labels: Rockstar Games, 2K, Private Division, and Social Point.


Who We Are:

Take-Two develops and publishes some of the world's biggest games. Our Rockstar label creates Grand Theft Auto and Red Dead Redemption, two of the most critically acclaimed gaming franchises in history. Our 2K label creates games like NBA 2K, WWE 2K, BioShock, Borderlands, Evolve, XCOM and the beloved Sid Meier's Civilization. Our Private Division label publishes Kerbal Space Program, Ancestors and The Outer Worlds.

While our offices (physical and virtual) are casual and inviting, we are deeply committed to our core tenets of creativity, innovation, and efficiency, and to individual and team development opportunities. Our industry and business are continually evolving and fast-paced, providing numerous opportunities to learn and hone your skills. We work hard, but we also like to have fun, and we believe we provide a great place to come to work each day and pursue your passions.

The Challenge:

Take-Two Interactive is looking for a passionate, solution-oriented Data Engineer to join a team building the next-generation reporting and analytics platform. The ideal candidate is a strong Python developer with experience building APIs and pipelines to support integrations of internal and external applications.

The ideal candidate relishes working with large volumes and diverse types of data, enjoys the challenge of highly complex technical contexts, and, above all else, is convinced of the value of data for better decision-making.

The Data Engineer will support and collaborate with architects, data analysts, and data scientists, and will ensure that an efficient data delivery architecture remains consistent across ongoing projects. They must be proactive and comfortable supporting the data needs of multiple teams: a self-starter who is comfortable with ambiguity, able to think big while paying careful attention to detail, and at home in a fast-paced environment.

What You Will Take On:

  • Provide thought leadership and collaborate with other team members to continue scaling our architecture, meeting the needs of today while remaining flexible enough to evolve for the needs of tomorrow.
  • Participate in all phases of the SDLC: requirements, design, and development through testing, deployment, maintenance, and support.
  • Develop and manage stable, scalable data pipelines that cleanse, structure, and integrate disparate big data sets into a readable, accessible format for end-user analyses and targeting, using stream and batch processing architectures.
  • Maintain API-based ETL/ELT processes, from multi-source raw data collection to reporting and visualization (a minimal sketch follows this list).
  • Collaborate with Data Scientists to implement advanced analytics algorithms that exploit our rich data sets for statistical analysis, prediction, clustering, and machine learning.
  • Develop a data quality framework to ensure delivery of high-quality data and analyses to stakeholders.
  • Develop and support continuous integration build and deployment processes using Jenkins, Docker, Git, etc.
  • Define and implement monitoring and alerting policies for data solutions.
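
As a rough illustration of the API-based ETL work described above, here is a minimal Python sketch. The endpoint URL, response fields, and target table are hypothetical placeholders, and SQLite stands in for a real warehouse; a production pipeline would add retries, schema validation, incremental loads, and orchestration.

    import sqlite3
    import requests

    # Hypothetical third-party endpoint (placeholder, not a real partner API).
    API_URL = "https://api.example-partner.com/v1/daily_sales"

    def extract(day: str) -> list[dict]:
        """Pull one day of raw records from the partner API (assumed to return JSON)."""
        resp = requests.get(API_URL, params={"date": day}, timeout=30)
        resp.raise_for_status()
        return resp.json()["records"]

    def transform(records: list[dict]) -> list[tuple]:
        """Cleanse and structure raw records into rows ready for loading."""
        rows = []
        for r in records:
            if not r.get("title_id"):  # basic data-quality gate: drop records missing a key field
                continue
            rows.append((r["title_id"], r.get("platform", "unknown"), float(r.get("units", 0))))
        return rows

    def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
        """Append structured rows to a reporting table (SQLite stands in for the warehouse)."""
        with sqlite3.connect(db_path) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS daily_sales (title_id TEXT, platform TEXT, units REAL)"
            )
            conn.executemany("INSERT INTO daily_sales VALUES (?, ?, ?)", rows)

    if __name__ == "__main__":
        load(transform(extract("2024-01-01")))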

What You Bring:

  • 4+ years of professional experience in Python.
  • 4+ years of hands-on experience with advanced SQL (analytical/window functions), including writing and optimizing highly efficient queries (see the sketch after this list).
  • Experience integrating with third-party APIs.
  • Experience building ETL pipelines.
  • Experience working in an AWS environment is highly desirable.
  • Comfort working with business customers to gather requirements and gain a deep understanding of varied datasets.
  • Experience testing and monitoring data for anomalies and rectifying them.
  • Knowledge of software coding practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
  • Bachelor’s degree or equivalent in an engineering or technical field such as Computer Science, Information Systems, Statistics, Engineering, or similar.
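
To make the advanced SQL (analytical functions) requirement above concrete, the query below uses a window function to rank platforms by total units within each title. It runs against the stand-in SQLite table from the earlier sketch, so the table and column names are assumptions rather than an actual Take-Two schema.

    import sqlite3

    # Analytical/window-function example: rank each title's platforms by total units sold.
    RANK_QUERY = """
    SELECT
        title_id,
        platform,
        SUM(units) AS total_units,
        RANK() OVER (PARTITION BY title_id ORDER BY SUM(units) DESC) AS platform_rank
    FROM daily_sales
    GROUP BY title_id, platform
    ORDER BY title_id, platform_rank;
    """

    with sqlite3.connect("warehouse.db") as conn:
        for row in conn.execute(RANK_QUERY):
            print(row)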

Preferred Qualifications:

  • Python (required)
  • SQL (required)
  • Git (required)
  • Developing solutions using Docker (required)
  • Experience with API gateways (e.g., Apigee, Kong) is highly desirable
  • AWS Cloud experience is highly desirable
  • Data modeling for data warehousing (nice to have)
  • Developing microservices (nice to have)

What We Offer You:

  • Great Company Culture. Ranked as one of the most creative and innovative places to work, we count creativity, innovation, efficiency, diversity, and philanthropy among the core tenets of our organization and integral drivers of our continued success.
  • Growth. As a global entertainment company, we pride ourselves on creating environments where employees are encouraged to be themselves, to be inquisitive and collaborative, and to grow within and around the company.
  • Work Hard, Play Hard. Our employees bond, blow off steam, and flex some creative muscles through corporate boot camp classes, company parties, game release events, monthly socials, and team challenges.
  • Benefits. Medical (HSA & FSA), dental, vision, 401(k) with company match, employee stock purchase plan, commuter benefits, in-house wellness program, broad learning & development opportunities, a charitable giving platform with company match and more!
  • Perks. Fitness allowance, employee discount programs, free games & events, stocked pantries and the ability to earn up to $500+ per year for taking care of yourself and more!

Take-Two Interactive Software, Inc. (“T2”) is proud to be an equal opportunity employer, which means we are committed to creating and celebrating diverse thoughts, cultures, and backgrounds throughout our organization.  Employment at T2 is based on substantive ability, objective qualifications, and work ethic – not an individual’s race, creed, color, religion, sex or gender, gender identity or expression, sexual orientation, national origin or ancestry, alienage or citizenship status, physical or mental disability, pregnancy, age, genetic information, veteran status, marital status, status as a victim of domestic violence or sex offenses, reproductive health decision, or any other characteristics protected by applicable law.

Tags: Agile APIs AWS Big Data Computer Science Data pipelines Data Warehousing Docker ELT Engineering ETL Git Machine Learning Microservices Pipelines Python SDLC SQL Statistics Testing

Perks/benefits: 401(k) matching Career development Fitness / gym Flex hours Health care Startup environment Team events Wellness

Region: North America
Country: United States
Category: Engineering Jobs
