Senior Data Engineer

Little Rock, Arkansas, United States

Applications have closed

ABOUT LIVERAMP

LiveRamp is the trusted platform that makes data accessible and meaningful. Our services power people-based customer experiences that improve the relevance of marketing and allow consumers to better connect with the brands and products they love. We thrive on solving the toughest technical and customer challenges, and we're always looking for smart, compassionate people to help us blaze a trail. 

Mission: LiveRamp makes it safe and easy for businesses to use data effectively.

ABOUT THIS JOB

LiveRamp is looking for a senior data engineer to help build our LiveRamp Safe Haven product, one of the fastest-growing products in the company.

Five years from now, we will be known for creating the category of “Safe Havens,” which enables controlled access to the unique data assets necessary to understand the end-to-end customer journey, close the loop on measurement, and create superior customer experiences.

The LiveRamp Safe Haven engineering team is here to make that grand vision a reality. The team owns the end-to-end implementation of the systems and solutions that drive the growth of the LiveRamp Safe Haven product. You will have a unique experience working in a startup-like environment within an established company.

YOU WILL:

  • Design, implement, and maintain big data solutions and infrastructure using state-of-the-art technology.
  • Establish best practices and standards for managing large collections of data.
  • Work closely with data analysts and data scientists, enabling them to provide business insights into our customers’ rich data sets.
  • Improve data reliability, efficiency, and quality.

YOUR TEAM WILL:

  • Design and build a global data solution that delights our key customers. 
  • Extend our platform capabilities to address in-country requirements in the global market. 
  • Work closely with various internal teams (engineering, product, SRE, and data science) on product delivery. 

ABOUT YOU:

  • 5+ years of experience in software development or data engineering. 
  • Experience building and optimizing data pipelines, architectures, and data sets (Spark, Presto, Airflow, Apache Beam, etc.); see the sketch after this list for the kind of pipeline work involved.
  • Understanding of Google Cloud Platform (GCP) technologies in the big data and data warehousing space (BigQuery, Cloud Data Fusion, Dataproc, Dataflow, Data Catalog).
  • Proficient in Java.
  • Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment.
  • Type S(tartup) personality: smart, ethical, friendly, hard-working and proactive.
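
To make the pipeline requirement above concrete, below is a minimal Apache Beam sketch in Java of the kind of batch job this role describes: reading delimited event records, filtering malformed rows, and counting events per type. The bucket paths, file layout, and three-column schema (user_id,event_type,timestamp) are hypothetical, chosen only for illustration; a real Safe Haven pipeline would run against LiveRamp's own datasets.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Count;
    import org.apache.beam.sdk.transforms.Filter;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class EventCountPipeline {
      public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);

        p.apply("ReadEvents", TextIO.read().from("gs://example-bucket/events/*.csv"))
         // Keep only well-formed rows (hypothetical schema: user_id,event_type,timestamp).
         .apply("FilterMalformed", Filter.by((String line) -> line.split(",", -1).length == 3))
         // Use the event type as the grouping key.
         .apply("ExtractEventType", MapElements
             .into(TypeDescriptors.strings())
             .via((String line) -> line.split(",", -1)[1]))
         // Count occurrences of each event type across the dataset.
         .apply("CountPerType", Count.perElement())
         // Render each (type, count) pair back to a CSV line.
         .apply("FormatResults", MapElements
             .into(TypeDescriptors.strings())
             .via((KV<String, Long> kv) -> kv.getKey() + "," + kv.getValue()))
         .apply("WriteCounts", TextIO.write().to("gs://example-bucket/output/event_counts"));

        p.run().waitUntilFinish();
      }
    }

Run as-is, this uses Beam's local DirectRunner; passing --runner=DataflowRunner (with project and region flags) moves the same code to GCP Dataflow unchanged, which mirrors the Beam-on-GCP stack named in the requirements.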

BONUS POINTS:

  • Experience with designing and implementing interfaces and infrastructure for large volume services and APIs.
  • Exposure to Machine Learning.

Tags: Airflow APIs Big Data BigQuery Dataflow Data pipelines Dataproc Data Warehousing Engineering GCP Google Cloud Machine Learning Pipelines Spark

Perks/benefits: Career development Startup environment

Region: North America
Country: United States
Category: Engineering Jobs
