Senior Data Engineer

Seattle, Washington, USA

Full Time · Senior-level / Expert · USD 76K - 150K *




Job summary
AWS is hiring a Senior Data Engineer to play a key role in building its industry-leading Data Engineering and Analytics Platform. Are you passionate about Big Data and highly scalable data platforms? Do you enjoy building end-to-end analytics solutions that help drive business decisions? If you have experience building and maintaining highly scalable, highly available, and fault-tolerant data warehouses and data pipelines with huge transaction volumes, we need you!

The AWS Data Transfer Data Platform provides a foundational, centralized data platform for AWS Finance (supporting AWS CloudFront, AWS Direct Connect, and AWS Elemental) to identify financial insights for a better understanding of our customers and costs. Our teams take on some of the hardest scalability, performance, and distributed computing challenges in the world. We process billions of line items and manage artifacts using the latest database technologies. We process big data and provide tools for customers to interactively understand the copious amounts of data we store.

We are truly leading the way to disrupt the big data industry by implementing distributed and Big Data technologies like Amazon Elastic MapReduce (EMR), in addition to data warehouse technologies like Redshift and Athena, to build a data platform capable of scaling with the ever-increasing volume of data produced by AWS services. You will have the opportunity to shape and build AWS Data Transfer's Data Lake platform and its supporting systems for years to come.

You should have deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and interpersonal skills, enabling you to work with business owners to understand data requirements and to build ETL processes that ingest data into the data lake. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data lake. Above all, you should be passionate about working with huge datasets and love bringing datasets together to answer business questions and drive growth.

Key job responsibilities
  • Design, implement, and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
  • Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.
  • Create and support real-time data pipelines built on AWS technologies, including EMR, Glue, Kinesis, Redshift/Spectrum, and Athena.
  • Continually research the latest big data, search, and visualization technologies to provide new capabilities and increase efficiency.
  • Work closely with team members to drive real-time model implementations for monitoring and alerting of risk systems.
  • Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering, and machine learning.
  • Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Basic Qualifications

  • Bachelor's Degree in Computer Science or a related technical field
  • 5+ years of experience as a Data Engineer or in a similar role
  • 5+ years of work experience with ETL, data modeling, and data architecture
  • Expert-level skills in writing and optimizing SQL
  • Experience with Big Data technologies such as Hadoop/Spark
  • Proficiency in a scripting language such as Python or Ruby
  • Experience operating very large data warehouses or data lakes

Preferred Qualifications

  • Master's Degree in Computer Science or a related field
  • Expertise in ETL optimization: designing, coding, and tuning big data processes using Apache Spark or similar technologies
  • Experience building data pipelines and applications to stream and process datasets at low latencies
  • Demonstrated rigor in handling data: tracking data lineage, ensuring data quality, and improving the discoverability of data
  • Sound knowledge of distributed systems and data architecture (e.g., Lambda architecture): designing and implementing batch and stream data processing pipelines, and optimizing the distribution, partitioning, and MPP of high-level data structures
  • Knowledge of engineering and operational excellence using standard methodologies

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit

* Salary range is an estimate based on our salary survey at
Job perks/benefits: Career development
Job region: North America
Job country: United States
