Data Engineer - AWS Data Platform

Phoenix, Arizona, USA

Applications have closed

Amazon.com


Job summary

Amazon Web Services is seeking an extraordinary Data Engineer to join the AWS Data Lake team.


Our teams take on some of the hardest scalability, performance, and distributed computing challenges in the world. We process trillions of events per month using stream processing techniques (Kinesis), process billions of line items via MapReduce (EMR), and handle artifacts through the latest in database technologies (DynamoDB and Aurora). We process big data and provide tools for customers to interactively understand their bills. We also provide the analytics that let customers manage billions of dollars of IT usage and spending. Because we sit at the nexus of all AWS services and interact directly with end customers, we also work closely with all AWS teams to ensure that we offer a great customer experience.


The AWS Data Platform team's vision is to help customers handle the full life cycle of data at all levels of granularity, simplify the collection, integration, and aggregation of AWS data assets, and provide services (compute, storage, security) to access datasets at scale. We collect and process billions of usage and billing transactions every day, turning them into actionable information in the Data Lake and making it available to our internal service owners to analyze their business and serve our external customers.

We are leading the way in disrupting the data warehouse industry. We are accomplishing this vision by bringing to bear Big Data technologies like Elastic MapReduce (EMR) alongside data warehouse technologies like Redshift to build a data platform capable of scaling with the ever-increasing volume of data produced by AWS services. You will have the opportunity to shape and build AWS's data lake platform and supporting systems for years to come.

You should have expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and interpersonal skills, enabling you to work with business owners to understand data requirements and to build the ETL that ingests data into the data lake. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions that flow data from production systems into the data lake. Above all, you should be passionate about working with huge datasets and love bringing data together to answer business questions and drive growth.

Basic Qualifications


This position requires a Bachelor's degree in Computer Science or a related technical field and 2+ years of relevant employment experience.
  • 2+ years of work experience with ETL, Data Modeling, and Data Architecture.
  • Good knowledge of writing and optimizing SQL.
  • Experience with Big Data technologies such as Hive/Spark.
  • Proficiency in one of the following scripting languages: Python, Ruby, Java, or similar.
  • Experience operating very large data warehouses or data lakes.
  • Proven interpersonal skills and a reputation as a standout colleague.
  • A real passion for technology. We are looking for someone who is keen to demonstrate their existing skills while trying new approaches.

Preferred Qualifications

  • Expert in ETL optimization and in designing, coding, and tuning big data processes using Apache Spark or similar technologies.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability.
  • Sound knowledge of distributed systems and data architecture (e.g., the Lambda architecture): able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP processing of high-level data structures.
  • Knowledge of Engineering and Operational Excellence using standard methodologies.



Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.


Tags: AWS Big Data Computer Science Data pipelines Distributed Systems DynamoDB Engineering ETL Kinesis Lambda Map Reduce MPP Pipelines Python Redshift Ruby Security Spark SQL

Perks/benefits: Team events

Region: North America
Country: United States
Category: Engineering Jobs
