Big Data Engineer

Seattle, Washington, USA

Applications have closed

Amazon.com


The AWS World Wide Revenue Operations (WWRO) Revenue Technology team is responsible for publishing Daily Estimated Revenue (DER) and monthly Amortized Sales Revenue (ASR). DER and ASR are used by downstream customers in AWS Finance, Accounting, Sales Operations, Segmentation & Planning (S&P), Data & Analytics, and Compensation to set quotas, manage sales goals, forecast, and derive business insights. We collect and process billions of usage and billing transactions every single day and relate them to the largest data feed supported by Salesforce.com. We transform this raw data into actionable information in the data lake and make it available to our internal service owners so they can analyze their business and serve our external customers.

We are leading the way in disrupting the big data industry. We are accomplishing this vision by bringing to bear big data technologies such as Elastic MapReduce (EMR) alongside data warehouse technologies such as Redshift Spectrum to build a data platform capable of scaling with the ever-increasing volume of data produced by AWS services.

You should have deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and interpersonal skills, enabling you to work with business owners to understand data requirements and to build the ETL that ingests data into the data lake. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions that flow data from production systems into the data lake. Above all, you should be passionate about working with huge datasets and love bringing datasets together to answer business questions and drive growth.

Location: This role is open in Seattle and Dallas. Relocation is offered from within the US to either of these locations.


Basic Qualifications


· This position requires a Bachelor's Degree in Computer Science or a related technical field, and 5+ years of related employment experience.
· 5+ years of work experience with ETL, Data Modeling, and Data Architecture.
· Expert-level skills in writing and optimizing SQL.
· Experience with Big Data technologies such as Hive/Spark.
· Proficiency in at least one scripting language, such as Python, Ruby, Linux shell scripting, or similar.
· Experience operating very large data warehouses or data lakes.

Preferred Qualifications

· Master's Degree in Computer Science or related field.
· Proficiency in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies.
· Experience with building data pipelines and applications to stream and process datasets at low latencies.
· Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving the discoverability of data.
· Sound knowledge of distributed systems and data architectures (e.g., lambda architecture): able to design and implement batch and stream data processing pipelines and to optimize the distribution, partitioning, and MPP of high-level data structures.
· Knowledge of Engineering and Operational Excellence using standard methodologies.
· Meets/exceeds Amazon’s leadership principles requirements for this role.
· Meets/exceeds Amazon’s functional/technical depth and complexity for this role.
· Experience with AWS services, including S3, Redshift, EMR, and RDS.


Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

Tags: AWS Big Data Computer Science Data pipelines Distributed Systems Engineering ETL Finance Lambda Linux Map Reduce MPP Pipelines Python Redshift Ruby Spark SQL

Region: North America
Country: United States