Data Engineer II, Funnel Insights

Seattle, Washington, USA

Full Time

Posted 1 month ago

Amazon’s mission is to be the most customer-centric company in the world. The vision of Workforce Intelligence is to design the ideal workforce to meet the customer promise anywhere. This organization leads and influences global workforce strategies that enable Amazon to scale operations more efficiently while also providing a unique voice for the hourly workforce. This is accomplished through a variety of science-driven initiatives, experimentation, ML-driven modeling, and data engineering. We are on the front lines of Amazon’s mission, providing robust research, data science, and analytics to fill our jobs across the globe.

As a Data Engineer on the new Funnel Insights Team (FIT) in Workforce Intelligence, you will work alongside scientists and stakeholders to design and implement scalable ETL processes on the data warehouse platform, supporting rapidly growing and dynamic business demand for data across various data sources and delivering data as a service with an immediate influence on day-to-day decision making. You will excel in the design, creation, management, and business use of large datasets that will become the core data sets for research modeling and insights. You should be an expert in architecting enterprise data warehouse solutions across multiple platforms, with the ability to develop and tune SQL to provide optimized solutions to the business.

Key Responsibilities Include:
- Building and migrating complex ETL pipelines into Redshift clusters
- Optimizing the performance of business-critical queries and resolving ETL job issues
- Gathering and understanding data requirements, working with the team to achieve high-quality data ingestion, and building systems that can process and transform the data
- Designing, implementing, and supporting a platform that provides ad-hoc access to large datasets
- Modeling data and metadata to support ad-hoc and pre-built reporting
- Earning the trust of your customers by constantly obsessing over their needs and helping them solve their problems by leveraging technology
- Managing business-critical initiatives to enforce standard work and reduce waste

Basic Qualifications

· Degree in Computer Science, Engineering, Mathematics, or a related field or 3+ years industry experience
· Demonstrated strength in data modeling, ETL development, data lake design and implementation, and data warehousing
· Data warehousing experience with Redshift, PostgreSQL, Oracle, etc.
· Query performance tuning skills
· Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc)
· A desire to work in a collaborative, intellectually curious environment

Preferred Qualifications

· Industry experience as a Data Engineer or related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets.
· Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
· Experience building data products incrementally and integrating and managing datasets from multiple sources
· Experience leading large-scale data warehousing and analytics projects, including using AWS technologies (Redshift, S3, EC2, etc.)
· Experience providing technical leadership and mentoring other engineers on data engineering best practices
· Experience with Linux/UNIX
· Experience with AWS

Job tags: AWS Business Intelligence Data Warehousing Distributed Systems Engineering ETL Java Linux ML Oracle Python Redshift Research Ruby SQL
Job region(s): North America