Data Engineer, Amazon Flex

Toronto, Ontario, CAN

Full Time

Challenge yourself and help build Amazon Last Mile Technology software that plans, schedules, and delivers on the speedy one-hour delivery promise.

We are building some of the most innovative big data solutions to handle large volumes of data under tight SLAs. This requires high-quality data to be delivered to our customers constantly. It is our team's responsibility to maintain, curate, and extend many critical datasets and queries.

Our Data Engineer needs to be able to gather and understand data requirements, present them to software engineers, and work with the team to achieve high-quality data ingestion goals. You need a passion for complex problems and should enjoy the challenge of operating complex, mission-critical systems under extreme load. Do you think you are up to the challenge? Would you like to learn more and stretch your skills and career?
In this role, you will be a technical expert with significant scope and impact. You will work closely with a group of Software Development Engineers, Product Managers, Data Scientists, and Business Intelligence Engineers to create the data infrastructure and pipelines necessary to drive our team’s initiatives.

Successful candidates should come from a strong data engineering background. You need experience with structured data and the ability to analyze and transform it using a variety of tools. SQL is a strong requirement, and the flexibility to work in a scripting environment is also a must. The pace of innovation and change often means moving to new data sources, and our Data Engineers deep-dive into business data to understand and measure sources of disparity. Your analytical skills and knowledge of schema metadata will be essential.

Basic Qualifications

· Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical discipline.
· 4+ years of industry experience in Software Development, Data Engineering, Business Intelligence, Data Science, or related field with a track record of manipulating, processing, and extracting value from large datasets.
· Hands-on experience and advanced knowledge of SQL.
· Experience in Data Modeling, ETL Development, and Data Warehousing.
· Experience using business intelligence reporting tools (Power BI, Tableau, Cognos, etc.).
· Experience using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.).
· Knowledge of Data Management fundamentals and Data Storage principles.
· Experience coding and automating processes using Python or R.
· Strong customer focus, ownership, urgency, and drive.
· Excellent communication skills and the ability to work well in a team.
· Effective analytical, troubleshooting, and problem-solving skills.

Preferred Qualifications

· Experience working with AWS big data technologies (Redshift, S3, EMR).
· Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe data modeling decisions and data engineering strategy.
· Experience providing technical leadership and educating other engineers on data engineering best practices.
· Background in Big Data, non-relational databases, Machine Learning, and Data Mining is a plus.
· Master's degree in Computer Science, Mathematics, Statistics, Economics, or another quantitative field.

Job tags: AWS Big Data Business Intelligence Data Mining Data Warehousing Economics Engineering ETL Hadoop Machine Learning Power BI Python R Redshift Spark SQL Tableau
Job region(s): North America