Data Engineer

US, CA, Virtual Location - California

Applications have closed

Posted 7 months ago

The Finance Technology team at Amazon is looking for a Data Engineer to play a key role in building the next-generation financial data warehouse. The ideal candidate will be passionate about building large, scalable, and fast distributed systems on the AWS stack. Our new team member will want to be part of a team whose goal is to democratize access to data and enable data-driven innovation for Finance users at Amazon. We are building a brand-new Big Data solution that will enable rapid, self-service data reconciliations across Amazon.

Amazon is seeking an engineer with a strong background in Big Data technologies (Redshift, EMR, EDX, S3, etc.), an interest in data mining, and the ability to sift emerging patterns and trends from large amounts of data. Data Engineers should have strong experience with standard data warehousing components (e.g., ETL, reporting, and data modeling). The ideal candidate will have extensive experience in dimensional modeling, excellent problem-solving ability with huge volumes of data, and a short learning curve. Excellent written and verbal communication skills are required, as the candidate will work closely with Finance customers and our leadership.

· Design, implement, and support a platform providing secured access to large datasets.
· Interface with tax, finance and accounting customers, gathering requirements and delivering complete BI solutions.
· Model data and metadata to support ad-hoc and pre-built reporting.
· Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
· Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
· Tune application and query performance using profiling tools and SQL.
· Analyze and solve problems at their root, stepping back to understand the broader context.
· Learn and understand a broad range of Amazon’s data resources and know when, how, and which to use — and which not to.
· Keep up to date with advances in big data technologies and run pilots to design the data architecture to scale with the increased data volume using AWS.
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for datasets.
· Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.
Along with Amazon-scale problems to solve, we offer you the chance to work with some of the industry's most talented data engineering minds. Click Apply for an opportunity to create history while having fun.

Basic Qualifications

· 1+ years of experience as a Data Engineer or in a similar role
· Experience with data modeling, data warehousing, and building ETL pipelines
· Experience in SQL
· Bachelor's in Computer Science, Engineering, Statistics, Mathematics, or a related field
· Intermediate-to-expert experience in the data engineering or Business Intelligence space
· Strong understanding of ETL concepts and experience building ETL pipelines with large-scale, complex datasets using traditional or MapReduce batch mechanisms
· Strong data modeling skills with solid knowledge of industry standards such as dimensional modeling, star schemas, etc.
· Highly proficient in writing SQL against large data volumes
· Experience designing and operating very large Data Warehouses
· Experience with scripting (e.g., Python, UNIX Shell scripting, Perl, or Ruby).
· Experience with, or willingness to learn, the AWS stack
· Clear thinker with superb problem-solving skills to prioritize and stay focused on big needle movers
· Curious, self-motivated self-starter with a ‘can-do’ attitude; comfortable working in a fast-paced, dynamic environment

Preferred Qualifications

· Readiness to learn and train on new tools on the AWS stack
· Working knowledge of PL/SQL with large data sets
· Experience working with Oracle Hyperion or Oracle Data Integrator (ODI)
· Experience with AWS technologies including Redshift, RDS, S3, EMR, DynamoDB, Hive, Spark, etc.
· Excellent dimensional modeling skills
· Knowledge of a programming or scripting language (R, Python, Ruby, or JavaScript)
· Excellent knowledge of advanced SQL with large data sets
· Experience with reporting tools such as Tableau, OBIEE, or other BI packages

Job tags: AWS Big Data Business Intelligence Data Mining Data Warehousing Distributed Systems Engineering ETL Finance JavaScript Map Reduce Oracle Perl Python R Redshift Ruby Spark SQL Tableau
Job region(s): North America Remote/Anywhere