Data Engineer, Finance Technology

US, CA, Virtual Location - California

Full Time

Posted 1 month ago

The Finance Technology team is looking for a talented and passionate Data Engineer with a strong technical and business background. Our team operates platforms that are among the largest in the world by volume and complexity.

A successful candidate will define how we publish financial data sets from these systems to hundreds of teams that enable varied critical business functions for Amazon. In this role, you should excel in the design, creation, management, and business use of extremely large datasets. You will be responsible for designing and implementing scalable processes to publish data and for building solutions that reconcile data to ensure the integrity and accuracy of financial data sets. You should have a broad understanding of RDBMS, industry-standard data replication solutions, NoSQL technologies, ETL, Big Data, Hadoop, Data Security, Data Integration, Data Warehousing, Data Governance, and Data Lakes.

Basic Qualifications

· 3+ years of experience as a Data Engineer or in a similar role
· Experience with data modeling, data warehousing, and building ETL pipelines
· Experience with SQL
· Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline
· 4+ years of industry experience in data engineering, business intelligence, data science, or related field with a track record of manipulating, processing, and extracting value from large datasets
· Experience using big data technologies (Hadoop, Hive, HBase, Spark etc.)
· Demonstrated strength in data modeling, ETL development, and data warehousing
· Knowledge of data management fundamentals and data storage principles
· Knowledge of distributed systems as they pertain to data storage and computing
· Experience with automation using shell scripting, Python, or other similar languages
· Excellent knowledge of advanced SQL for working with large data sets

Preferred Qualifications

· Experience with AWS technologies including Redshift, RDS, S3, EMR, or similar solutions built around Hive/Spark, etc.
· Experience working with data replication technologies
· Proficiency in at least one programming language (e.g., Python, Ruby, shell scripting, Java)
· Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations

Job tags: AWS Big Data Business Intelligence Data Warehousing Distributed Systems Engineering ETL Finance Hadoop Java NoSQL Python Redshift Ruby Security Spark SQL