Data engineer (Level 4)

Austin, Texas, USA

Full Time

Posted 3 weeks ago

Work hard, have fun and make history by joining Amazon’s Global Corporate Affairs (GCA) technology team! We are seeking a Data Engineer who will be responsible for developing data architecture for data centralization and BI platforms, and for building data pipelines and reporting architecture to scale and support GCA-specific data lakes. The Data Engineer will play a vital role in developing infrastructure for data integration and ingestion projects, building data applications that integrate with internal and external systems, and performing proofs of concept (POCs) to evaluate and upgrade the tech stack to the latest AWS data services. You will work with existing team members to help define GCA’s data program.

Are you curious, customer-obsessed, flexible, smart and analytical, hungry, and passionate about building infrastructure and software to handle various types of datasets? If yes, this opportunity will appeal to you. In this role you should have a penchant for digging deep, be comfortable solving ambiguous problems, have a passion for data, be a quick learner who can adapt and grow under the guidance of senior team members, and have excellent oral and written communication skills as well as strong analytical and problem-solving skills.

Basic Qualifications

· 1+ years of experience as a Data Engineer or in a similar role
· Experience with data modeling, data warehousing, and building ETL pipelines
· Experience in SQL
· Degree in Computer Science, Engineering, Mathematics, or a related field, plus 2+ years of industry experience
· Demonstrated strength in data modeling, ETL development, SQL, and data warehousing
· A desire to work in a collaborative, intellectually curious environment
· Data warehousing experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) and/or relational databases such as Oracle, Redshift, and Microsoft SQL Server
· Experience with AWS data and analytics tech stacks
· Experience building data products incrementally, and integrating and managing datasets from multiple sources via APIs
· Coding proficiency in at least one modern programming language (Python, Scala, Java, etc.)

Preferred Qualifications

· Industry experience as a Data Engineer or in a related specialty (e.g., Software Engineer, ETL Developer) with a track record of manipulating, processing, and extracting value from large datasets
· Track record of applying data engineering to enable business analytics (Operations, Finance, HR, etc.)
· Experience with systems engineering to optimize the integration of data service solutions with data infrastructure
· Strong business acumen
· Experience working on large-scale data warehousing and analytics projects, including AWS technologies such as Redshift, S3, EMR, Glue, and other big data technologies
· Experience with AWS

Job tags: AWS Big Data Data pipelines Data Warehousing Engineering ETL Finance Hadoop Java Oracle Python Redshift Scala Spark SQL
Job region(s): North America
