Data Engineer (Melbourne & Sydney)

Malvern East, Australia

Applications have closed

Company Description

We are a customer-obsessed “one-stop shop” for all things data-related. We are deliberately narrow in our focus areas, but we go deep in each of them. We provide solutions, services and training in the data strategy, data governance, data security, data management, data classification, data wrangling, data lake, data analytics, data visualisation, artificial intelligence, machine learning and IoT areas. Our solutions and services help transform customer organisations by teaching them how to turn their data into insight.

We build on AWS, using the latest AWS services, best practices and reference architectures.  We deliberately focus on large public sector customers that are data-heavy but insight-light – federal & state government departments, universities, public sector agencies, public health organisations, and public utilities. 

We are an AWS Advanced Consulting Partner and an AWS Public Sector Partner, working towards our Data & Analytics and Government competencies.  We are also a registered technology supplier to the Federal, New South Wales, Victorian and Western Australian governments and we have a Master Services Agreement in place with AWS Professional Services.

Job Description

As an AWS Data Engineer, you will play a key role in working with customers to establish cloud infrastructure, build data pipelines and manage their data effectively. You will have considerable experience working on data-related solutions using AWS and will be comfortable working with technical resources, including data analysts, data engineers and data scientists, to enable customers to generate insight from their data.

You will relish the opportunity to innovate, evolve and test existing assumptions, and design bold solutions. Your experience with large-scale data challenges and your ability to create reliable, scalable cloud-based solutions will see you thriving in this role!

Expectations

  • Enjoy honing your technical skills and developing new ones, so you can make strong contributions to deep architecture discussions.
  • Regularly take part in further education and training to help you develop high-quality and highly performant solutions.
  • Have a demonstrated ability to think strategically about solutions to business, product, and technical challenges.
  • Possess an innate awareness of technology trends and how they impact the way businesses consume IT.
  • Have a reasonable understanding of IT security and governance including security standards, access policies, and data classification schema.
  • Love what you do and instinctively know how to make work fun.
  • Be dynamic, creative, and willing to take on challenges that have the potential to make a big impact.
  • Relish a ground floor opportunity to work in a small, agile, startup environment and play a role in shaping its business and technical goals.
  • Hold a technical degree (examples include Computer Science, Data Science, Information Technology and similar).


Qualifications

  • 1+ years of hands-on technical experience with AWS
  • 1+ years of experience with software development tools and methodologies
  • 1+ years of data engineering-related experience
  • Experience producing high-quality documentation and written communications
  • 1+ years of experience working with DevOps tools such as Bitbucket, Bamboo, Jenkins or Ansible
  • 1+ years of experience in improving CI/CD tool integrations and operations.
  • 1+ AWS certifications, such as Developer Associate, Solutions Architect Associate, SysOps Administrator Associate, Solutions Architect Professional, Data Analytics Specialty and/or Machine Learning Specialty.
  • Strong understanding of Infrastructure as Code, especially AWS CloudFormation.
  • Programming skills in one or more of the following: Python, Java, TypeScript, Scala and Spark.
  • Experience in building real-time or batch ingestion and transformation pipelines (a brief sketch follows this list).
  • Familiarity with data storage formats such as JSON, Avro, Parquet and ORC.
  • An understanding of fundamental data concepts including databases, data schema, data classification, data security, data governance, data analysis, data transformation, data engineering and data science.
  • Experience and understanding of container technology, especially Docker. (Desirable)
  • An understanding of fundamental networking concepts. (Desirable)
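To give a flavour of the batch ingestion and transformation work mentioned above, here is a minimal sketch in PySpark. It assumes a Spark runtime is available (for example on AWS Glue or EMR), and the S3 bucket, path and column names are hypothetical placeholders rather than anything prescribed by this role.

```python
# Minimal sketch of a batch ingestion/transformation pipeline in PySpark.
# Bucket names, paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-ingest-sketch").getOrCreate()

# Ingest raw JSON events from a landing zone.
raw = spark.read.json("s3://example-landing-zone/events/")

# Light transformation: parse timestamps, derive a partition column,
# and drop duplicate events.
curated = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Write columnar Parquet to a curated zone, partitioned by date.
(curated.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-zone/events/"))
```

The same pattern carries over to real-time work with Spark Structured Streaming, and the Parquet output illustrates the columnar storage formats listed above.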

Additional Information

  • We’re passionate about the technical community we are a part of, and believe in contributing to open source and AWS initiatives
  • We care about climate change and are always looking at ways to reduce our carbon footprint
  • We support and contribute to a different cause each quarter
  • We’re committed to developing your career with customised individual growth and development plans
  • We’re committed to ensuring all our staff feel welcome, included and respected every day they work for The Data Foundry


Tags: Agile Ansible Architecture Avro AWS Bitbucket CI/CD Classification CloudFormation Computer Science Consulting Data analysis Data Analytics Data governance Data management Data pipelines Data strategy DevOps Docker Engineering Java JSON Machine Learning Open Source Parquet Pipelines Python Scala Security Spark Teaching TypeScript

Perks/benefits: Career development, startup environment

Region: Asia/Pacific
Country: Australia
Category: Engineering Jobs
