Data & Cloud | Data Engineer | Senior Consultant/Manager (Open to all locations in AUS)

Sydney, Australia

Applications have closed

KPMG Australia

KPMG is a global network of professional firms providing Audit, Tax and Advisory services.

Company Description

KPMG Data & Cloud is a team with a passion for helping our clients solve their business challenges. We achieve this through business solutions that unlock insights and value from data. Leveraging the “One KPMG” approach, our team works collaboratively with the wider KPMG service lines to provide our clients with industry-specific, tailored proposals that result in highly differentiated business transformation outcomes.

We re-imagine and re-invent organisations to become world-class enterprises using advanced technologies, data and human insights. We help organisations to embrace Data Strategy and Governance, design and implement Data Management programmes, modernise Data and Cloud Platforms, and apply AI and Intelligent Automation to solve challenges.

Job Description

The Data Engineer is the designer, builder and manager of information and data management pipelines, preparing data for analytical or operational use. You have an aptitude for translating business problems into data and infrastructure requirements and solutions. You will design, construct, test and maintain data pipelines that pull together information from different source systems and integrate, consolidate, cleanse and monitor that data. You will actively ensure the stability and scalability of our clients’ systems and data platforms. You will strive to bring the best of DevOps practices to the world of data by embracing the emerging practice of DataOps. You will work proactively to:

  • Drive a technical roadmap for the team, covering non-functional requirements such as scalability, reliability and observability
  • Assess new and existing data sources for their applicability to the business issue at hand, and translate the outcomes of the analytical solutions we design into business impacts and benefits.
  • Design, construct, install, test and maintain highly scalable, resilient, recoverable data management systems by bringing a software engineering mindset and applying DataOps principles
  • Automate the build and deployment of data pipelines using DevOps and CI/CD patterns, including defining which tests are required, and can be automated, at each stage of a data pipeline’s lifecycle.
  • Understand, explain and evangelise buzzwords such as serverless, cloud native and PaaS, and how they impact the design of data pipelines
  • Be comfortable with both code-based and tool-based data pipelines, and understand the pros and cons of each.
  • Work closely with Data Scientists to extract and manipulate data from a variety of sources, then cleanse, standardise, scale, bin, categorise, tokenise, stem and transform it into a state suitable for further analysis.
  • Design, develop and implement the automated approach for productionising model scoring, along with the closed-loop feedback paths required to support test and learn.
  • Select and configure analytics toolsets considering the clients’ business issue and analytic maturity.

In addition to your focus on client engagements, you will contribute to the definition and enhancement of data engineering and DataOps disciplines within the practice.  

How are you extraordinary?

  • A proven ability to undertake the responsibilities and requirements of the role, as listed above.
  • Excellent interpersonal, oral and written communication skills, with a knack for distilling complex or technical problems in often-ambiguous environments.
  • An understanding of how to tailor the solutions we build to their eventual owners (our clients)
  • A proven ability to develop and manage enduring client relationships, engendering a sense of trust and respect.
  • Demonstrable industry knowledge: understanding the way your primary industry functions and how its data can be collected, analysed and utilised, while maintaining flexibility in the face of cloud and data industry developments.
  • A disciplined approach to problem solving and an ability to critically assess a range of information to differentiate true business needs as opposed to user requests.
  • Experience with a range of technical skills that could include:
      • Knowledge of architecting and engineering cloud-based data solutions with the following products:
          • Snowflake
          • Cloud platforms, with a focus on PaaS or serverless:
              • AWS: Redshift/RDS, S3, EC2, Lambda, Step Functions, EMR, Glue, DynamoDB, Athena, Kinesis, CloudWatch, SQS, SNS, Fargate
              • OR Azure: Blob Storage, Synapse, Data Factory, Functions, Cosmos DB, Databricks, Log Analytics, Event Hubs, Azure Kubernetes Service
      • Big Data technologies such as Hadoop, Spark Streaming, Flink, Hudi, Storm, NiFi, HBase, Hive, Zeppelin, Kafka, Ranger, Ambari.
      • Programming languages such as Java, Node, C#, Go, Python, Scala, SAS, R.
      • ETL tool experience
  • Experience with DevOps principles and tools, including:
      • Agile enterprise development environments, CI/CD implementation, continuous testing, cloud resource management (CloudFormation, Terraform, ARM templates, Pulumi, etc.) and automation of environment deployment.
      • Continuous Integration/Delivery tools such as Jenkins, AWS CodePipeline, Azure DevOps or other similar industry tools
      • Version control and development processes for data, low-level hardware and software configurations, and the code and configuration specific to each tool in the chain.
  • A proven ability to:
      • Build resilient, tested data pipelines with statistical data quality monitoring embedded (DataOps)
      • Work with an existing lifecycle management framework to collect metadata, follow coding standards, use version control, complete documentation, and write and execute unit tests.
      • Communicate discovered information appropriately to its consumers, making clear use of visual variables such as shape, colour, hue and orientation.
  • Experience with SQL-based technologies (e.g. PostgreSQL and MySQL) and NoSQL technologies (e.g. Cassandra and MongoDB)
  • Experience with data lake and data warehousing solutions, architectures and low-level design principles
  • Data modelling tools (e.g. ERwin, Enterprise Architect and Visio)
  • High-level understanding of statistical analysis and modelling, predictive analytics, text analytics and other machine learning applications and how to extract insight from data.
  • A sound understanding of digital and cognitive technologies and analytics, information management and business process-based solutions.
  • An ability to work within a multidisciplinary team, seeking requirements from, and providing them to, team members responsible for different areas of the pipeline.

Additional Information

Please Note: At KPMG, we are enjoying an end-of-year break and will be returning to the office on 9th January, 2023. Applications will be viewed then and successful candidates contacted accordingly. We appreciate your patience and understanding!

KPMG is one of the most trusted and respected global professional services firms. We partner with clients across an array of industries to solve complex challenges, steer change, drive disruption, and enable growth. 

Our people are what make KPMG the thriving workplace that it is and what sets us apart is that we know great minds think differently. Collaborate with a team of passionate, highly skilled professionals who’ve got your back. You’ll build relationships with unique and diverse colleagues who will provide you with the support you need to be your best and produce meaningful and impactful work in an inclusive, equitable culture.

At KPMG, you’ll take control over how you work. We’re embracing a new way of working in many ways, from offering flexible hours and locations to generous paid parental leave and career breaks. Our people enjoy a variety of exciting perks, including retail discounts, health and wellbeing initiatives, learning and growth opportunities, salary packaging options and more.

Diverse candidates have diverse needs. During your recruitment journey, information will be provided about adjustment requests. If you require additional support before submitting your application, please contact Talent Support Team.

At KPMG every career is different, and we look forward to seeing how you grow with us.

Tags: Agile Architecture Athena AWS Azure Big Data Cassandra CI/CD Databricks Data management DataOps Data pipelines Data quality Data strategy Data Warehousing DevOps DynamoDB EC2 Engineering ETL Flink Hadoop HBase Kafka Kinesis Kubernetes Lambda Machine Learning MongoDB MySQL NoSQL Pipelines PostgreSQL Python R Redshift SAS Scala Snowflake Spark SQL Statistics STEM Streaming Terraform Testing

Perks/benefits: Career development Flex hours Health care Parental leave Startup environment

Region: Asia/Pacific
Country: Australia
