SRE-Big Data

Bengaluru, IN

ANZ Banking Group Limited

ANZ offers a range of personal banking services such as internet banking, bank accounts, credit cards, home loans, personal loans, travel and international banking, investments and insurance. Learn about easy and secure ways to manage your money.


About the role

As an SRE, you will work as part of a team whose mission is to ensure the health of the Enterprise Big Data (EBD) platform, making the right data accessible to our people and customers through tools and channels that drive faster, more informed decision making, remove friction, and ultimately deliver a world-class customer experience.

What will your day look like?

  • Design, develop and maintain a secure, robust and scalable Data Ecosystem on the Enterprise Big Data (EBD) Platform.

  • Develop and enhance batch ingestion frameworks and streaming solutions needed to build a Data Lake platform on Hadoop.

  • Design and develop complex orchestration logic, including automated alert management, as part of building robust data pipelines with an utmost focus on resiliency and reconciliation.

  • Design and develop industry-standard observability using tools such as Splunk, Grafana and Prometheus.

  • Demonstrate a proven record of automation, specifically for data consistency, data quality and reconciliation, including testing.

  • Develop robust CI/CD pipelines with SecOps integration using tools such as Bamboo or GitHub Actions.

  • Solve ambiguous and complex engineering problems.

  • Work collaboratively within and across teams, Tech Areas and Domains.

  • Utilise tools and practices to build, verify and deploy solutions in the most efficient ways; we place a high emphasis on software fundamentals.

What will you bring?

  • 4+ years of relevant Big Data engineering experience

  • Programming experience in one or more of Java, Scala or Python

  • Expertise in at least one commercial distribution of Hadoop

  • Experience with most of the Big Data stack, including Spark, Hive, YARN, Kafka, HBase, Oozie, Control-M, etc.

  • Good experience automating manual activities, data quality checks and resiliency measures.

  • Good experience in log analysis and management on a distributed Hadoop platform, along with dashboarding and automated monitoring of application health.

  • Strong Linux/Unix skills

  • Ideally, you have experience working as an SRE/DevOps engineer.

  • Familiarity with Docker

  • Experience with secure Hadoop clusters using Kerberos

  • Experience with CI/CD tools (Jenkins, GitHub Actions or Bamboo).

  • Experience with Splunk or another enterprise observability tool, including dashboarding

So, why join us?

ANZ is a place where big things happen as we work together to provide banking and financial services across more than 30 markets. With more than 7,500 people, our Bengaluru team is the bank’s largest technology, data & operations centre outside Australia. In operation for over 33 years, the centre is critical in delivering the bank’s strategy and making an impact for our millions of customers around the world. Our Bengaluru team not only drives the transformation initiatives of the bank, it also drives a culture that makes ANZ a great place to be. We’re proud that people feel they can be themselves at ANZ and 90% of our people feel they belong.

We know our people need different things to be great in their role, so we offer a range of flexible working options, including hybrid work (where the role allows it). Our people also enjoy a range of benefits including access to health and wellbeing services.

We want to continue building a diverse workplace and welcome applications from everyone. Please talk to us about any adjustments you may require to our recruitment process or the role itself. If you are a candidate with a disability or access requirements, let us know how we can provide you with additional support.

To find out more about working at ANZ, visit https://www.anz.com/careers/. You can apply for this role by visiting ANZ Careers and searching for reference number 63816.

Job Posting End Date

23/04/2024, 11:59pm (Melbourne, Australia)


