Lead Data Engineer (Azure/AWS)

Telstra ICC Bengaluru

Telstra

Join Australia's largest mobile network, view our plans for NBN broadband internet, mobile phones, 5G & on demand streaming services.

Employment Type

Permanent

Closing Date

29 June 2024 11:59pm

Job Title

Lead Data Engineer (Azure/AWS)

Job Summary

As Data Solution Engineering - Senior Specialist, you lead aspects of the development and delivery of best-practice solutions that empower teams across Telstra to leverage our digital assets. As a trusted partner and advisor, you collaborate with Data Sourcing Engineers and Data Scientists to source and translate quality data into operational solutions. You understand the business problems, ensure the architecture aligns with business requirements, and ensure systems meet those requirements in the delivery and integration of complex data platforms.

Job Description

About Telstra

We're Australia's leading telecommunications and technology company, with a presence in more than 22 countries. Our purpose is to build a connected future so everyone can thrive. We're all about providing the best experience and delivering the best tech on the best network. This includes making Telstra the place you want to work.

We offer a full range of services, compete in all telecommunications markets throughout Australia, and are the most well-known brand in the technology and communications industry.

We have operations in more than 20 countries, including in India. In India we are a licensed Telecom Service provider (TSP) and have extended our global networks into India with offices in Bangalore, Mumbai and Delhi. We’ve opened an Innovation and Capability Centre (ICC) in Bangalore and have a presence in Pune and Hyderabad. In India, we’ve set out to build a platform for innovative delivery and engagement that will strengthen our position as an industry leader. We’re combining innovation, automation and technology to solve the world’s biggest technological challenges in areas such as Internet of Things (IoT), 5G, Artificial Intelligence (AI), Machine Learning, and more.

Here’s what you can expect from us

  • Hybrid way of work, which will allow us to enjoy the benefits of both remote and in-office collaboration. This means that we will have more flexibility, autonomy, and diversity in our work environment, while also maintaining the connection, culture, and creativity that we value as a team. We believe that this is the best way to support our employees' well-being, productivity, and innovation in the post-pandemic world.
  • Flexible working. Choose when and how you work so you can be at your best.
  • Maternity Leave. Up to 26 weeks provided to the birth mother, with benefits for all childbirths.
  • (Women in Tech) Initiative to promote women in tech. We believe that diversity and inclusion are essential for innovation and growth, and we want to support and empower more women to pursue careers in STEM fields.
  • Pay for performance. We recognize outstanding contributions through our competitive incentive programs.
  • Insurance benefits. Receive generous insurance benefits such as medical, accidental and life insurance.
  • Unlimited learning. Level up your skills with access to 17,000 learning programs. Learn ‘on the job’ and achieve university credits towards degrees and master’s programs.
  • Global presence. With a global presence across 22 countries, there are many opportunities to work where we do business.
Function overview

Make a difference as part of Product and Technology. Your mission will be simple: capture market value at scale by building products our customers love that are simple to experience, seamless to deliver, and profitable to the core.

What you'll do

Being part of Data Engineering means you'll be part of a team that focuses on extending our network superiority to enable the continued execution of our digital strategy. With us, you'll be working with world-leading technology and change the way we do IT to ensure business needs drive priorities, accelerating our digitisation programme. 

We are seeking a highly skilled Data Engineer with expertise in Spark, Python, and Scala. The successful candidate will be responsible for designing, developing, and maintaining data pipelines using Spark, Python, Scala, and related technologies, and for ensuring data quality, data security, and optimal performance of those pipelines. A new engineer would mostly be developing reusable data processing and storage frameworks that can be used across the data platform.

Hadoop data engineering: Hive, Oozie, YARN/MapReduce, Spark (must) with Scala (preferred) or Python, and strong SQL.
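To illustrate the kind of reusable processing framework the role describes, here is a minimal, hypothetical sketch in plain Python. The real pipelines here run on Spark; the `Step`/`run_pipeline` names and the sample fields are illustrative only, not part of any actual Telstra framework.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Step:
    """A named, composable pipeline stage (hypothetical framework piece)."""
    name: str
    fn: Callable[[Iterable[dict]], Iterable[dict]]

def run_pipeline(records, steps):
    """Apply each step in order, materialising the output of each stage."""
    for step in steps:
        records = list(step.fn(records))
    return records

# Two example steps: drop rows missing a key field, then normalise casing.
drop_nulls = Step("drop_nulls", lambda rows: (r for r in rows if r.get("id") is not None))
upper_city = Step("upper_city", lambda rows: ({**r, "city": r["city"].upper()} for r in rows))

rows = [{"id": 1, "city": "bangalore"}, {"id": None, "city": "pune"}]
clean = run_pipeline(rows, [drop_nulls, upper_city])
```

In a Spark setting the same idea would compose DataFrame transformations instead of Python generators, but the framework shape (small named steps chained by a runner) is the same.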

Job Location – Bangalore

Key Responsibilities

The Data Engineer Senior Specialist role coordinates and executes all activities related to requirements interpretation, design, and implementation of data analytics applications. This individual will apply proven industry and technology experience, as well as communication skills, problem-solving skills, and knowledge of best practices, to issues related to the design, development, and deployment of mission-critical systems, with a focus on quality application development and delivery.

This role is key to the success of the Data Engineering capability at Telstra and will be responsible and accountable for the following:

  • Lead the design, development, and maintenance of data pipelines using Spark, Python, Scala, and related technologies.
  • Work with high volume data and ensure data quality and accuracy.
  • Implement data security and privacy best practices to protect sensitive data.
  • Collaborate with data scientists and business stakeholders to understand data needs and requirements.
  • Develop and maintain documentation on data pipeline architecture, data models, and data workflows.
  • Mentor and provide technical guidance to junior team members.
  • Monitor and troubleshoot data pipelines to ensure they are performing optimally.
  • Stay up to date with the latest developments in Azure, AWS, Spark, Python, Scala, and related technologies and apply them to solve business problems.
  • Optimize data pipelines for cost and performance.
  • Automate data processing tasks and workflows to reduce manual intervention.
  • Work effectively in Agile feature teams.
  • Provide training and educate other team members on core capabilities, helping them deliver high-quality solutions and deliverables/documentation.
  • Self-motivated: design and develop against user requirements, then test and deploy the changes into production.

Technical Skills

  • Hands-on experience with Spark Core, Spark SQL, and SQL/Hive/Impala.
  • Data engineering expertise on Azure cloud using Databricks, Kinesis/Azure Event Hubs, Flume/Kafka/Spark Streaming, and Azure Data Factory.
  • Exposure to the Hadoop ecosystem (HDP/Cloudera/MapR/EMR, etc.)
  • Experience working with file formats (Parquet/ORC/Avro/Delta/Hudi, etc.)
  • Experience with high volume data processing and data streaming technologies
  • Experience using orchestration tools such as Control-M, Azure Data Factory, Airflow, or Luigi to schedule jobs.
  • Demonstrated experience leading data engineering projects and mentoring junior team members.
  • Strong experience in data modelling, schema design, and ETL development using SQL and related technologies.
  • Familiarity with data security and privacy best practices
  • Good exposure to TDD (test-driven development)
  • Exposure to CI tooling such as Git, Bitbucket, GitHub, GitLab, and Azure DevOps
  • Exposure to CD tools such as Jenkins, Bamboo, and Azure DevOps
  • Exposure to observability tools such as Azure Monitor, Grafana, etc.
  • Prior experience building reusable frameworks, or working in a team that builds them
  • Good understanding of Data Architecture and design principles. (Delta/Kappa/Lambda architecture)
  • Exposure to Code Quality - Static and Dynamic code scans
  • Experience in designing solutions for multiple large data warehouses with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS and/or knowledge on NoSQL platforms.
  • Should be able to provide scalable and robust solution architecture depending on the business needs.
  • Propose best practices/standards.
  • Programming & databases: Java/Python/Scala, SQL stored procedures, multi-tenanted databases, Spark.
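The TDD exposure listed above can be illustrated with a minimal, hypothetical example: the test for a small transform is written first, and then just enough code is written to make it pass. The `dedupe_latest` function and its fields are invented for illustration.

```python
def dedupe_latest(rows):
    """Keep only the newest record per key, using the 'updated' field."""
    latest = {}
    for row in rows:
        key = row["id"]
        if key not in latest or row["updated"] > latest[key]["updated"]:
            latest[key] = row
    return sorted(latest.values(), key=lambda r: r["id"])

def test_dedupe_latest():
    # In TDD, this test exists before the function body above is written.
    rows = [
        {"id": 1, "updated": "2024-01-01"},
        {"id": 1, "updated": "2024-03-01"},
        {"id": 2, "updated": "2024-02-01"},
    ]
    assert dedupe_latest(rows) == [
        {"id": 1, "updated": "2024-03-01"},
        {"id": 2, "updated": "2024-02-01"},
    ]

test_dedupe_latest()
```

In practice such tests would live in a test suite (pytest or similar) and run in the CI tooling listed above on every commit.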

Who we're looking for

  • 9+ years of experience with Spark Core, Spark SQL, and SQL/Hive/Impala.
  • Strong coding skills in PySpark, Python, and SQL.
  • Exposure to the Hadoop ecosystem (HDP/Cloudera/MapR/EMR, etc.)
  • Experience working with file formats (Parquet/ORC/Avro/Delta/Hudi, etc.)
  • Experience with high volume data processing and data streaming technologies.
  • Experience with and knowledge of Azure data offerings (ADF, ADLS Gen2, Azure Databricks, Azure Synapse, Event Hubs, Cosmos DB, etc.) and Presto/Athena
  • Experience using orchestration tools such as Control-M
  • Strong experience in data modelling, schema design, and ETL development using SQL and related technologies.
  • Familiarity with data security and privacy best practices
  • Good exposure to TDD (test-driven development)
  • Exposure to CI tooling such as Git, Bitbucket, GitHub, GitLab, and Azure DevOps
  • Exposure to CD tools such as Jenkins, Bamboo, and Azure DevOps
  • Cloud exposure (Hadoop)
  • Exposure to working with Power BI
  • Prior experience building reusable frameworks, or working in a team that builds them
  • Good understanding of data architecture and design principles (Delta/Kappa/Lambda architecture)
  • Exposure to code quality - static and dynamic code scans
  • Good knowledge of NoSQL databases: HBase, MongoDB, Cassandra, Cosmos DB
  • Good knowledge of graph databases (Neo4j)
  • Experience with enterprise data management, Datawarehouse, data modelling, Business Intelligence, data integration. 
  • Expertise in SQL and stored procedures.
  • Experience in designing solutions for multiple large data warehouses with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS and/or knowledge on NoSQL platforms. 
  • Experience working with Azure SQL Data Warehouse (Synapse) and Azure Analysis Services, or with Redshift
  • Propose best practices/standards
  • Translate, load, and present disparate datasets in multiple formats/sources, including JSON, XML, etc.
  • Should be able to provide scalable and robust solution architecture depending on the business needs.
  • Should be able to compare tools and technologies and recommend a tool or technology
  • Should be well versed in the overall IT landscape and technologies, and able to analyse how different technologies integrate with each other.
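The requirement to translate and load disparate JSON and XML sources can be sketched with the Python standard library alone. The record shape (`id`, `name`) and both sample payloads are invented for illustration; real sources would of course have their own schemas.

```python
import json
import xml.etree.ElementTree as ET

def from_json(text):
    """Normalise a JSON array of objects into the common record shape."""
    return [{"id": r["id"], "name": r["name"]} for r in json.loads(text)]

def from_xml(text):
    """Normalise <record> elements into the same common record shape."""
    root = ET.fromstring(text)
    return [
        {"id": int(el.get("id")), "name": el.findtext("name")}
        for el in root.findall("record")
    ]

# Both sources land in one uniform list of dicts, ready for downstream load.
records = from_json('[{"id": 1, "name": "alpha"}]') + from_xml(
    '<records><record id="2"><name>beta</name></record></records>'
)
```

At production scale the same normalisation would typically be expressed as Spark readers (`spark.read.json`, an XML source library) writing to a common table, but the translate-to-one-shape idea is identical.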

Call to action

If you're excited about the opportunity to be part of a team committed to delivering amazing experiences to our customers, this could be the role for you!

___________________________

When you join our team, you become part of a welcoming and inclusive community where everyone is respected, valued and celebrated. We actively seek individuals from various backgrounds, ethnicities, genders and abilities because we know that diversity not only strengthens our team but also enriches our work. We have zero tolerance for harassment of any kind, and we prioritise creating a workplace culture where everyone is safe and can thrive. We work flexibly at Telstra. Talk to us about what flexibility means to you. When you apply, you can share your pronouns and / or any reasonable adjustments needed to take part equitably during the recruitment process.  




Region: Asia/Pacific
Country: India