Big Data Engineer (Pyspark, Python and SQL)

Telstra ICC Bengaluru


Employment Type

Permanent

Closing Date

29 Apr 2024 11:59pm

Job Title

Big Data Engineer (Pyspark, Python and SQL)

Job Summary

As a Data Engineering Analyst, you create and provide access to high-quality, reliable data solutions. In collaboration with your colleagues, you develop and deliver best-practice data solutions and pipelines. You are known for the integrity and accuracy of data that enables quality data-driven business decisions and equips Telstra to deliver better customer and business outcomes. In a DevOps model, you will develop data pipelines using Continuous Integration / Continuous Deployment (CI/CD) techniques.

Job Description

About Telstra

We're Australia's leading telecommunications and technology company, with a strong global footprint and a presence in more than 22 countries. Our purpose is to build a connected future so everyone can thrive. We're all about providing the best experience and delivering the best tech on the best network. This includes making Telstra the place you want to work.

We offer a full range of services, compete in all telecommunications markets throughout Australia, and are the most well-known brand in the technology and communications industry.

We have operations in more than 20 countries, including in India. In India we are a licensed Telecom Service provider (TSP) and have extended our global networks into India with offices in Bangalore, Mumbai and Delhi. We’ve opened an Innovation and Capability Centre (ICC) in Bangalore and have a presence in Pune and Hyderabad. In India, we’ve set out to build a platform for innovative delivery and engagement that will strengthen our position as an industry leader. We’re combining innovation, automation and technology to solve the world’s biggest technological challenges in areas such as Internet of Things (IoT), 5G, Artificial Intelligence (AI), Machine Learning, and more.

Here’s what you can expect from us

  • Hybrid way of work, which will allow us to enjoy the benefits of both remote and in-office collaboration. This means that we will have more flexibility, autonomy, and diversity in our work environment, while also maintaining the connection, culture, and creativity that we value as a team. We believe that this is the best way to support our employees' well-being, productivity, and innovation in the post-pandemic world.
  • Flexible working. Choose when and how you work so you can be at your best.
  • Maternity Leave. Up to 26 weeks provided to the birth mother, with benefits for all childbirths.
  • (Women in Tech) Initiative to promote women in tech. We believe that diversity and inclusion are essential for innovation and growth, and we want to support and empower more women to pursue careers in STEM fields.
  • Pay for performance. We recognize outstanding contributions through our competitive incentive programs.
  • Insurance benefits. Receive generous insurance benefits such as medical, accidental and life insurance.
  • Unlimited learning. Level up your skills with access to 17,000 learning programs. Learn ‘on the job’ and achieve university credits towards degrees and master’s programs.
  • Global presence. With a global presence across 22 countries, there are many opportunities to work where we do business.
Function overview

Make a difference as part of Product and Technology. Your mission will be simple: capture market value at scale by building products our customers love that are simple to experience, seamless to deliver, and profitable to the core.

Work Location: Bangalore and Hyderabad

What you'll do

Being part of Data Engineering means you'll be part of a team that focuses on extending our network superiority to enable the continued execution of our digital strategy. With us, you'll be working with world-leading technology and change the way we do IT to ensure business needs drive priorities, accelerating our digitisation programme. 

We are seeking a highly skilled Data Engineer with expertise in Spark, Python, and Scala. The successful candidate will be responsible for designing, developing, and maintaining data pipelines using Spark, Python, Scala, and related technologies, and for ensuring data quality, data security, and optimal performance of those pipelines. A new engineer will focus largely on developing reusable data processing and storage frameworks that can be used across the data platform. Core Hadoop data engineering skills are expected: Hive, Oozie, YARN/MapReduce, and Spark (must-have) with Scala (preferred) or Python, plus strong SQL.

Key Responsibilities

  • Design, develop, and maintain data pipelines using Spark, Python, Scala, and related technologies on Hadoop/Cloudera Platform
  • Work with high volume data and ensure data quality and accuracy.
  • Implement data security and privacy best practices to protect sensitive data.
  • Develop and maintain documentation on data pipeline architecture, data models, and data workflows.
  • Monitor and troubleshoot data pipelines to ensure they are performing optimally.
  • Stay up to date with the latest developments in Azure, AWS, Spark, Python, Scala, and related technologies and apply them to solve business problems.
  • Optimize data pipelines for cost and performance.
  • Automate data processing tasks and workflows to reduce manual intervention.
  • Ability to work in Agile Feature teams.
  • Provide training and educate other team members on core capabilities, helping them deliver high-quality solutions, deliverables, and documentation.
  • Be a self-motivated engineer who can design and develop against user requirements, then test and deploy changes into production.
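The data-quality responsibility above can be illustrated with a small, stdlib-only sketch. The record fields and the validation rule are hypothetical examples, not a Telstra schema; in a real pipeline this kind of gate would typically run as a PySpark transformation over a DataFrame before writing to a curated zone.

```python
"""Illustrative data-quality gate for a batch pipeline step (stdlib only)."""
from dataclasses import dataclass


@dataclass
class QualityReport:
    """Simple pass/fail metrics emitted by a pipeline stage."""
    total: int
    passed: int
    failed: int


def validate_records(records, required_fields):
    """Split records into clean and rejected sets based on required fields."""
    clean, rejected = [], []
    for rec in records:
        # A record passes only if every required field is present and non-empty.
        if all(rec.get(f) not in (None, "") for f in required_fields):
            clean.append(rec)
        else:
            rejected.append(rec)
    report = QualityReport(len(records), len(clean), len(rejected))
    return clean, rejected, report


if __name__ == "__main__":
    rows = [
        {"msisdn": "61400000001", "usage_mb": 512},
        {"msisdn": None, "usage_mb": 128},          # fails: missing key field
        {"msisdn": "61400000002", "usage_mb": ""},  # fails: empty value
    ]
    clean, rejected, report = validate_records(rows, ["msisdn", "usage_mb"])
    print(report)  # QualityReport(total=3, passed=1, failed=2)
```

Keeping the rejected rows (rather than silently dropping them) is what makes the monitoring and troubleshooting responsibilities practical: the rejects can be landed in a quarantine table for inspection.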

Who we're looking for

  • 4-7 years of experience with Spark Core, Spark SQL, and SQL/Hive/Impala
  • Exposure to the Hadoop ecosystem (HDP, Cloudera, MapR, EMR, etc.)
  • Experience working with file formats (Parquet, ORC, Avro, Delta, Hudi, etc.)
  • Experience with high-volume data processing and data streaming technologies.
  • Experience with and knowledge of Azure data offerings (ADF, ADLS Gen2, Azure Databricks, Azure Synapse, Event Hubs, Cosmos DB, etc.) and Presto/Athena
  • Experience using orchestration tools such as Control-M
  • Strong experience in data modelling, schema design, and ETL development using SQL and related technologies. 
  • Familiarity with data security and privacy best practices 
  • Good exposure to TDD
  • Exposure to version control and CI tools such as Git, Bitbucket, GitHub, GitLab, and Azure DevOps
  • Exposure to CD tools such as Jenkins, Bamboo, and Azure DevOps
  • Cloud exposure (Hadoop)
  • Exposure to Power BI
  • Prior experience building reusable frameworks, or working in a team that builds them
  • Good understanding of data architecture and design principles (Delta, Kappa, and Lambda architectures)
  • Exposure to code quality practices: static and dynamic code scans
  • Good knowledge of NoSQL databases (HBase, MongoDB, Cassandra, Cosmos DB)
  • Good knowledge of graph databases (Neo4j)
  • Experience with enterprise data management, data warehousing, data modelling, business intelligence, and data integration.
  • Expertise in SQL and stored procedures.
  • Experience in designing solutions for multiple large data warehouses with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS and/or knowledge on NoSQL platforms. 
  • Experience working with Azure Synapse (formerly Azure SQL Data Warehouse) and Azure Analysis Services, or with Redshift
  • Ability to propose best practices and standards
  • Translate, load, and present disparate datasets in multiple formats/sources, including JSON, XML, etc.
  • Able to provide scalable and robust solution architectures depending on the business needs.
  • Able to compare tools and technologies and recommend a tool or technology
  • Well versed in the overall IT landscape and able to analyse how different technologies integrate with each other
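The requirement to translate and load disparate datasets (JSON, XML, etc.) into a common shape can be sketched with the Python standard library alone. All field names and sample payloads below are illustrative assumptions, not a real schema.

```python
"""Hypothetical sketch: normalising JSON and XML sources into one record shape."""
import json
import xml.etree.ElementTree as ET


def from_json(payload):
    """Parse a JSON array of customer records into the common shape."""
    return [
        {"customer_id": rec["id"], "plan": rec["plan"]}
        for rec in json.loads(payload)
    ]


def from_xml(payload):
    """Parse an XML document of <customer> elements into the same shape."""
    root = ET.fromstring(payload)
    return [
        {"customer_id": cust.get("id"), "plan": cust.findtext("plan")}
        for cust in root.iter("customer")
    ]


if __name__ == "__main__":
    json_src = '[{"id": "c1", "plan": "5G Unlimited"}]'
    xml_src = (
        '<customers><customer id="c2">'
        "<plan>NBN 100</plan></customer></customers>"
    )
    # Both sources collapse into the same downstream-friendly record shape.
    records = from_json(json_src) + from_xml(xml_src)
    print(records)
```

At pipeline scale the same idea applies, with the per-format parsers feeding a shared target schema so downstream consumers never see the source formats.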

Call to action

If you're excited about the opportunity to be part of a team committed to delivering amazing experiences to our customers, this could be the role for you!

___________________________

We’re committed to building a diverse and inclusive workforce in all its forms. We encourage applicants from diverse gender, cultural and linguistic backgrounds and applicants who may be living with a disability. We also offer flexibility in all our roles, to ensure everyone can participate.

To learn more about how we support our people, including accessibility adjustments we can provide you through the recruitment process, visit www.telstra.com.au/careers/diversity-and-inclusion.


Region: Asia/Pacific
Country: India