Staff Site Reliability Engineer (Hadoop, SRE, DevOps, Big Data, 8+ yrs)

Bengaluru, India

Visa

Visa's digital and mobile payment network is at the forefront of the new payment technologies — digital, electronic, and contactless payments — that are shaping the world of money.


Company Description

Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid.

Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description

 

Single window support:

Leverage a deep understanding of Hadoop and its related tools, especially Hive, Spark, and HDFS, to perform complete root cause analysis (RCA), whether the issue is platform related or in user code/configuration.
System configuration:

Recommend necessary system changes to DAP platform engineering based on reviews of system activity and user logs during triage and troubleshooting.
Performance Tuning:

Direct team members on crafting efficient queries, leveraging expertise in performance tuning and optimization strategies for big data technologies.
Issue resolution across Tech teams:

Troubleshoot and resolve complex technical issues: identify root causes, determine which tech/data platform team can fix them, and coordinate among those teams.
Reliability engineering:

Create reports that define performance and resolution metrics to proactively identify issues and generate alerts.
Office hours and liaising:

Join calls across regions in multiple time zones to ensure timely client delivery.
Knowledge cataloging and sharing:

Share knowledge and cross-train peers across geographic regions using wikis and other communications. Communicate about issues/outages affecting multiple users.
Develop Standards:

Prepare standard configurations for a variety of VCA workloads so that jobs run with optimal settings, maintaining good cluster health while executing jobs efficiently.

Automate repetitive tasks to reduce manual effort and avoid human error.

Build automation and self-healing capabilities as required.
Continuous Learning of VCA workload:

Continuously learn and stay updated with the changing nature of data science jobs to help improve cluster utilization.

 

This is a hybrid position. Hybrid employees alternate time between remote work and the office. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.

Qualifications

Basic Qualifications
7+ years of relevant work experience with a Bachelor’s Degree or at least 2 years of work experience with an Advanced degree (e.g. Masters, MBA, JD, MD) or 0 years of work experience with a PhD, OR 8+ years of relevant work experience.

Preferred Qualifications
7 or more years of work experience with a Bachelor's Degree, or 4 or more years of relevant experience with an Advanced Degree (e.g. Masters, MBA, JD, MD), or up to 3 years of relevant experience with a PhD.
Strong development skills on data pipelines using PySpark, Hive, Airflow.
Strong troubleshooting and debugging skills.
Must have experience in tuning application performance on Hadoop platforms.
Hands on development experience in Python.
Hands on experience working as a Hadoop system engineer in managing Hadoop platforms.
Ability to solve complex production problems and debug code.
Experience working with scheduling tools (Airflow, Oozie) or building data processing orchestration workflows.
In-depth knowledge of the Hadoop ecosystem and architecture, including ZooKeeper, HDFS, YARN, Hive, and Spark.
Understanding of security tools like Kerberos and Ranger.
Hands-on experience in debugging Hadoop issues both on platform and applications.
Understanding of Linux, networking, CPU, memory, and storage.
Knowledge on AI/ML is a plus.
Excellent written and verbal communication skills are a must.
Enjoys working fast and smart, with the ability to grasp complex concepts and functionalities.
Good understanding of agile working practices and related program management skills.
Good communication and presentation skills, with the ability to interact with cross-functional team members at varying levels.

Additional Information

Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.


