BigData DevOps Engineer

Hyderabad, India

Applications have closed

Experian


Company Description

Experian is the world’s leading global information services company. During life’s big moments — from buying a home or a car to sending a child to college to growing a business by connecting with new customers — we empower consumers and our clients to manage their data with confidence. We help individuals to take financial control and access financial services, businesses to make smarter decisions and thrive, lenders to lend more responsibly, and organizations to prevent identity fraud and crime.

We have 17,800 people operating across 44 countries, and every day we’re investing in new technologies, talented people and innovation to help all our clients maximize every opportunity. We are listed on the London Stock Exchange (EXPN) and are a constituent of the FTSE 100 Index.

Learn more at www.experianplc.com or visit our global news blog for the latest news and insights from the Group.


Job Description

 

As a key aide to both the IT Infrastructure and Development teams, you will help support existing systems 24x7 and be responsible for administering our current Big Data environments. You will manage Big Data cluster environments and work with teammates to maintain, optimize, develop, and integrate working solutions for our big data tech stack, supporting the product development process in line with the product roadmap for maintenance and enhancement, so that the quality of our software deliverables sustains excellent customer relationships and grows the customer base.

  

If you have the skills and a “can do” attitude, we would love to talk to you!

 

What you’ll be doing  

  • Implement and perform ongoing administration of the Hadoop infrastructure

  • Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments

  • Apply expert knowledge in delivering Cloudera Big Data solutions in the cloud on AWS

  • Deliver innovative CI/CD solutions using a cutting-edge technology stack

  • Automate the deployment, build, and configuration of infrastructure and Big Data technologies using DevOps tools

  • Administer Cloudera clusters hands-on, day to day, using Cloudera Manager, Cloudera Director, and Cloudera Navigator

  • Work with data delivery teams to set up new Hadoop users, including creating Linux accounts, setting up Kerberos principals, and testing HDFS, Hive, HBase, and YARN access for the new users (a minimal sketch of this workflow follows this list)

  • Maintain clusters, including the creation and removal of nodes, using tools such as Cloudera Manager

  • Tune the performance of Hadoop clusters and Hadoop MapReduce routines

  • Screen Hadoop cluster job performance and carry out capacity planning

  • Monitor Hadoop cluster connectivity and security 

  • Manage and review Hadoop log files; manage and monitor the file system

  • Support and maintain HDFS

  • Team diligently with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability

  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required

  • Bring general operational expertise: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks

  • Most essential: deploy a Hadoop cluster; add and remove nodes; keep track of jobs; monitor the critical parts of the cluster; configure NameNode high availability (routine HA checks are sketched after this list); schedule and configure jobs; and take backups

  • Demonstrate a solid understanding of on-premises and cloud network architectures

  • Apply additional Hadoop-ecosystem skills such as Sentry, Spark, Kafka, and Oozie

  • Integrate AD/LDAP security with Cloudera at an advanced level, including Sentry and ACL configurations

  • Configure and support API and open-source integrations

  • Work in a DevOps environment, developing solutions with tools such as Ansible

  • Collaborate and communicate with all levels of technical and senior business management

  • Provide 24x7 on-call support of production systems on a rotation basis with other team members

  • Proactively evaluate evolving technologies and recommend solutions to business problems
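
For illustration only, here is a minimal sketch of the user-onboarding workflow referenced above, assuming a Kerberized Cloudera cluster. The username, realm, and HiveServer2 host are hypothetical placeholders, not details of this role:

    #!/usr/bin/env bash
    # Hypothetical sketch: onboard a new Hadoop user on a Kerberized
    # cluster. The username, realm, and host below are placeholders.
    set -euo pipefail

    NEW_USER="jdoe"        # hypothetical data-delivery user
    REALM="EXAMPLE.COM"    # hypothetical Kerberos realm

    # 1. Create the Linux account on the gateway/edge node.
    sudo useradd -m "${NEW_USER}"

    # 2. Create the matching Kerberos principal (prompts for a password).
    sudo kadmin.local -q "addprinc ${NEW_USER}@${REALM}"

    # 3. Provision an HDFS home directory owned by the new user
    #    (run with HDFS superuser rights).
    sudo -u hdfs hdfs dfs -mkdir -p "/user/${NEW_USER}"
    sudo -u hdfs hdfs dfs -chown "${NEW_USER}:${NEW_USER}" "/user/${NEW_USER}"

    # 4. Smoke tests, run as the new user: get a ticket, then check
    #    HDFS, YARN, and Hive access.
    kinit "${NEW_USER}@${REALM}"
    hdfs dfs -ls "/user/${NEW_USER}"
    yarn application -list
    beeline -u "jdbc:hive2://hs2.example.com:10000/default;principal=hive/_HOST@${REALM}" -e "show databases;"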
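
Likewise, a sketch of the routine NameNode high-availability and file-system health checks mentioned above; the NameNode IDs nn1 and nn2 stand in for whatever is defined in the cluster's hdfs-site.xml:

    hdfs haadmin -getServiceState nn1   # expect "active" or "standby"
    hdfs haadmin -getServiceState nn2
    hdfs dfsadmin -report | head -n 20  # capacity and live-node snapshot
    hdfs fsck /                         # file-system health summary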

 

Qualifications

  • Typically requires a bachelor's degree (in Computer Science or related field) or equivalent. 

  • 3+ years of Linux (Redhat) system administration 

  • 3+ years of Hadoop infrastructure administration 

  • Cloud platforms (IaaS/PaaS): AWS, Azure, VMware

  • Kerberos administration skills 

  • Experience with Cloudera distribution 

  • Good to have: knowledge of open-source configuration management and deployment tools such as Puppet or Chef, plus shell scripting

  • Must have knowledge of DevOps tools such as Ansible and HAProxy, including automating deployment and monitoring/alerting tasks with Ansible (a minimal sketch follows this list)

  • Working knowledge of Terraform is an advantage

  • Working knowledge of YARN, HBase, Hive, Spark, Flume, Kafka, etc.

  • Strong problem-solving and creative-thinking skills

  • Effective oral and written communication skills

  • Experience working with geographically distributed teams 

  • Bachelor's or master's degree in Computer Science, or equivalent experience

  • Knowledge and understanding of the business strategy and use of back-office applications. 

  • Ability to adapt to a multilingual and multicultural environment; additional language skills are a bonus.

  • Ability to handle conflicting priorities. 

  • Ability to learn. 

  • Adaptability. 

  • Receptive to change. 

  • Ability to communicate with business users at all levels 

  • Analytical skills 

  • Self-motivated and proactive
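
As a concrete (and purely illustrative) example of the Ansible automation mentioned above, assuming a hypothetical inventory file, host group, and playbook name:

    # Ad-hoc check: confirm every worker node is reachable.
    ansible hadoop_workers -i inventory.ini -m ping

    # Dry-run an (assumed) patching playbook before applying it.
    ansible-playbook -i inventory.ini patch_hadoop_workers.yml --check --diff

    # Apply for real, limited to the worker group.
    ansible-playbook -i inventory.ini patch_hadoop_workers.yml --limit hadoop_workers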

Additional Information

Experian Careers - Creating a better tomorrow together

Find out what it's like to work for Experian.



Region: Asia/Pacific
Country: India
