Big Data Engineer

Bengaluru, Karnataka, IN, 560071

NetApp


About NetApp

We’re forward-thinking technology people with heart. We make our own rules, drive our own opportunities, and try to approach every challenge with fresh eyes. Of course, we can’t do it alone. We know when to ask for help, collaborate with others, and partner with smart people. We embrace diversity and openness because it’s in our DNA. We push limits and reward great ideas. What is your great idea?

"At NetApp, we fully embrace and advance a diverse, inclusive global workforce with a culture of belonging that leverages the backgrounds and perspectives of all employees, customers, partners, and communities to foster a higher performing organization." -George Kurian, CEO

Job Summary

As a Software Engineer at NetApp India’s R&D division, you will be responsible for the design, development and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. 
The Active IQ DataHub platform processes over 10 trillion data points per month, feeding a multi-Petabyte DataLake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark, and various NoSQL databases. This platform enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this "actionable intelligence."
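The posting doesn't spell out implementation details, but as a rough illustration of the Kafka-to-DataLake flow described above, a minimal Spark Structured Streaming job in Scala might look like the sketch below. The broker address, topic name, paths, and application name are placeholders, not details of the actual Active IQ platform.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TelemetryIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("telemetry-ingest")   // hypothetical application name
      .getOrCreate()

    // Read raw telemetry events from a Kafka topic (broker and topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka:9092")
      .option("subscribe", "telemetry-events")
      .option("startingOffsets", "latest")
      .load()

    // Kafka values arrive as bytes; cast to string and keep the event timestamp.
    val parsed = events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Land events in the data lake as Parquet, partitioned by ingest date, with a
    // checkpoint location so the query can recover from failures (fault tolerance).
    val query = parsed
      .withColumn("ingest_date", to_date(col("timestamp")))
      .writeStream
      .format("parquet")
      .option("path", "/datalake/raw/telemetry")               // placeholder path
      .option("checkpointLocation", "/checkpoints/telemetry")  // placeholder path
      .partitionBy("ingest_date")
      .start()

    query.awaitTermination()
  }
}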
You will be working closely with a team of senior software developers and a technical director. You will be responsible for contributing to the design, development, and testing of code. The software applications you build will be used by our internal product teams, partners, and customers.
We are looking for a hands-on lead engineer who is familiar with Spark and with Scala, Java, and/or Python. Any cloud experience is a plus. You should be passionate about learning, be creative, and have the ability to work with and mentor junior engineers.
 

Job Requirements

Your Responsibilities
•    Design and build our Big Data Platform, with an understanding of scale, performance, and fault tolerance
•    Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community
•    Identify the right tools to deliver product features by performing research, building POCs, and engaging with various open-source forums
•    Build and deploy products both on-premises and in the cloud
•    Work on technologies related to NoSQL, SQL, and in-memory databases
•    Develop and implement best-in-class monitoring processes to enable data applications to meet SLAs
•    Mentor junior engineers technically
•    Conduct code reviews to ensure code quality, consistency, and adherence to best practices

 

Our Ideal Candidate
•    You have a deep interest in and passion for technology
•    You love to code. An ideal candidate has a GitHub repo that demonstrates coding proficiency
•    You have strong problem-solving and excellent communication skills
•    You are self-driven and motivated, with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities

Education

•    5+ years of hands-on Big Data development experience
•    Demonstrated up-to-date expertise in data engineering and complex data pipeline development
•    Design, develop, implement, and tune distributed data processing pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built
•    Awareness of data governance (data quality, metadata management, security, etc.)
•    Experience with one or more of Python, Java, or Scala
•    Proven working expertise with Big Data technologies: Hadoop, HDFS, Hive, Spark/Scala, and SQL (see the illustrative sketch after this list)
•    Knowledge of and experience with Kafka, Storm, Druid, Cassandra, or Presto is an added advantage
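As a rough illustration of the Spark/Scala, Hive, and SQL experience listed above, here is a minimal batch-job sketch. The database, table, column names, and output path are hypothetical and not part of any actual NetApp data lake.

import org.apache.spark.sql.SparkSession

object DailyCapacityReport {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() lets Spark read tables registered in the Hive metastore.
    val spark = SparkSession.builder()
      .appName("daily-capacity-report")   // hypothetical job name
      .enableHiveSupport()
      .getOrCreate()

    // Aggregate a (hypothetical) Hive table of storage telemetry stored on HDFS.
    // Filtering on the partition column limits the scan to a single day of data.
    val report = spark.sql(
      """
        |SELECT system_id,
        |       AVG(used_capacity_pct) AS avg_used_pct,
        |       MAX(used_capacity_pct) AS peak_used_pct
        |FROM telemetry.capacity_samples
        |WHERE ingest_date = '2024-01-01'
        |GROUP BY system_id
      """.stripMargin)

    // Write the result back to the data lake as Parquet (output path is a placeholder).
    report.write.mode("overwrite").parquet("/datalake/reports/capacity/2024-01-01")

    spark.stop()
  }
}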
 

Did you know…
Statistics show women apply to jobs only when they’re 100% qualified. But no one is 100% qualified. We encourage you to shift the trend and apply anyway! We look forward to hearing from you.

Why NetApp?

In a world full of generalists, NetApp is a specialist. No one knows how to elevate the world’s biggest clouds like NetApp. We are data-driven and empowered to innovate. Trust, integrity, and teamwork all combine to make a difference for our customers, partners, and communities. 
 
We expect a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off per year to volunteer with their favorite organizations.  We provide comprehensive medical, dental, wellness, and vision plans for you and your family.  We offer educational assistance, legal services, and access to discounts. We also offer financial savings programs to help you plan for your future.  
 
If you run toward knowledge and problem-solving, join us. 



Tags: Agile Big Data Cassandra Data governance Data quality Engineering GitHub Hadoop HDFS Java Kafka Kubernetes Machine Learning NoSQL Open Source Pipelines Python R R&D Research Scala Security Spark SQL Statistics Testing

Perks/benefits: Career development Health care

Region: Asia/Pacific
Country: India
