PySpark & Databricks Certified Developer

Mumbai, MH, India

PradeepIT Consulting Services Pvt Ltd

PradeepIT, supported by Asia's largest tech professional network, is revolutionizing global talent acquisition. Discover the potential of hiring top Asian tech talent at ten times the speed, starting today!


About the job

Accelerate your career with PradeepIT

PradeepIT is one of the largest globally recognized IT consulting firms, connecting India's deeply vetted talent to global customers.

We're headquartered in Bengaluru, the Silicon Valley of India. PradeepIT's customers include SAP Labs, Bosch, Rolls-Royce, Daikin, Daimler, and J&J, along with hundreds of other Fortune 500 companies and fast-growing startups.

Through continuous hard work, and working remotely by choice, PradeepIT is certified as a Great Place to Work! Trusted by leading brands and Fortune 500 companies from around the world, we have achieved:

6+ years of experience

580+ open-source technology consultants

120+ SAP consultants

40+ Salesforce consultants

60+ Adobe consultants

100+ mobility consultants

890+ clients across APAC, EMEA & the USA

Our Beliefs

PradeepIT believes in connecting people across the globe and giving them the opportunity to work remotely. As a people-first organization, PradeepIT constantly seeks individuals who won't just keep up but will break new ground, work with cutting-edge technology, and ramp up their skills through courses created by our vertical heads and senior architects, available for free through PradeepIT Academy.

Responsibilities

  • Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks
  • Developing scalable, reusable frameworks for ingesting data sets
  • Integrating the end-to-end data pipeline, taking data from source systems to target data repositories while ensuring data quality and consistency are maintained at all times
  • Working with event-based/streaming technologies to ingest and process data
  • Working with other members of the project team to support delivery of additional project components (API interfaces, search)
  • Evaluating the performance and applicability of multiple tools against customer requirements
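The responsibilities above center on reusable, config-driven ingestion with a built-in data-quality gate. As an illustrative sketch only (plain-Python stand-ins replace the actual Databricks/PySpark calls such as `spark.read` and `df.write`, and all names here — `check_quality`, `ingest_batch` — are hypothetical, not part of any real framework):

```python
# Sketch of a reusable ingestion step with a quality gate, mirroring the
# responsibilities above. Plain Python stands in for PySpark/Databricks APIs;
# every name in this example is hypothetical.

def check_quality(rows, required_fields):
    """Keep only rows that carry every required field (non-null)."""
    return [r for r in rows if all(r.get(f) is not None for f in required_fields)]

def ingest_batch(source_rows, config):
    """One config-driven ingestion step: validate, then 'write' to the target.

    In Databricks this would read from the configured source, apply the
    quality gate, and write a Delta table; here the 'target' is just a list.
    """
    clean = check_quality(source_rows, config["required_fields"])
    target = list(clean)  # stand-in for df.write.saveAsTable(config["target"])
    return {"read": len(source_rows), "written": len(target), "rows": target}

# Example batch: two rows pass the gate, one fails (missing 'id').
batch = [
    {"id": 1, "event": "click"},
    {"id": None, "event": "view"},
    {"id": 3, "event": "click"},
]
result = ingest_batch(batch, {"required_fields": ["id", "event"], "target": "events"})
print(result["read"], result["written"])  # 3 rows read, 2 written
```

The same shape scales to real pipelines: the config names the source, target, and required fields, so the one ingestion function can be reused across data sets, as the "scalable, reusable frameworks" bullet describes.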



Requirements

  • Bachelor's degree and at least one year of experience designing, developing, deploying and/or supporting data pipelines using Databricks
  • Expertise in designing and deploying data applications on cloud solutions, such as Azure or AWS
  • Hands on experience in performance tuning and optimizing code running in Databricks environment
  • Proficient in programming languages like PySpark and Python
  • Good understanding of SQL, T-SQL and/or PL/SQL
  • Demonstrated analytical and problem-solving skills particularly those that apply to a big data environment
  • Willingness to work on-site or remote, as needed


Category: Engineering Jobs

Tags: APIs AWS Azure Big Data Consulting Consulting firm Databricks Data pipelines Open Source Pipelines PySpark Python Salesforce SQL Streaming T-SQL

Region: Asia/Pacific
Country: India
