Technical Architect Databricks

Gurugram, Haryana, India

Applications have closed

Srijan Technologies

Srijan is a digital experience services company that helps organizations, from global Fortune 500s to nonprofits, build transformative digital paths to a better future.



Company Description 

Material is a modern marketing services firm powered by sophisticated analytics, deep human understanding, and a specialized type of creativity: design thinking. Headquartered in LA, the company has grown consistently for nearly 50 years to 1200+ employees in 20 offices, including NY, Chicago, San Francisco, Austin, and London. Material has actively acquired a portfolio of businesses to build a unique, integrated marketing services business with key capabilities in analytics, intelligence, and experience for an array of top-tier clients in entertainment, tech, retail, healthcare, and packaged goods.

We believe people are people, not data points. We value diverse and inclusive problem-solving. Data and creativity are complementary and equally essential. Breaking down silos improves efficiency and creativity. Combining people, insights, and ideas uncovers hidden truths. Putting humans at the center of everything we do creates opportunities to improve lives.

Material is a global strategy, insights, design, and technology partner to companies striving for true customer centricity and ongoing relevance in a digital-first, customer-led world. By leveraging proprietary, science-based tools that enable human understanding, we inform and create customer-centric business models, experiences, and measurement systems to build transformational relationships between businesses and the people they serve.

 

Data Architect

We are seeking a highly skilled and experienced Data Architect with a strong background in the retail domain and exceptional programming abilities. As a Data Architect, you will play a pivotal role in designing, implementing, and optimizing data architecture to support our retail business operations and analytics initiatives. Your expertise in Spark programming, optimization techniques, and familiarity with Databricks and CI/CD practices will be instrumental in ensuring the efficient and effective management of our data ecosystem.

 

Responsibilities:

  • Collaborate with cross-functional teams to understand business requirements and translate them into scalable data architecture solutions.
  • Design and develop data models, data integration processes, and data pipelines to capture, transform, and load structured and unstructured data from various retail sources.
  • Develop and optimize data processing applications and analytics workflows through hands-on Spark programming.
  • Apply optimization techniques to enhance the performance and efficiency of data processing and analytical tasks.
  • Evaluate and implement appropriate tools and technologies, including Databricks, to streamline data operations and ensure scalability and reliability.
  • Work closely with data engineers to ensure data integrity, consistency, and accessibility across the organization.
  • Define and enforce best practices for data governance and data management, including data quality, metadata management, and data security.
  • Collaborate with DevOps teams to establish and maintain CI/CD pipelines for data engineering and analytics workflows.
  • Conduct regular performance monitoring and tuning of data systems to ensure optimal efficiency and stability.
  • Peer-review team members' deliverables.
  • Own change and release management.
  • Stay updated with the latest advancements and trends in the retail domain, data architecture, and programming languages to drive continuous improvement.

 

Requirements:

  • 8+ years of experience in the data engineering domain.

  • Proven experience as a Data Architect, preferably within the retail industry.

  • Strong programming skills with expertise in PySpark programming and optimization techniques.

  • Hands-on experience with Databricks and Delta Lake and their components for data processing and analytics.

  • Hands-on experience in data modelling, data integration, and ETL/ELT processes.

  • Experience working with GitLab pipelines and an in-depth understanding of CI/CD pipeline design.

  • Experience with data governance, data quality, and metadata management.

  • Strong analytical and problem-solving abilities with a detail-oriented mindset.

  • Excellent communication and collaboration skills to work effectively with cross-functional teams.

  • Ability to adapt to a fast-paced and evolving environment while managing multiple priorities.

  • Good to have: experience with at least one cloud vendor (AWS / Azure / GCP).





