Senior Data Architect

Gurgaon, Haryana, India

Applications have closed

Srijan Technologies


Location: Gurgaon, Haryana, India

About Material

Material is a global strategy, insights, design, and technology partner to companies striving for true customer centricity and ongoing relevance in a digital-first, customer-led world. By leveraging proprietary, science-based tools that enable human understanding, we inform and create customer-centric business models and experiences, and deploy measurement systems, to build transformational relationships between businesses and the people they serve.

About Srijan

Srijan is a global engineering firm that builds transformative digital paths to better futures for organizations ranging from Fortune 500 enterprises to nonprofits all over the world. Srijan brings advanced engineering capabilities and agile practices to some of the biggest names across FMCG, Aviation, Telecom, Technology, and other sectors. We help businesses embrace the digital future with cloud, data, API, and platform-centric technologies and adapt to changing business models and market demands. Srijan leads in Drupal with 350+ Drupal engineers and 80+ Acquia certifications, and is also a Drupal Enterprise Partner and Diamond Certified partner.

 

What you will get:

  1. Competitive salaries with flexi benefits
  2. Group mediclaim insurance and a personal accident policy
  3. 30+ paid leaves per year
  4. Quarterly Learning and Development budgets for certifications

Note: By applying to this position you will have the opportunity to work from your preferred hybrid working location from the following: Bengaluru, Delhi, Gurgaon, Kolkata, Chennai, Hyderabad, Pune, Indore, Jaipur, and Ahmedabad.

 

Job description

  • Experience in service architecture, development, high performance, and scalability.
  • Experience in Spark and SQL performance tuning and optimization.
  • Experience with architectural design and development of large-scale data platforms and data applications.
  • Good hands-on experience with AWS.
  • In-depth understanding of the Spark and Hive frameworks and their internal architecture.
  • Strong programming background in Java or Python.
  • Practical exposure to the end-to-end design and implementation of near-real-time and batch data pipelines.
  • Strong SQL (Hive/Spark) skills and experience tuning complex queries (a worked example follows this list).
  • Excellent understanding of AWS storage and compute services, and the ability to use AWS managed services effectively: Step Functions, EMR, Lambda, Glue, and Athena.
  • Hands-on experience with data lake and ETL pipeline development.
  • Expertise in designing and building a new cloud data platform and optimizing it at the organization level.
  • Hands-on experience with big data technologies (Hadoop, Sqoop, Hive, and Spark), including DevOps.
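To make the query-tuning expectation concrete, here is a minimal PySpark sketch of two common optimizations, a broadcast join and partition pruning; the table and column names (sales_fact, store_dim, sale_date, store_id, amount, region) are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

fact = spark.table("sales_fact")  # hypothetical large fact table, partitioned by sale_date
dim = spark.table("store_dim")    # hypothetical small dimension table

result = (
    fact
    .where(F.col("sale_date") >= "2024-01-01")  # partition pruning: filter on the partition column
    .join(F.broadcast(dim), "store_id")         # broadcast hint avoids shuffling the large table
    .groupBy("region")
    .agg(F.sum("amount").alias("total_amount"))
)
result.explain()  # inspect the physical plan to confirm a BroadcastHashJoin
```

Broadcasting the small dimension table avoids shuffling the large fact table, and filtering on the partition column lets Spark skip partitions that cannot match.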

Must Have:

  • 12-15 years of experience in big data technologies or data platform architecture, with deep technical expertise in Hive, HDFS, Spark, Kafka, Java, Scala, Python, PySpark, etc. (a streaming example follows this list).
  • Expertise in the design and management of complex data structures and data processes such as ETL/ELT.
  • Expertise in managing and operating distributed big data systems, including but not limited to the Hadoop ecosystem.
  • A deep understanding of issues in multiple areas such as data acquisition and processing, data management, distributed processing, and high availability is required.
  • Knowledge of Teradata.
  • Strong experience designing AWS data lakes and developing the surrounding ecosystem.
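As a sketch of the near-real-time pipeline work implied above, the following minimal PySpark Structured Streaming job reads from Kafka and writes Parquet micro-batches to S3. The broker address, topic name, and bucket paths are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("nrt-sketch").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")  # requires the spark-sql-kafka connector package
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker address
    .option("subscribe", "events")                     # hypothetical topic name
    .load()
)

# Kafka delivers the payload as binary; cast it to string before parsing.
events = raw.select(F.col("value").cast("string").alias("payload"))

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/events/")                     # placeholder sink
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/")  # needed for fault tolerance
    .trigger(processingTime="1 minute")  # micro-batch cadence
    .start()
)
query.awaitTermination()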

Good to have:

  • Understanding of Amazon SageMaker and ML algorithms.
  • Experience migrating workloads from on-premises to the cloud, as well as cloud-to-cloud migrations.
  • Experience with AWS services such as RDS, DynamoDB, and Redshift (a DynamoDB example follows this list).
  • Ability to drive the deployment of customers' workloads into AWS and provide guidance, a cloud adoption model, service integrations, recommendations to overcome blockers, and technical roadmaps for AWS cloud implementations.
  • Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies.
  • Ability to act as a subject-matter expert or developer on AWS and become a trusted advisor to multiple teams.
  • Ability to coach and mentor engineers to raise the team's technical ability and/or to earn required AWS certifications.
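For the AWS services named above, here is a minimal boto3 sketch that writes and reads a DynamoDB item. The region, table name ("events"), and partition key ("event_id") are assumptions, and the table is assumed to already exist.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")  # assumed region
table = dynamodb.Table("events")  # hypothetical pre-existing table

# Write a single item; attribute names and values are placeholders.
table.put_item(Item={"event_id": "evt-001", "source": "web", "amount": 42})

# Read it back by partition key.
response = table.get_item(Key={"event_id": "evt-001"})
print(response.get("Item"))
```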

 
