SDE 1 / SDE 2 - Data Engineer

Ahmedabad

Applications have closed

Fynd

Founded in 2012, Fynd is a multiplatform tech company specializing in retail-tech solutions, empowering 2,300+ brands. Explore our work in Commerce, AI, Big Data, Payments, Gaming, NFT, and Learning.


Fynd is India's largest omnichannel commerce platform, helping retail businesses accelerate growth. Founded by Farooq Adam, Harsh Shah, and Sreeraman MG in 2012, the company is headquartered in Mumbai and currently employs 350+ people across design, engineering, data science, operations, and sales. It is trusted by over 600 brands and 10,000 stores.

Responsibilities

  • Develop and maintain efficient, scalable data pipelines handling 1M+ events per second
  • Leverage the right tool for the job to deliver testable, maintainable, and modern data solutions
  • Build the solutions required to extract, transform, and load data from a wide variety of sources using current, stable technologies (a minimal sketch of such a pipeline follows this list)
  • Design and evaluate open-source and vendor tools for data lineage
  • Personally design and write code for scalable, data-oriented systems
  • Design data integrations and data quality frameworks
  • Ensure product quality by enforcing testing standards, measuring release defect rates, and leading other quality initiatives
  • Perform the data analysis required to troubleshoot and resolve data-related issues
  • Collaborate with other team members toward shared product goals, and communicate team performance through shared metrics
  • Utilize sound engineering practices to deliver functional, stable, and scalable solutions to new or existing problems
  • Work in a fast-paced, flexible, and fun environment with a talented, diverse, and forward-thinking team
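
To give a flavor of the pipeline work described above, here is a minimal sketch of one streaming ETL step in Python: consume events from Kafka, apply a light transformation, and batch-load the results toward a warehouse. The topic name, consumer group, event fields, and the loader stub are illustrative assumptions, not details taken from the role.

```python
# Minimal sketch of a streaming ETL step: consume events from Kafka,
# apply a light transformation, and batch-load rows toward a warehouse.
# Topic, group, and field names below are hypothetical.
import json
from kafka import KafkaConsumer  # pip install kafka-python

BATCH_SIZE = 500

def transform(event: dict) -> dict:
    """Flatten a raw event into a warehouse-friendly row."""
    return {
        "event_id": event.get("id"),
        "store_id": event.get("store", {}).get("id"),
        "event_type": event.get("type", "unknown"),
        "occurred_at": event.get("ts"),
    }

def load_to_warehouse(rows: list[dict]) -> None:
    """Stub loader -- in practice this would stream rows into BigQuery/Redshift/etc."""
    print(f"loading {len(rows)} rows")

def run() -> None:
    consumer = KafkaConsumer(
        "clickstream-events",                 # hypothetical topic
        bootstrap_servers="localhost:9092",
        group_id="warehouse-loader",          # hypothetical consumer group
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        enable_auto_commit=False,
    )
    batch: list[dict] = []
    for message in consumer:
        batch.append(transform(message.value))
        if len(batch) >= BATCH_SIZE:
            load_to_warehouse(batch)
            consumer.commit()                 # commit offsets only after a successful load
            batch.clear()

if __name__ == "__main__":
    run()
```

Committing offsets only after the batch is loaded is one simple way to avoid silently dropping events if the loader fails mid-stream.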

Qualifications

  • Experience with object-oriented or functional scripting languages: Python, Java, or Scala
  • Advanced knowledge of SQL and data warehousing solutions such as BigQuery, Athena, or Redshift (see the query sketch after this list)
  • Experience building data pipeline architectures
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Experience with Apache Kafka and related technologies such as Kafka Connect, Spark, or NiFi is a plus
  • Experience with container technologies such as Docker and Kubernetes
  • Experience with at least one cloud platform: AWS, Google Cloud, or Azure
  • Experience working with cross-functional teams to gather requirements
  • Ability to create and maintain optimal data pipeline architectures with multiple sources and destinations
  • Exposure to message queues such as Kafka or Kinesis
  • Exposure to cloud data warehousing solutions such as BigQuery, Redshift, or Snowflake
  • Proficiency with database technologies such as MongoDB, MySQL, or PostgreSQL
  • Strong analytical skills for working with unstructured datasets
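
As a small illustration of the SQL and warehousing side of the role, here is a sketch that runs a simple aggregation in BigQuery from Python. The project, dataset, table, and column names are hypothetical placeholders, not part of the job description.

```python
# Minimal sketch: run a weekly event-count aggregation in BigQuery.
# Project, dataset, table, and column names are illustrative only.
from google.cloud import bigquery  # pip install google-cloud-bigquery

def daily_event_counts(project: str = "my-project") -> None:
    client = bigquery.Client(project=project)
    sql = """
        SELECT DATE(occurred_at) AS event_date,
               event_type,
               COUNT(*) AS events
        FROM `my-project.analytics.clickstream_events`   -- hypothetical table
        WHERE occurred_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
        GROUP BY event_date, event_type
        ORDER BY event_date, events DESC
    """
    for row in client.query(sql).result():
        print(row.event_date, row.event_type, row.events)

if __name__ == "__main__":
    daily_event_counts()
```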


Tags: Athena AWS Azure Big Data BigQuery Data analysis Data pipelines Data Warehousing Docker Engineering GCP Google Cloud Kafka Kinesis Kubernetes MongoDB MySQL Open Source Pipelines PostgreSQL Python Redshift Scala Snowflake Spark SQL Testing

Perks/benefits: Flex hours Team events

Region: Asia/Pacific
Category: Engineering Jobs
