Data Engineer II

Bengaluru, India

Redica Systems

Redica Systems provides Quality and Regulatory Intelligence to life sciences professionals.

Company Description

Redica Systems is a SaaS start-up serving more than 200 customers within the life sciences sector, with a specific focus on Pharmaceuticals and MedTech. We embrace a hybrid model: our workforce is distributed globally, with headquarters in Pleasanton, CA.

Redica's data analytics platform empowers companies to improve product quality and navigate evolving regulations. Using proprietary processes, we harness one of the industry's most comprehensive datasets, sourced from hundreds of health agencies and through Freedom of Information Act requests.

Our customers use Redica Systems to more effectively and efficiently manage their inspection preparation, monitor their supplier quality, and perform regulatory surveillance.

More information is available at redica.com.

Job Description

We’re looking for an experienced Data Engineer to join our team as we continue to develop the first-of-its-kind quality and regulatory intelligence (QRI) platform for the life science industry.

The ideal candidate will have experience designing, building, and maintaining data pipelines and infrastructure while remaining hands-on in the code.

Core Responsibilities 

  • Develop infrastructure optimized for efficient extraction, transformation, and loading of data from diverse sources (see the sketch after this list)
  • Maintain a full understanding of our technical architecture along with its distinct subsystems 
  • Proactively guide technical choices within your domain of expertise
  • Recommend and validate different ways to improve data reliability, efficiency, and quality 
  • Apply Agile Scrum methodology, with a primary focus on delivering sustainable, high-performance, scalable, and easily maintainable enterprise solutions
  • Prioritize and address technical challenges, working closely with engineering managers
  • Identify optimal approaches for resolving data quality or consistency issues
  • Ensure successful system delivery to the production environment and assist the operations and support team in resolving production issues, as necessary
  • Design, test, and maintain data stores, databases, processing systems, and microservices
  • Collaborate with NLP/ML teams to integrate data pipeline with NLP/ML services
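
By way of illustration, here is a minimal sketch of the extract-transform-load pattern the first bullet above describes. The endpoint, field names, and SQLite target are hypothetical stand-ins for this posting, not a description of Redica's actual stack:

```python
# Illustrative only: a minimal extract-transform-load pass over a
# hypothetical JSON endpoint, landing in a local SQLite table.
import json
import sqlite3
from urllib.request import urlopen

SOURCE_URL = "https://example.com/api/inspections"  # hypothetical source


def extract(url: str) -> list[dict]:
    """Pull raw records from the source endpoint."""
    with urlopen(url) as resp:
        return json.load(resp)


def transform(records: list[dict]) -> list[tuple]:
    """Normalize fields and drop records missing a primary key."""
    return [
        (r["id"], r.get("agency", "").strip().upper(), r.get("date"))
        for r in records
        if "id" in r
    ]


def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Idempotently upsert rows into the target table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS inspections "
            "(id TEXT PRIMARY KEY, agency TEXT, date TEXT)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO inspections VALUES (?, ?, ?)", rows
        )


if __name__ == "__main__":
    load(transform(extract(SOURCE_URL)))
```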

About you

  • Tech Savvy: Effectively anticipates and adopts innovations in business-building technology solutions, staying up-to-date with data advancements and incorporating them into work processes
  • Manages Complexity: Actively synthesizes solutions from complex information, identifying patterns and developing effective strategies for data-related problems
  • Decision Quality: Consistently makes good and timely decisions that propel organizational progress and maintain data integrity
  • Collaborates: Actively engages in collaborative problem-solving by leveraging diverse perspectives and finding innovative solutions to achieve shared goals and data engineering initiatives
  • Optimizes Work Processes: Actively seeks opportunities to enhance and streamline current work processes for managing data pipelines, ETL (Extract, Transform, Load) processes, and data warehousing
  • Drives Results: Strives to continuously improve performance and exceed expectations to contribute to overall success and meet data-related deliverables
  • Strategic Mindset: Consistently demonstrates a strategic mindset by envisioning future possibilities and successfully translating them into breakthrough data strategies, contributing to the organization's long-term success
  • Engaged: Not only shares our values but also possesses the essential competencies needed to thrive at Redica

Qualifications

  • 3+ years of data engineering experience, with an emphasis on code/system architecture and quality output
  • Experience designing and building data pipelines, data APIs, and ETL/ELT processes
  • Exposure to data modeling and data warehouse concepts
  • Hands-on experience in Python
  • Hands-on experience with AWS SageMaker, supporting the building of batch and real-time ML pipelines (SageMaker/MLflow); see the sketch after this list
  • Hands-on experience setting up, configuring, and maintaining SQL and NoSQL databases (MySQL/MariaDB, PostgreSQL, MongoDB) and the Snowflake data warehouse
  • Computer Science, Computer Engineering, or similar technical degree
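
As a hedged illustration of the SageMaker bullet above: launching a batch transform job against an already-deployed model is one common batch-inference pipeline step. The model name, bucket paths, and job name below are hypothetical; the boto3 call itself is the standard create_transform_job API:

```python
# Illustrative only: kicking off a SageMaker batch transform job from a
# pipeline step. Model name and S3 locations are hypothetical.
import boto3

sagemaker = boto3.client("sagemaker")


def run_batch_inference(job_name: str) -> None:
    """Score a batch of records against an already-deployed model."""
    sagemaker.create_transform_job(
        TransformJobName=job_name,
        ModelName="qri-document-classifier",  # hypothetical model name
        TransformInput={
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://example-bucket/batches/today/",  # hypothetical
                }
            },
            "ContentType": "application/jsonlines",
        },
        TransformOutput={"S3OutputPath": "s3://example-bucket/scored/"},
        TransformResources={"InstanceType": "ml.m5.large", "InstanceCount": 1},
    )


if __name__ == "__main__":
    run_batch_inference("qri-batch-2024-01-01")
```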

Bonus Points

  • Experience with the data engineering stack within AWS is a major plus (S3, Lake Formation, Lambda, Fargate, Kinesis Data Streams/Firehose, DynamoDB, Neptune)
  • Experience with event-driven data architectures (see the sketch after this list)
  • Experience with the ELK stack is a major plus (Elasticsearch, Logstash, Kibana)
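
As a sketch of the event-driven bullet above: a Lambda function consuming a Kinesis stream and persisting each event to DynamoDB is a basic building block of this kind of architecture. The stream wiring, table name, and payload shape here are hypothetical:

```python
# Illustrative only: a Lambda consumer for a Kinesis-triggered event,
# writing each decoded record to a hypothetical DynamoDB table.
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("agency-events")  # hypothetical table name


def handler(event, context):
    """Decode Kinesis records and persist each event to DynamoDB."""
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded inside the event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item=payload)  # assumes payload carries the table key
    # Empty failure list signals full-batch success when partial-batch
    # responses (ReportBatchItemFailures) are enabled on the trigger.
    return {"batchItemFailures": []}
```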

Additional Information

All your information will be kept confidential according to EEO guidelines. 

Tags: Agile APIs Architecture AWS Computer Science Data Analytics Data pipelines Data quality Data warehouse Data Warehousing DynamoDB Elasticsearch ELK ELT Engineering ETL Firehose Kibana Kinesis Lake Formation Lambda Logstash Machine Learning MariaDB Microservices MLFlow MongoDB MySQL NLP Pharma Pipelines PostgreSQL Python SageMaker Scrum Snowflake SQL

Perks/benefits: Startup environment

Region: Asia/Pacific
Country: India
Category: Engineering Jobs
