Senior Data Engineer

Remote

Applications have closed

Uptake Technologies Inc.

Uptake is the industrial analytics platform that delivers products to major industries to increase productivity, security, safety and reliability.

What you'll do: 

As a Senior Data Engineer on the Uptake Hosted Delivery team, you'll work with Uptake's cross-functional Delivery practice and Product teams, which include data scientists, product owners, engineers, and customer success managers, and you'll interface directly with a team of offshore engineers. 

You will design, build, and test data ingestion workflows using ETL tools and Python and/or Java. The ideal candidate will have a strong analytical and technical background, as well as the flexibility to adapt to the rapidly evolving needs of the team. 

You will, over time, assume ownership of all ingestion activities for customer deliveries across Uptake’s product suite. 

Key Responsibilities: 

  • Design and implement data ingestion workflows, real-time ETL and batch processing of data to support data science models 
  • Work with customer success/support teams and with data science teams to develop data expertise and resolve issues relating to data quality 
  • Define best practices and design for the management of data 
  • Partner with Data Scientists to build and maintain internal data modeling, processing and visualization tools 
  • Translate requests from customers to technical solutions for data 
  • Work with product to write stories, and guide and direct offshore team members 
  • Own data ingestion for Uptake 

Qualifications: 

  • Minimum of 5 years' experience working as a Data Engineer 
  • Ability to write efficient SQL queries 
  • Ability to architect data solutions 
  • Experience managing data ETL processes and making data available through service applications and databases 
  • 1+ years experience with NoSQL databases 
  • 3+ years' experience with programming languages (Python, Java, and/or Scala preferred) 
  • Familiarity with a variety of data processing technologies 
  • Ability to debug data-related issues quickly and fix them efficiently 
  • Excellent communication skills, including documentation 
  • Experience with REST APIs and making data available through microservices 
  • Experience using version control (Git, Mercurial, SVN, etc.) for collaborative code development 
  • Familiarity with microservice architecture and Docker 

Preferred: 

  • MS in Computer Science or other similar technical field 
  • Experience working in a cloud-native AWS environment using managed services 
  • Some knowledge of machine learning and data science processes

Tags: APIs AWS Computer Science Docker Engineering ETL Git Machine Learning Microservices NoSQL Python Scala SQL

Perks/benefits: Flex hours

Region: Remote/Anywhere
Category: Engineering Jobs
