Machine Learning / Data Engineer

Vietnam - Remote

Applications have closed


Support your customers on WhatsApp with a multi-agent customer support tool and WhatsApp Support CRM, powered by the WhatsApp Business API.


WATI is an early-stage, venture-backed SaaS platform that is defining how companies communicate with their customers. Through our customer engagement software, built on top of WhatsApp's Business API, businesses can easily engage with their customers in real time, at scale.

We are growing fast, and we are now looking for a Machine Learning Data Engineer to work closely with Development, QA, DevOps, Support, and Product. You will provide the pipelines, infrastructure, and automation that let Data Scientists train and evaluate models, and that allow users to analyze and act on vast quantities of data effortlessly. The team plays one of the most critical roles in ensuring our products are best-in-class in the industry.

What you’ll do:

  • Work with a team to design and build services that power industry-leading data products and transform data science prototypes
  • Build automation tools for scientists to deploy models
  • Develop, maintain and improve production machine learning applications according to requirements
  • Run machine learning tests and experiments
  • Work with peers on code reviews / PRs
  • Work across several teams, each responsible for a different aspect of machine learning data at WATI

What will make you stand out:

  • Good knowledge of CI / CD and associated best practices
  • Familiarity with Docker- and/or Kubernetes-based development and orchestration
  • Created automated / scalable infrastructure and pipelines for teams in the past
  • Contributed to the open-source community (GitHub, Stack Overflow, blogging)
  • Prior experience with the Big Data ecosystem (e.g., Spark)
  • Prior experience in the chatbot, NLP or text mining fields
  • Good understanding of SOLID Principles


  • Substantial experience with message queues and stream-processing / data-streaming architectures such as Google Dataflow/Apache Beam, Apache Samza, etc.
  • Good knowledge of, and experience using and optimizing, these GCP services: Google BigQuery, Google Dataflow, Google Cloud Composer, Google Cloud databases, Google Cloud Datastore
  • Understanding of traditional NLP algorithms, deep learning algorithms, and state-of-the-art pre-trained NLP models
  • Significant programming experience in Python, Java, shell scripting, or similar languages
  • Strong database manipulation skills in SQL and MongoDB with an eye for cost optimization, plus experience with data tools and libraries including Pandas, Matplotlib, PyTorch, and Hugging Face
  • Have worked with large volumes of data in the past
  • Bachelor’s degree in Computer Science, Computer Engineering, or equivalent work experience
  • Excellent communication skills in English, both written and verbal



Regions: Remote/Anywhere Asia/Pacific
Country: Vietnam
