Data Engineer

Hong Kong, Singapore

Applications have closed

About the Data Engineering Team

Let’s talk about data. Do you have experience building and maintaining data warehouses with big data technologies? Can you build data-intensive applications with Python, Java, or Scala? Have you managed cloud-native big data environments? If any or all of this applies to you, you may be just the Data Engineer we’re looking for to join our fast-growing team. With opportunities in both Singapore and Hong Kong, we’re hiring Data Engineers who serve as an essential conduit to other engineering teams across our consumer-facing company, as well as to our Data Insights team.

Our team acts as a central nexus to connect various data producers with consumers across the company. Our customers are:

  1. Other engineering teams across the company that produce or consume data that needs to be combined with other data sources.
  2. Analysts on the Data Insights team.

We are accountable for delivering:

  1. A centralized data warehouse that enables engineers and analysts across the company to ingest, anonymize, enrich (with data sources from anywhere else in the company), persist, analyze, purge, and otherwise process their data.
  2. Tools, training, and coordination.
  3. Data applications that don’t fall into any one business unit, or where the business units don’t have sufficient capabilities themselves. For example, we team up with the data-insights team to build and operate churn-prediction models used by both humans and other systems at scale.
  4. A data catalog documenting data sources and what is available for use by other teams.

Our responsibilities include:

  1. Building and operating the data platform service, including defining and tracking its SLA.
  2. Guiding various engineering teams to design models and schemas of the data to be fed into the platform, making sure they can be processed in a scalable way and used by analysts efficiently.
  3. Guiding data analysts on the use of the data platform.
  4. Building libraries/modules and reference implementations of data ingesters on several common tech stacks.
  5. Guarding user privacy. While all teams are responsible for ensuring compliance of their work with our privacy policy, our team also has a veto right against processing any data that might not be compliant.
  6. Partnering with other teams on projects to build data engineering solutions, such as churn prediction, payment fraud management, and other company-wide challenges.

Other notes about our team:

  • Our tech stack currently centers on AWS Redshift, Google BigQuery, Apache Airflow, and Tableau, but we expect it to evolve significantly over time.
  • We have an ever-expanding range of engineering roles on the team, covering people with backgrounds in software development, infrastructure operations, and data science.

Job Responsibilities

Your responsibilities will include:

  1. Understand the needs of your internal customers and translate them into optimized, maintainable technical designs.
  2. Use your data engineering skills to design and build ingestion, processing, storage, and consumption systems that enable other business units to make data-driven business and operational decisions.
  3. Maintain and operate the data platform, which many business units rely on to fulfill their service level targets.

Role Requirements

  • At least 2 years of experience designing and operating data pipelines and databases
  • Proficiency in Python, Java, or Scala with a good understanding of runtime complexities
  • Proficiency in database operation and optimization, including SQL optimization
  • Strong understanding of, and experience with, big data tools such as Hadoop, Spark, Flink, or Storm
  • Experience in testing ETL pipelines
  • Experience in building and operating data applications in cloud environments (AWS, Azure or GCP)
  • Experience with automation tools such as Ansible and Terraform is a big plus
  • Strong written and verbal English communication skills

What we can offer you

  • Full-time employment with flexible working hours
  • Challenging work in a fun and collaborative environment
  • Attractive compensation and time-off benefits
  • Spacious open-concept and centrally located offices
  • Financially successful and profitable company
  • Fully stocked pantry with healthy foods and fresh fruit
  • Team lunches and company events every quarter
  • Multicultural teams represented by 30+ nationalities

Note: please do not include any salary information, and submit your resume in PDF format.
