Jakarta Selatan, Jakarta, Indonesia
Xendit provides payment infrastructure across Southeast Asia, with a focus on Indonesia and the Philippines. We process payments, power marketplaces, disburse payroll and loans, provide KYC solutions, prevent fraud, and help businesses grow exponentially. We serve our customers by providing a suite of world-class APIs, eCommerce platform integrations, and easy-to-use applications for individual entrepreneurs, SMEs, and enterprises alike.
Our main focus is building the most advanced payment rails for Southeast Asia, with a clear goal in mind — to make payments across SEA simple, secure, and easy for everyone. We serve thousands of businesses ranging from SMEs to multinational enterprises, and process millions of transactions monthly. We’ve been growing rapidly since our inception in 2015, onboarding hundreds of new customers every month, and are backed by global top-10 VCs. We’re proud to be featured among the fastest-growing companies by Y Combinator.
We are scaling a data engineering team that structures Xendit’s data in a way that enables useful products to be built on top of it.
We are looking for a Junior Data Engineer who will be a key part of our team, helping bring structure to vast amounts of data, making it digestible, and building scalable data platforms that enable data products, business analytics, and data science. This role requires technical expertise and a willingness to learn a wide variety of technologies to develop our batch and real-time data pipelines, data product APIs, and modern, scalable data infrastructure. If you are interested in working in a fast-paced environment and like being challenged with fun data problems to solve, come join us.
- Create and manage ETL data pipelines using Python, Spark, and Airflow.
- Create and manage real-time pipelines using Kafka.
- Improve and maintain the data lake setup (S3, EMR, Presto).
- Integrate data from third-party APIs (e.g. HubSpot, Facebook).
- Ensure data quality through automated testing.
- Develop and maintain company metrics and dashboards.
- Collaborate with analysts, engineers, and business users to design solutions.
- Research innovative technologies and make continuous improvements.
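To give a flavor of the first responsibility, here is a purely illustrative sketch of a batch extract-transform-load step in plain Python. In practice this kind of logic would run as an Airflow task over Spark against the S3 data lake; the sample data, table name, and functions below are hypothetical, not part of Xendit's actual stack.

```python
import csv
import io
import sqlite3

# Hypothetical sample input: raw payment events as CSV
# (a stand-in for an object pulled from S3).
RAW_CSV = """\
transaction_id,amount_idr,status
tx-001,150000,SETTLED
tx-002,99000,FAILED
tx-003,250000,SETTLED
"""

def extract(raw: str) -> list[dict]:
    """Read raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Keep only settled transactions and cast amounts to integers."""
    return [
        (r["transaction_id"], int(r["amount_idr"]))
        for r in rows
        if r["status"] == "SETTLED"
    ]

def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    """Write cleaned records into a warehouse-style table
    (SQLite here purely for illustration)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS settled_payments "
        "(transaction_id TEXT, amount_idr INTEGER)"
    )
    conn.executemany("INSERT INTO settled_payments VALUES (?, ?)", records)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount_idr) FROM settled_payments").fetchone()[0]
print(total)  # 400000
```

An orchestrator such as Airflow would schedule each of these steps as a separate task, with retries and alerting around exactly this kind of pipeline.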
You may be a good fit if:
- You have 1+ years of experience as a data engineer developing and maintaining ETL pipelines.
- You have experience building data lake/warehouse solutions spanning structured and unstructured data.
- You have hands-on experience with big data technologies (e.g. Spark, Hive).
- You have experience writing and optimizing SQL queries.
- You have a good knowledge of Python.
- You have hands-on experience with BI tools (e.g. Looker, Redash).
- You hold a Bachelor's degree in a technical field or have equivalent work experience.
- You have experience managing and designing data pipelines and debugging data issues.
- You are familiar with real-time and/or large-scale data.
- You have built data products that have scaled on AWS or another cloud.
- You thrive in nimble, lean, fast-paced startups, like autonomy, and have proven you can push towards a goal by yourself.
- You are coachable: you own mistakes, reflect, and take feedback with maturity and a willingness to improve.
- You communicate with clarity and precision and can effectively present results.
- Solve for the customer first: You build what customers want. You think about what is right for customers, not what is easiest for you.
- Demonstrate mastery of honey badgery: You set ambitious goals, then execute no matter what stands in the way. When knocked down, you get up.
- Take on challenges willingly and can be trusted to execute: You can be trusted to get things done right the first time, quickly. You hit your deadlines.
- You’re like us: You smile a lot, think work is fun and don’t take yourself too seriously. You measure yourself against the best and believe feedback is the breakfast of champions. You follow the golden rule.
- You’re remarkable: People naturally talk about how awesome you are. If we can’t find someone who raves about you, then it’s unlikely we will either.