Software Engineer, Data Backend

Taipei, Taiwan

Applications have closed

Appier

Comprehensive AI-Powered Solutions: Smoother Operations. Elevated Customer Experience. Better Performance.


About Appier 

Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier is on a mission to turn AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe, and the U.S., and is listed on the Tokyo Stock Exchange (Ticker: 4180). Visit www.appier.com for more information.

 

About the role

Appier’s solutions are powered by proprietary deep learning and machine learning technologies that enable every business to use AI to turn data into business insights and decisions. As a Software Engineer, Data Backend, you will help build critical components of this platform.

 

Responsibilities

 

  • Develop and operate the data warehouse system and ETL pipelines for data access, collection, processing, and storage, and support data analysis tasks.
  • Manage deployment of the platform on public clouds with hundreds of instances across the globe.
  • Build and maintain the Big Data and Machine Learning Platform using Apache Spark, Hudi, Trino, and related technologies (see the sketch after this list).
  • Lay the foundation for the platform and propose solutions that ease software development, monitoring, and operations.
  • Handle hundreds of terabytes of data.
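
To give a concrete flavor of this kind of work, below is a minimal sketch of a Spark batch ETL job in Scala that reads raw events and upserts them into a Hudi table. It assumes the Hudi Spark bundle is on the classpath; all paths, the table name, and the field names (event_id, event_ts, event_date) are illustrative assumptions, not details of Appier's actual platform.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

// Minimal Spark-on-Hudi ETL sketch. Bucket paths, table name, and field
// names are hypothetical; a production pipeline would add schema checks,
// monitoring, and retry logic.
object EventIngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-ingest")
      // Hudi requires Kryo serialization for its internal structures
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    // Read one day of raw JSON events from object storage
    val events = spark.read.json("s3a://example-bucket/raw/events/dt=2024-01-01/")

    // Upsert into a Hudi table partitioned by event date, deduplicating on
    // event_id and keeping the latest record by event_ts
    events.write
      .format("hudi")
      .option("hoodie.table.name", "events")
      .option("hoodie.datasource.write.recordkey.field", "event_id")
      .option("hoodie.datasource.write.precombine.field", "event_ts")
      .option("hoodie.datasource.write.partitionpath.field", "event_date")
      .option("hoodie.datasource.write.operation", "upsert")
      .mode(SaveMode.Append)
      .save("s3a://example-bucket/lake/events/")

    spark.stop()
  }
}
```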

 

About you

[Minimum qualifications]

  • BS/BA degree in Computer Science
  • 2+ years of experience in building and operating large-scale distributed systems or applications
  • Experience building ETL pipelines using Apache Spark
  • Experience managing a data lake or data warehouse
  • Expertise in developing data structures and algorithms on top of Big Data platforms
  • Ability to operate effectively and independently in a dynamic, fluid environment
  • Ability to work in a fast-moving team environment and juggle many tasks and projects
  • Eagerness to change the world in a huge way by being a self-motivated learner and builder

[Preferred qualifications]

  • 5+ years of experience in the Internet industry
  • Contributions to open source projects are a huge plus (please include your GitHub link)
  • Experience working with Scala/Java is a plus
  • Experience with Hadoop, Hive, Flink, Presto/Trino, and related big data systems is a plus (a query sketch follows this list)
  • Experience with public clouds such as AWS or GCP is a plus
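
For the Presto/Trino part of the stack, the snippet below is a minimal sketch of querying the same hypothetical events table through Trino's JDBC driver, assuming the trino-jdbc artifact is on the classpath; the coordinator host, catalog, schema, user, and table are placeholder assumptions.

```scala
import java.sql.DriverManager

// Minimal Trino query sketch over the hypothetical "events" table.
// Host, catalog ("hive"), schema ("analytics"), and user are placeholders.
object DailyEventCounts {
  def main(args: Array[String]): Unit = {
    val url = "jdbc:trino://trino-coordinator.example.com:8080/hive/analytics"
    val conn = DriverManager.getConnection(url, "analyst", null)
    try {
      val rs = conn.createStatement().executeQuery(
        """SELECT event_date, count(*) AS events
          |FROM events
          |GROUP BY event_date
          |ORDER BY event_date""".stripMargin)
      while (rs.next()) {
        println(s"${rs.getString("event_date")}\t${rs.getLong("events")}")
      }
    } finally {
      conn.close() // also closes the statement and result set
    }
  }
}
```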

 


Tags: AWS Big Data Computer Science Data analysis Data warehouse Deep Learning Distributed Systems ETL Flink GCP GitHub Hadoop Java Machine Learning Open Source Scala Spark

Region: Asia/Pacific
Country: Taiwan
Category: Engineering Jobs
