Senior Data Engineer

Sydney

SiteMinder

Grow hotel revenue with SiteMinder software for independents and multi-property groups: channel manager, booking engine, PMS integrations, demand plus and more.

At SiteMinder we believe the individual contributions of our employees are what drive our success. That’s why we hire and encourage diverse teams that include and respect a variety of voices, identities, backgrounds, experiences and perspectives. Our diverse and inclusive culture enables our employees to bring their unique selves to work and be proud of doing so. It’s in our differences that we will keep revolutionising the way for our customers. We are better together!
What We Do…
We’re people who love technology but know that hoteliers just want things to be simple. So since 2006 we’ve been constantly innovating our world-leading hotel commerce platform to help accommodation owners find and book more guests online - quickly and simply. We’ve helped everyone from boutique hotels to big chains, enabling travellers to book igloos, cabins, castles, holiday parks, campsites, pubs, resorts, Airbnbs, and everything in between. And today, we’re the world’s leading open hotel commerce platform, supporting 40,000 hotels in 150 countries - with over 100 million reservations processed by SiteMinder’s technology every year.
About the Senior Data Engineer role...
You will be a key addition to our engineering team, assisting with our transformation program. We're looking for an all-rounder Data Engineer who can work dynamically across the data lifecycle: from ingestion and setting up pipelines through to ML engineering tasks such as selecting the right model, picking features and tuning parameters.
You will be someone who knows what good looks like and has set up robust production data pipelines using Databricks and associated products.

What you have...

  • 5+ years of solid experience as a Data Engineer with Machine Learning exposure.
  • Breadth of experience using different tools and techniques.
  • Extensive experience in a data-related role, e.g. working with Databricks in data warehousing and analytics contexts and creating automated ETL processes.
  • Streaming (Spark, Kafka or similar)
  • Python and/or Scala, relevant Python ML libraries, and SQL proficiency
  • Experience building robust production data pipelines using industry best practices, including data preparation and resolution of data quality challenges.
  • Testing and validation of Data Models and Data Pipelines.
  • AWS experience across one or more of the following: EC2, Kinesis, SQS, ElastiCache, Lambda and S3
  • Experience with infrastructure as code including Terraform or CloudFormation
  • Deep knowledge of how to structure, develop and maintain relational databases such as MySQL, with demonstrated experience using data modeling techniques.
  • Ability to work independently and also contribute to overall architecture and design.
  • "Data wrangler" mindset: embraces complexity and ambiguity, overcomes challenges and obstacles in order to deliver a working solution.
  • Decision-making: able to view a problem from multiple perspectives, select the best approach, and provide a concise rationale for the choice.
  • Outcome-focused: able to balance the need for a robust, elegant technical solution with the need for a market-ready solution.
  • Desirable: Experience building, administering and scaling ML processing pipelines, preferably in Databricks.
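As a flavour of the data-preparation and data-quality work described above, here is a minimal, hypothetical sketch in plain Python (field names and rules are illustrative assumptions, not SiteMinder's actual schema) of validating raw reservation records before they enter a pipeline:

```python
from datetime import date

# Hypothetical required fields for an incoming reservation record.
REQUIRED_FIELDS = {"hotel_id", "check_in", "check_out", "guests"}

def clean_reservations(records):
    """Split raw records into (valid, rejected) lists.

    A record is valid when all required fields are present, the guest
    count is positive, and check_out is strictly after check_in.
    Rejected records carry a reason so quality issues can be triaged.
    """
    valid, rejected = [], []
    for rec in records:
        if not REQUIRED_FIELDS <= rec.keys():
            rejected.append((rec, "missing fields"))
            continue
        if rec["guests"] <= 0:
            rejected.append((rec, "non-positive guest count"))
            continue
        if rec["check_out"] <= rec["check_in"]:
            rejected.append((rec, "check_out not after check_in"))
            continue
        # Normalise: strip stray whitespace from string identifiers.
        valid.append(dict(rec, hotel_id=str(rec["hotel_id"]).strip()))
    return valid, rejected

raw = [
    {"hotel_id": " H1 ", "check_in": date(2024, 5, 1),
     "check_out": date(2024, 5, 3), "guests": 2},
    {"hotel_id": "H2", "check_in": date(2024, 5, 4),
     "check_out": date(2024, 5, 4), "guests": 1},  # same-day stay: rejected
]
valid, rejected = clean_reservations(raw)
print(len(valid), len(rejected))  # 1 1
```

In a production setting the same checks would typically run inside a Spark or Databricks job, with rejected rows routed to a quarantine table for review rather than dropped.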

What you'll do...

  • Refine, design, estimate and deliver product requirements, particularly data-based features.
  • Work with a cross-functional team in an agile methodology
  • Collaborate with others to design, develop and maintain highly scalable data pipelines that power data-driven products and customer-facing insights.
  • Work with cloud-based data storage and processing services to build scalable and cost-effective data solutions.
  • Create and optimize ETL processes to move and transform data from various sources into our data systems, ensuring data consistency and integrity.
  • Develop, construct, install, test and maintain data architectures (databases, large-scale processing systems, and data pipelines) that support data extraction, transformation, and loading (ETL) processes.
  • Adhere to the organisation's standards and data governance practices, e.g. ensuring data quality, privacy, security, auditability and control throughout the data lifecycle.
  • Develop and maintain data models, schemas, and database designs to support business needs and reporting requirements.
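To illustrate the consistency-and-integrity point in the duties above, here is a small, hypothetical sketch of an idempotent merge (upsert) step, the pattern behind keeping a warehouse table correct when the same source rows may arrive more than once. Keys and field names are assumptions for illustration only:

```python
def merge_into(target, source_rows, key="reservation_id"):
    """Upsert source_rows into target (a dict keyed by `key`).

    Re-running with the same source_rows leaves target unchanged,
    which is what makes this pipeline step safe to retry.
    """
    for row in source_rows:
        target[row[key]] = row
    return target

warehouse = {}
batch = [
    {"reservation_id": 1, "status": "confirmed"},
    {"reservation_id": 2, "status": "pending"},
]
merge_into(warehouse, batch)
# A later (or retried) batch updates in place rather than duplicating.
merge_into(warehouse, [{"reservation_id": 2, "status": "confirmed"}])
print(len(warehouse), warehouse[2]["status"])  # 2 confirmed
```

In Databricks the equivalent operation is typically expressed as a Delta Lake `MERGE INTO` statement keyed on the same business identifier.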
Our Perks & Benefits…
  • Equity packages for you to be a part of the SiteMinder journey
  • Hybrid working model (in-office & from home)
  • Mental health and well-being initiatives
  • Generous parental (including secondary) leave policy
  • Paid birthday, study and volunteering leave every year
  • Sponsored social clubs, team events, and celebrations
  • Employee Resource Groups (ERG) to help you connect and get involved
  • Investment in your personal growth offering training for your advancement
Does this job sound like you? If yes, we'd love for you to be part of our team! Please send a copy of your resume and our Talent Acquisition team will be in touch. When you apply, please tell us the pronouns you use and any adjustments you may need during the interview process. We encourage people from underrepresented groups to apply.
#LI-Hybrid

Tags: Agile Architecture AWS CloudFormation Databricks Data governance Data pipelines Data quality Data Warehousing EC2 Engineering ETL Kafka Kinesis Lambda Machine Learning MySQL Pipelines Privacy Python RDBMS Scala Security Spark SQL Streaming Terraform Testing

Perks/benefits: Career development Health care Home office stipend Parental leave Team events

Region: Asia/Pacific
Country: Australia
Category: Engineering Jobs
