Data Engineer - APAC
Melbourne, Victoria, Australia - Remote
ROLLER Software
ROLLER offers an all-in-one, cloud-based venue management software solution to help attraction businesses grow and deliver great guest experiences.

About ROLLER
ROLLER is a global software-as-a-service company designed to help businesses in the leisure and attractions industry operate more efficiently and deliver great guest experiences. ROLLER helps its customers through a full suite of venue management features, including ticketing, point-of-sale, CRM, self-serve kiosks, memberships, digital waivers, and more. We are a fast-growing global company with customers in over 30 countries and a wide array of industries like theme parks, museums, zoos, trampoline parks, water parks, aquariums, and wake parks - just to name a few!
At the heart of ROLLER is our team - which consists of 100+ highly energetic, driven, intelligent, and humble professionals, all contributing to help build a great and enduring business. We truly believe that the sky's the limit for us, and we are well on our way toward becoming a global success story. But most of all, we love what we do, and we are looking for like-minded people to join us on this amazing journey!
About the Role
We are on the hunt for a Data Engineer to join our high-performing team. You will work on our existing data platforms and deliver reporting capabilities to various parts of the business, working alongside key stakeholders.
You are someone with a passion for solving complex problems using data. You can interact with business stakeholders, narrow down problem statements, solve problems for customers, and coordinate the development and release of features and fixes. Your strong technical skills, combined with your communication skills, allow you to be part of something greater by creating and working on best-in-class data solutions.
What You’ll Do
- Design data pipelines: understand the organization's data requirements and map the flow of data from its sources to its destinations.
- Develop data pipelines: write Python code to extract, transform, and load data into a data warehouse, create data processing jobs, and verify that the data is handled correctly.
- Maintain data pipelines: monitor pipelines to verify that they are running smoothly and efficiently, resolve any issues that emerge, and improve performance as needed.
- Build visual dashboards using a business intelligence tool.
- Work with cloud technology, ensuring that data pipelines built on AWS DMS and S3 buckets are operating correctly.
- Ensure data accuracy: verify and validate data to guarantee that it is clean, accurate, and complete.
- Collaborate with stakeholders: work closely with data scientists, business analysts, and other stakeholders to understand their data requirements and ensure the data is used effectively.
- Write clean, maintainable code that is simple to read and comprehend and can be readily modified over time.
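The extract-transform-load and data-validation responsibilities above can be sketched in miniature. This is an illustrative example only: the sample data, table name, and in-memory SQLite "warehouse" are hypothetical stand-ins (in practice the source might be an S3 object replicated via AWS DMS, and the target a platform like Databricks or Redshift):

```python
import csv
import io
import sqlite3

# Hypothetical raw export from a source system (stand-in for an
# S3 object landed by AWS DMS). Note the incomplete second row.
RAW_CSV = """venue_id,ticket_type,amount
1,day-pass,49.90
2,membership,
1,day-pass,49.90
3,kiosk,12.50
"""

def extract(raw):
    """Extract: read raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: drop rows failing the completeness check and
    cast fields to their proper types."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # data-accuracy check: skip incomplete rows
        clean.append({
            "venue_id": int(row["venue_id"]),
            "ticket_type": row["ticket_type"],
            "amount": float(row["amount"]),
        })
    return clean

def load(rows, conn):
    """Load: write validated rows into a warehouse table
    (SQLite stands in for a real data warehouse here)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales "
        "(venue_id INTEGER, ticket_type TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO sales VALUES (:venue_id, :ticket_type, :amount)",
        rows,
    )
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # rows that survived validation
```

A production pipeline would add monitoring and alerting around each stage (the "maintain" responsibility), but the extract/transform/load separation shown here is the same shape.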
About You
Must Haves
- Bachelor’s or Master's degree in Computer Science, Information Technology or a related field
- At least 1-2 years of experience with Databricks or other data warehousing technologies such as Snowflake, Redshift, BigQuery, or Athena
- Strong programming skills in Python and/or Scala, including familiarity with libraries such as Pandas, NumPy, and PySpark
- Experience working with BI tools such as Power BI, Tableau, or Looker
- Excellent communication skills, the ability to work well in a team environment, and the ability to articulate data to key stakeholders
Nice to Haves
- Proficiency in writing SQL queries; experience with .NET and SQL Server/SSMS/SSIS is preferred
- Familiarity with cloud computing platforms such as AWS, Azure, or GCP
- AWS certification as solution architect associate or cloud practitioner
- Accounting experience and/or an accounting degree will be highly regarded
- Experience with Artificial Intelligence (AI) or Machine Learning (ML) is preferred but not essential. The role involves working with ML algorithms, so it suits someone with an interest in AI/ML
Perks!
- You get to work on a category-leading product that customers love, in a fun, high-growth industry, and in a driven, people-focused environment with leaders who look after their employees - check our Capterra and G2 reviews!
- We offer a work from home allowance to set your new workspace up!
- 4 ROLLER Recharge days per year (that is 4 additional days of leave that we all take off together as a team to rest and recuperate)
- Engage in our ‘Vibe Tribe’ - led by our team members; you can contribute to company-wide initiatives directly. Regular events and social activities, fundraising & cause-related campaigns... you name it. We're willing to make it happen!
- Team member Assistance Program to proactively support our team's health and wellbeing - access to coaching, education modules, weekly webinars, and more.
- 16 weeks paid Parental Leave for primary carers and 4 weeks paid Parental Leave for secondary carers
- Highly flexible work environment with an All Access pass to WeWork depending on your location
- Work with a driven, fun, and switched-on team that likes to raise the bar in all we do.
- Individual learning & development budget plus genuine career growth opportunities as we continue to expand!