Technical Lead – Analytics Engineering and Data Quality
Sydney, Australia
Lendi Group
We make home loans simple by combining smart technology and expertise to find you a better solution.
Company Description
Lendi Group is the new group born out of the merger of Aussie Home Loans and Lendi. As a Group we operate a multi-brand strategy across Aussie Home Loans and Lendi, pulling together a truly personalised home loans experience for the Australian audience that balances both human support and smart technology.
We’re revolutionising the industry with best-in-class technology and a suite of tools that takes the hard work out of home loans – for our Customers and Brokers alike.
Diversity, Equality, and Inclusion are fundamental to the success of Lendi Group. We’re passionate about fostering an environment where our team members feel valued and supported and we celebrate everyone’s unique identity.
To learn more about life at Lendi Group check out our blog: https://www.lendi.com.au/inspire/category/life-at-lendi/
What’s the role?
As the Analytics Engineering and Data Quality Lead, you will be a key player in enhancing and uplifting our data platform and end-to-end data product delivery. Your expertise in Data Lake and/or Lakehouse environments, data migration, and data governance will be vital. You will be responsible for implementing core data governance practices, including data lineage, data observability, data quality, and anomaly detection. This role requires significant collaboration across cross-functional teams and accountability for critical data components. The successful candidate will be a proactive self-starter with a passion for data-driven transformation: a curious mind with a background in the data engineering and analytics disciplines, strong problem-solving skills, and experience in project and change management.
Job Description
Data Governance and Quality:
- Implement and oversee core data governance practices, including data lineage, data observability, data quality, and anomaly detection.
- Work with the General Manager of Data to embed appropriate privacy-preserving techniques and ensure compliance with internal policy, regulatory, and legislative obligations.
- Develop and maintain a Data Quality roadmap to improve data integrity and ensure the delivery of quality data products to customers.
Data Platform Enhancement:
- Support initiatives for data migration, platform enhancement, and uplift.
- Support the administration and maintenance of the data stack, including Snowflake, dbt, and Fivetran.
ETL Pipeline Management:
- Set up and maintain ETL pipelines using AWS tooling (S3, EC2, Parameter Store, boto3) and technologies like Snowflake, dbt-cli, Airflow, Python, and Fivetran.
- Develop automated, scalable, and reliable data solutions as a key member of the Data Pipelines team.
Data Modelling:
- Build and manage a Kimball model data warehouse.
- Develop and optimise PowerBI semantic models.
Operational Delivery:
- Support the operational delivery of data products, including PowerBI reports, ad-hoc analysis, and productionisation of AI/ML models.
- Implement best-practice data engineering and DevOps standards.
- Implement proactive data quality metrics, data anomaly and issue detection, and business support, including issue and incident resolution.
Operational Efficiency:
- Identify opportunities to enhance and optimise data engineering processes to ensure scalability and automation.
Solution Architecture:
- Support end-to-end solution design and decision-making processes through documentation and implementation of best-practice processes and methodologies.
Collaboration and Communication:
- Collaborate as part of a cross-functional agile team.
- Communicate effectively with technical and non-technical audiences, both verbally and in writing.
- Take ownership of the systems you build, including supporting, debugging, and monitoring in production.
Continuous Improvement:
- Help the team continuously improve its ways of working and processes.
- Stay up-to-date with the latest industry trends and best practices in data engineering and governance.
Qualifications and Experience:
- Bachelor’s degree in computer science (or other relevant degree)
- 10+ years across data analytics/data engineering in relevant industries, e.g. financial services.
- Proven experience in Data Lake and/or Lakehouse environments.
- Experience with integration of external, third party data assets and creation of B2C data products.
- Comfortable operating in an agile software development environment, working closely with development teams and business stakeholders.
- Strong experience in data migration and data platform enhancement.
- Experience with development and productionisation of reports in PowerBI
- Solid understanding and implementation of data governance practices, including data lineage, data observability, data quality, and anomaly detection.
- Proficient in setting up and maintaining ETL pipelines with technologies like Snowflake, dbt-cli, Airflow, Python, Fivetran, and AWS tools (S3, EC2, Parameter Store, boto3).
- Strong data modelling skills and experience with Kimball model data warehouses.
- Excellent communication skills, with the ability to convey complex technical concepts to both technical and non-technical stakeholders.
- Experience working in an agile environment and a strong team player.
- Results-driven, with a desire to search for and execute on new opportunities to drive business value.
Additional Information
What's in store for you at Lendi?
- A dynamic and innovative work environment.
- Opportunities for professional growth and development.
- The chance to make a significant impact on the future of AI at Lendi Group.
We support our people in a variety of ways, but a few of the benefits that our people rave about include:
- We're Flex First – we've embraced a hybrid, flexible working model that adapts to team members' needs and lifestyles
- A vibrant, relaxed, yet professional culture
- Wellness initiatives with a strong focus on psychological safety
- We offer generous Paid Parental Leave: we celebrate our growing Lendi Group family with 18-26 weeks leave for primary carers and up to 4 weeks for secondary carers
- An additional week’s Loyalty Leave each year after reaching 3 years’ service
- A 24/7 Employee Assistance Program service providing counselling and support
We’re committed to building a diverse and inclusive community for everyone at Lendi Group. Creating a culture that is representative of the world we live in makes us better people, and our organisation gains tangible benefits; ultimately, this creates a melting pot of creativity.