Data DevOps Engineer

Saint Julian's, Malta - Remote

Applications have closed

Gaming Innovation Group

Trust GiG's award-winning iGaming Platform and Sportsbook solution to digitally transform your business in a multitude of regulated markets.


The role

Our centralized Production Operations team is expanding, and we are keen to identify a DevOps specialist with a strong interest in Big Data.

GiG harnesses the power of Big Data across the iGaming ecosystem, and this role is key to maintaining and improving an existing solution that constantly evolves with business requirements.


You’d be really excited to:

  • Script isolated environments (QA, Staging, User Acceptance Testing, and Production) in a platform-agnostic way.
  • Maintain and improve the governance of the existing Data Stack, spanning Data Engineering tools, Data Science modelling, and Business Intelligence reporting toolsets.
  • Develop processes for the development teams so that features can be automatically tested and merged into a single code base.
  • Set up and maintain platforms such as ticketing systems to manage requests from different verticals within the company.
  • Administer permissions across the team.
  • Run proofs of concept (POCs) exploring different approaches to deployment automation with cutting-edge technologies such as ClickHouse, Apache Spark, the Confluent Platform (mainly Avro, KSQL, Kafka Connect, and Kafka), Apache Ignite, Apache NiFi, and NiFi Registry.
  • Suggest tools for increased productivity, security, reliability, and performance.
  • Maintain and develop monitoring tools and agents such as Grafana for application performance and security monitoring.
  • Help with CI/CD processes.
  • Maintain Kubernetes clusters that support our Spark and Spring containerised applications.
  • Work with third parties on guidance and systems integration, applying best practices with tools such as ClickHouse or Kafka.
  • Provide guidance and direction to the development team and follow best practices.
  • Enhance the current code stack to improve automation, versioning, tagging, and pipeline deployment.

Who you are:

  • Hold a Bachelor’s Degree in Computer Science or equivalent
  • Experience in a similar role, or with automation of Apache projects
  • Have worked with both Linux and Windows environments
  • A strong background in Big Data technologies such as Hadoop, HDFS, Apache Kafka, Apache Spark, and Apache NiFi.
  • A strong understanding of DevOps methodologies and concepts.
  • Strong infrastructure knowledge of cloud platform providers such as Azure, GCP, or AWS.
  • Fluent with UNIX.


Tags: Avro AWS Azure Big Data Business Intelligence CI/CD Computer Science DevOps Engineering GCP Grafana Hadoop HDFS Kafka Kubernetes Linux NiFi Pipelines Security Spark Testing

Regions: Remote/Anywhere Europe
Country: Malta
Category: Engineering Jobs
