Data Engineer

Remote job

BlueLabs

We are a remote startup building a next-generation sports betting platform with a focus on performance, reliability, modularity, and automation.

At BlueLabs we're combining the buzzing world of sports betting with modern tech and a great engineering culture. We own our multi-tenant sports betting platform end-to-end. It consists of tens of microservices in a handful of decoupled domains, orchestrated by a Terraform-provisioned Kubernetes cluster, and achieves high scalability thanks to an event-driven architecture based on Apache Pulsar. We follow modern CI/CD and agile methodologies to deploy to production multiple times per day, and we use Datadog to monitor our infrastructure and applications.
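
To give candidates a flavor of the event-driven side of the platform, here is a minimal sketch of a Pulsar consumer in Go using the apache/pulsar-client-go client. The broker URL, topic, and subscription name are placeholders for illustration, not our production configuration:

    package main

    import (
        "context"
        "log"

        "github.com/apache/pulsar-client-go/pulsar"
    )

    func main() {
        // Connect to the Pulsar broker (the URL is a placeholder).
        client, err := pulsar.NewClient(pulsar.ClientOptions{
            URL: "pulsar://localhost:6650",
        })
        if err != nil {
            log.Fatalf("could not create Pulsar client: %v", err)
        }
        defer client.Close()

        // A Shared subscription lets several instances of the same service
        // split the messages on one topic between them.
        consumer, err := client.Subscribe(pulsar.ConsumerOptions{
            Topic:            "persistent://public/default/bet-placed", // hypothetical topic
            SubscriptionName: "settlement-service",                     // hypothetical name
            Type:             pulsar.Shared,
        })
        if err != nil {
            log.Fatalf("could not subscribe: %v", err)
        }
        defer consumer.Close()

        for {
            // Block until the next event arrives, then acknowledge it so the
            // broker does not redeliver it to this subscription.
            msg, err := consumer.Receive(context.Background())
            if err != nil {
                log.Fatalf("receive failed: %v", err)
            }
            log.Printf("event: %s", string(msg.Payload()))
            consumer.Ack(msg)
        }
    }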

We're now looking for a Data Engineer to join our Data Team. The team covers a wide range of skills to drive data-related initiatives and impact different parts of the organization. Our aim is to provide a solid data platform, discover insights, promote data-driven decisions, and collaborate with other teams to optimize, innovate, and enhance our services. The team's mission is to provide an ecosystem where data is transmitted, stored, processed, and analyzed in a fast, stable, reliable, and secure way. By applying data science to the gathered data and building ML-powered applications, we want to give our company a competitive advantage.


About You

As a Data Engineer on our team, you'll take part in the whole development lifecycle of a product: identifying problems, designing solutions, implementing them, performing code reviews, and maintaining services in production. Careful modeling of the data storage layer, ensuring reliable and swift message transfer, building high-performance data pipelines, and supporting Analytics and Data Science flows are just some of the things you'll face in your day-to-day work. We're looking for a data generalist who is not afraid of the diverse challenges of building the platform and who truly enjoys working with data. You bring:

  • Good problem-solving skills, a tendency towards simple and effective solutions, and a "getting things done" mentality
  • Analytical thinking, troubleshooting skills, and attention to detail
  • Reliability and trustworthiness; you keep your promises
  • Interest in keeping up to date and learning new technologies
  • A product-oriented mindset and eagerness to take part in shaping the products we build
  • The ability to work autonomously in a fully distributed team
  • Good communication skills in verbal and written English


However, that's not all! At BlueLabs, we encourage you to contribute wherever your interests take you — and shape your role and our product accordingly.


Compensation

The compensation range for this role is €50,000 to €80,000 annually, depending on your skills and experience. We encourage you to read our Recruitment FAQs for further details. In addition to the monetary compensation, we provide several perks, including a shiny new 16-inch MacBook Pro (M1 Pro) or a Linux laptop.

Requirements

  • BS degree in Computer Science or similar technical field
  • 2+ years of professional software engineering experience
  • 1+ years of experience working with relational databases (Postgres, MariaDB, Oracle) and writing complex SQL queries (see the sketch after this list)
  • Deep understanding of modern back-end systems, microservices, message-driven architecture, distributed systems, and replication
  • Background in building data transformation pipelines; knowledge of dbt is highly appreciated
  • Understanding of data streaming concepts and technologies such as Kafka, Pulsar, and RabbitMQ
  • Familiarity with Agile methodology, containerization, continuous integration/deployment, cloud environments, and monitoring
  • Ability to write clean, efficient, maintainable, and well-tested code; Golang/Java/Python skills are a big plus
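
As the sketch promised above: a small Go program that runs a windowed SQL query against Postgres using database/sql and the lib/pq driver. The connection string, the bets table, and its columns are invented for the example, not part of our schema:

    package main

    import (
        "context"
        "database/sql"
        "fmt"
        "log"
        "time"

        _ "github.com/lib/pq" // Postgres driver
    )

    func main() {
        // The DSN is a placeholder; real credentials would come from config.
        db, err := sql.Open("postgres",
            "postgres://user:pass@localhost:5432/betting?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Daily turnover per tenant plus a running total via a window
        // function. The bets table and its columns are made up.
        const query = `
            SELECT tenant_id,
                   placed_at::date AS day,
                   SUM(stake) AS turnover,
                   SUM(SUM(stake)) OVER (
                       PARTITION BY tenant_id
                       ORDER BY placed_at::date
                   ) AS running_turnover
            FROM bets
            GROUP BY tenant_id, placed_at::date
            ORDER BY tenant_id, day`

        rows, err := db.QueryContext(context.Background(), query)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var tenant string
            var day time.Time
            var turnover, running float64
            if err := rows.Scan(&tenant, &day, &turnover, &running); err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s %s turnover=%.2f running=%.2f\n",
                tenant, day.Format("2006-01-02"), turnover, running)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    }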

Nice to have

  • Experience with Data Warehouses (BigQuery, Snowflake, SingleStore), Data Lakes, NoSQL, Stream Processing (Apache Beam, Flink), Workflow Management Tools (Prefect, Airflow, Argo), BI Tools (Looker), or other Big Data solutions is highly appreciated
  • Knowledge of Terraform/IaC and Kubernetes
  • Experience setting up dashboards and alerting in Grafana/Datadog

Tags: Agile Airflow Big Data BigQuery CI/CD Computer Science Data pipelines Distributed Systems Engineering Flink Golang Grafana Kafka Kubernetes Linux Looker Machine Learning MariaDB Microservices NoSQL Oracle Pipelines PostgreSQL Pulsar Python RDBMS Snowflake SQL Streaming Terraform

Perks/benefits: Career development Competitive pay Gear

Region: Remote/Anywhere
Category: Engineering Jobs
