TMP Data Engineer for a food delivery project

Remote - Poland

Applications have closed

Netguru

Europe’s finest custom software development company: more than 10 years of experience and over 630 developers and designers specializing in software development, mobile development, and product design.


Join Netguru Talent Marketplace, a proven partner for tech-minded freelancers and experts. Thanks to us, you will have access to various project-based opportunities and can collaborate with companies across different industries. As a result, you will not only gain more experience but also discover skills you didn’t even know you had. Work the way you like, on your terms, with no strings attached.

We're developing four mobile food delivery applications for a multinational, innovative enterprise. Netguru has become a strategic development and design partner for this client, taking ownership of 25+ projects so far across various products within the group. One of our teams has been supporting the company in optimizing the customer journey since the beginning of October 2020. We are currently looking for Data Engineers with excellent technical and soft skills to join ongoing, long-term projects for this client.

  • Required skills: Python, SQL, Spark/Glue, ETL (Airflow), Snowflake, English (B2+ level).
  • Nice to have: AWS Redshift/GCP BigQuery, Scala, Hadoop, terabyte-scale datasets, cloud platforms, HDFS/Parquet/Avro.
  • We offer: 100% remote work, flextime & flexplace, dev-friendly processes, long-term collaboration.


Depending on your skills, joining Netguru as a Data Engineer could mean:

Project 1

  • Working with the client’s Data Engineering and Data Science team (around 15 people) to build custom data pipelines to support 4,000+ users.
  • Building ingestion pipelines from multiple source systems.
  • Taking part in a company-wide analytical reporting redesign.
  • Working with both batch and streaming data, in an approximate 60/40 proportion*.
  • Data-driven mindset - our clients require PoCs, data exploration/normalization, and expertise.
  • Monitoring data flows and making continuous improvements to data pipelines with custom Airflow operators (a minimal operator sketch follows this list).
  • *A good understanding of streaming data processing and experience with Google BigQuery, PubSub, AWS SNS, AWS Lambda, and data sources like Salesforce and Microsoft Dynamics are required for this position.
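Several of the bullets above mention custom Airflow operators. As a rough illustration only, here is a minimal sketch of the pattern; the operator name, connection ID, and table name are hypothetical and not taken from the client's codebase:

```python
from airflow.models import BaseOperator


class SourceToWarehouseOperator(BaseOperator):
    """Hypothetical reusable operator: extract rows from a source system
    and load them into a warehouse table."""

    def __init__(self, source_conn_id: str, target_table: str, **kwargs):
        super().__init__(**kwargs)  # BaseOperator handles task_id, retries, etc.
        self.source_conn_id = source_conn_id
        self.target_table = target_table

    def execute(self, context):
        # Airflow calls execute() when the task instance runs; the concrete
        # extract-and-load logic for a given source system would live here.
        self.log.info("Loading %s into %s", self.source_conn_id, self.target_table)
```

Packaging pipeline steps as operators like this is what makes them reusable across DAGs and easy to monitor task by task.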


Project 2

  • Working with the client’s Data Engineering and Data Science team (around 15 people) to build custom data pipelines to support 4,000+ users.
  • Building the customer data model from scratch (an empty BigQuery project) and adjusting it to deliver the required analytical features* (see the BigQuery sketch after this list).
  • Taking part in a company-wide analytical reporting redesign.
  • Data-driven mindset - our clients require PoCs, data exploration/normalization, and selection of the right technological stack.
  • Monitoring data flows and making continuous improvements to data pipelines with custom Airflow operators.
  • *Experience with Google BigQuery and with data sources like Salesforce and Microsoft Dynamics is required for this position.
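Standing up a data model in an empty BigQuery project usually starts with defining datasets and table schemas in code. Below is a minimal sketch using the google-cloud-bigquery client; the project ID, dataset name, and fields are illustrative assumptions, not the client's actual model:

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project ID

# Create the dataset that will hold the customer model (no-op if it exists).
client.create_dataset("customers", exists_ok=True)

# Define a date-partitioned customers table; the fields are examples only.
schema = [
    bigquery.SchemaField("customer_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("email", "STRING"),
    bigquery.SchemaField("created_at", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table("example-project.customers.customers", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="created_at")
client.create_table(table, exists_ok=True)
```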

Project 3

  • Working with the client’s product teams to build custom data pipelines*.
  • Data-driven mindset - our clients require PoCs, data exploration/normalization, and expertise.
  • Monitoring data flows and making continuous improvements to data pipelines with custom Airflow operators.
  • *Experience with Google BigQuery and with building custom data pipelines alongside product teams is required for this position.

Requirements

You're a good fit if you:

  • Are advanced in Python (comfortable with iterators, generators, exceptions, OOP, and the popular data engineering libraries) - a short generator sketch follows this list.
  • Have advanced SQL knowledge.
  • Are passionate about data and have solid computer science fundamentals.
  • Are used to running, maintaining, and deploying your own code to production.
  • Have 3+ years of experience building data pipelines in a professional working environment.
  • Have experience processing large amounts of structured and unstructured data.
  • Have a good understanding of distributed and streaming data processing.
  • Have experience with Apache Spark or similar solutions.
  • Have experience with ETL (Airflow) or other data processing automation approaches.
  • Have a very good command of written and spoken English (at least upper-intermediate/B2+). Polish is not required.
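To give a flavor of the Python expectations above (generators and exception handling in pipeline code), here is a small hypothetical sketch; the file format and the skip-on-error policy are assumptions made for the example:

```python
import json
from typing import Iterator


def read_records(path: str) -> Iterator[dict]:
    """Lazily yield one parsed record per line, keeping memory flat
    regardless of file size."""
    with open(path, encoding="utf-8") as fh:
        for line_no, line in enumerate(fh, start=1):
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                # Skip malformed lines rather than failing the whole batch;
                # a real pipeline might route them to a dead-letter store.
                print(f"skipping malformed line {line_no} in {path}")


# Generators compose lazily: filter records without loading the whole file.
active = (r for r in read_records("events.jsonl") if r.get("status") == "active")
```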

We'll be happy to see that you have:

  • Experience with Google BigQuery, PubSub, AWS SNS, AWS Lambda.
  • Experience with data sources like Salesforce and Microsoft Dynamics.
  • Experience with Docker, Travis, Airflow, Terraform, Kubernetes.
  • Practical knowledge of DevOps practices, i.e. CI/CD, Terraform, observability.
  • The ability to debug complex data infrastructures.

Benefits

  • working with an experienced, distributed team;
  • a mentor who will assist you during your first days;
  • possibility of a long-term collaboration on other challenging products in the future;
  • continuous development of your hard and soft skills.

Looking for a full-time job? Check out our Career Page and find out more about our open recruitment processes.
