[GAC] Senior Data Engineer

Kraków, Poland

Software Mind

A software house that provides software development services to boost product engineering and digital transformation capabilities.



Company Description

Software Mind develops solutions that make an impact for companies around the globe. Operating throughout Europe, the US and LATAM, our diverse team brings together a variety of skills, experiences and perspectives. Tech giants & unicorns, transformative projects, emerging technologies and limitless opportunities – these are a few words that describe an average day for us. Building cross-functional engineering teams that take ownership and crave more means we’re always on the lookout for talented people who bring passion and creativity to every project. Our culture is driven by trust – it embraces openness, acts with respect, shows grit & guts and combines employment with enjoyment.

Job Description

Project – the aim you’ll have 

We are looking for a strong, self-driven data engineer to join our Data Intelligence engineering group, building new functionality as well as enhancements to existing data storage, pipelines and in-house data analysis tooling. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The problem areas span several scales: complex domain and data models with hundreds to hundreds of thousands of items in relational schemas, used for interactive work and background processing in SQL; ETL of thousands to millions of rows per day; searches and statistics on billion-row tables; and big-data-style stream processing and storage. We are a robust developer group with full-stack, back-end, data engineering and data architecture resources working in a domain-driven area. We collaborate closely with data analysts and domain-specialist content curators, and work cross-functionally to enable application teams with domain data, data integrations and data analysis.

Position – how you’ll contribute

  • Create and maintain data pipelines and collaborate on improving the data architecture.
  • Assemble or enhance large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud native technologies.
  • Collaborate with architecture/design on unifying existing related but separate data models, pipelines and storage to enable unified data management and tooling.
  • Build analytics tools or microservices that utilize the data pipelines to provide actionable insights, operational efficiency or data sets usable in domain content or for analysts working on content.
  • Work with stakeholders including the Engineering management, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data and tenant data in our services separated and secure across cloud regions.
  • Extend existing or create new data tools where needed for analytics and data scientist team members that assist them in building and optimizing our products and domain content production.
  • Work with data and analytics experts to strive for greater functionality in our data systems and enhance overall data quality.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.

Qualifications

Expectations - the experience you need

  • 8+ years of experience in a Data Engineer role, or 5+ years plus a strong portfolio of relevant projects and skills
  • Experience with the following software/tools: Microsoft SQL Server and Microsoft Azure data infrastructure such as ADF (Azure Data Factory), Blob Storage, Azure Functions, Key Vault, etc.
  • .NET development in C#
  • Experience with stream-processing systems, preferably Databricks, Spark or NATS JetStream
  • Experience with data pipeline and workflow management tools
  • Experience with Docker/Kubernetes and containerized deployments
  • Preferably some experience with Python
  • Preferably some experience with AWS cloud services
  • Good communication skills in English; able to communicate effectively and ask questions

Additional skills - the edge you have

  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores, such as Databricks Lakehouse, NATS JetStream and Kafka.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

Additional Information

Our offer – professional development, personal growth 

  • Flexible employment and remote work  
  • International projects with leading global clients 
  • International business trips  
  • Non-corporate atmosphere 
  • Language classes 
  • Internal & external training 
  • Private healthcare and insurance  
  • Multisport card 
  • Well-being initiatives 

Position at: Software Mind Poland
