Data Engineer

Remote - Mumbai, Maharashtra, India

Applications have closed

Packt

Packt is the online library and learning platform for professional developers. Learn Python, JavaScript, Angular and more with eBooks, videos and courses

Who are we?

Packt is a global, technical learning publisher with a turnover of £18m and over 400 staff based across the globe, primarily in India and the UK. Principally serving B2C customers across the world, its major clients also include Microsoft, IBM, and Oracle.

We are a leading publisher of technical learning content with the ability to publish books on new and emerging tech faster than any other publisher in the world. Our mission is to increase the shared value of deep tech knowledge. We are technical content engineers, and we work with some of the most interesting technical minds and ground-breaking creators on the planet.

What are we looking for?

We are looking for an engineer who enjoys all things data and wants to drive the Packt data platform forward. The ideal candidate has a robust understanding of modern data platforms and hands-on experience creating data pipelines. Experience with ETL/ELT, data modelling, and Python is required to help optimize our data platform.

What are your expectations of me?

We expect experience with creating and maintaining data pipelines, including working with large/complex data sets to meet business requirements. The Data Engineer will support our software developers and data analysts on data initiatives and will ensure data delivery architecture is consistent throughout.

From a technical standpoint, we require a candidate who has demonstrable experience with SQL (MySQL/PostgreSQL), Python, and ELT/ETL tools. We currently use data ingestion tools such as Fivetran/Airbyte and dbt to load and transform data into our Redshift data warehouse, and this role will give the right candidate the opportunity to implement best practice across these tools.

What are the role responsibilities?

  • Create and maintain optimal data pipeline architecture
  • Own and optimize the Packt Datahub on AWS Redshift
  • Develop and maintain ELT/ETL processes
  • Build and maintain data models in dbt
  • Optimise pipelines using Apache Airflow
  • Work closely with the BI team to support their data needs
  • Work with data stakeholders across business units to support their data needs


What skills are we looking for?

We’re looking for someone with skills across the data spectrum, but more importantly someone who is enthused by the prospect of picking up new technical skills as we develop a more sophisticated data platform. This person needs strong English and communication skills, and a keen eye for detail.

For the day-to-day activities we are looking for someone with:

  • Advanced working SQL knowledge and experience with relational databases
  • Experience with optimizing cloud data warehouses and implementing best practices in Amazon Redshift
  • Experience with data modelling principles, including star/snowflake schema and normalization
  • Experience with an ELT/ETL tool, such as Matillion, Airflow, Glue, or dbt
  • Experience with Python programming, including packages such as NumPy, pandas, and requests
  • Comfortable using GitHub (PRs, branching etc.)

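As a purely illustrative sketch of the star-schema modelling mentioned above (hypothetical table and column names, shown in pandas rather than the Redshift/dbt stack this role actually uses):

```python
import pandas as pd

# Hypothetical product dimension table.
dim_product = pd.DataFrame({
    "product_id": [1, 2, 3],
    "product_name": ["eBook", "Video", "Course"],
    "category": ["Text", "Media", "Media"],
})

# Hypothetical fact table of orders, keyed to the dimension.
fact_orders = pd.DataFrame({
    "order_id": [101, 102, 103, 104],
    "product_id": [1, 2, 2, 3],
    "revenue": [20.0, 15.0, 45.0, 50.0],
})

# Star-schema lookup: join the fact table to its dimension ...
orders = fact_orders.merge(dim_product, on="product_id", how="left")

# ... then roll revenue up by a dimension attribute, as a BI query might.
revenue_by_category = orders.groupby("category", as_index=False)["revenue"].sum()
print(revenue_by_category)
```

In a warehouse, the same shape would typically be expressed as a dbt SQL model joining a fact table to its dimensions; the pandas version is just a compact way to show the pattern.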

Why work at Packt?

We recognize how important our people are to the success of our business and acknowledge and support the things that are important to them. We focus on the full employee experience. We aim to create a positive working environment that supports personal and collective development – where our people can not only grow their skill sets, continuously challenge themselves, and develop their careers, but also work together as a team and with our customers.

We try to achieve this by:

  • Ensuring a positive working environment for all employees, supportive of flexible and remote work.
  • Supporting individual development through growth opportunities and internal mobility.
  • Encouraging social collaboration within teams and communities and knowledge-sharing.
  • Fostering a culture that supports a healthy work-life balance.

We seek to achieve this by:

  • Providing a broad range of benefits and initiatives.
  • Advocating a healthy lifestyle where wellbeing is seen as an essential responsibility.
  • Celebrating success stories and encouraging peer-to-peer recognition.
  • Fostering a culture of self-development.
  • Providing a work environment where the best idea wins, and challenge is encouraged.

Benefits

You will also have access to a range of flexible benefits that are designed to help you make the most of your time with us.

  • Bonus scheme
  • 21 days holiday (plus bank holidays)
  • Pension contribution
  • Insurance
  • Flexible hours: choose how, when, and where you work.


Salary range: up to 15 lakhs per year

Note:

Packt offers flexible work hours and part-time roles. If you are interested in this role on a reduced hours basis, we are happy to discuss that.

* Salary range is an estimate based on our AI, ML, Data Science Salary Index.

