Data Engineer

Santiago de Querétaro, Querétaro, Mexico

Applications have closed

Charger Logistics Inc

Charger Logistics is a world-class asset-based carrier. We specialize in delivering your assets on time and on budget. With our diverse fleet of equipment, we can handle a range of freight, including dedicated loads, specialized hauls, temperature-controlled goods, and HAZMAT cargo.

Charger Logistics invests time and support in its employees to give them room to learn, grow their expertise, and work their way up. We are an entrepreneurial-minded organization that welcomes and supports individual ideas and strategies. Charger Logistics is seeking a well-rounded individual who can work in a fast-paced environment to join our team at the company’s office in Mexico.


Responsibilities:

  • Developing large and sophisticated web applications using a Python web framework (Django, Flask, etc.)
  • Applying object-oriented Python development practices
  • Designing, implementing, and maintaining large-scale batch and real-time data pipelines with complex data transformations (a brief illustrative sketch follows this list)
  • Building and consuming web services and REST APIs
  • Developing against relational databases, specifically MS SQL Server, and handling large dataset transactions
  • Working with HTML5, CSS3, and modern JavaScript frameworks
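
Taken together, these responsibilities amount to exposing data transformations behind small Python web services. Below is a minimal, hypothetical sketch (not from the posting) of a Flask REST endpoint that reads a fictitious `shipments` table from SQL Server and returns an aggregated view with pandas; all names and the connection string are placeholder assumptions.

```python
# Hypothetical sketch: a Flask REST endpoint backed by SQL Server and pandas.
# Table, column, and connection details are illustrative placeholders.
import pandas as pd
from flask import Flask, jsonify
from sqlalchemy import create_engine

app = Flask(__name__)

# In practice the connection string would come from configuration, not code.
engine = create_engine(
    "mssql+pyodbc://user:password@server/warehouse"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

@app.route("/api/shipments/daily-volume", methods=["GET"])
def daily_volume():
    # Pull raw rows, then aggregate with pandas as a simple transformation step.
    df = pd.read_sql("SELECT shipped_at, weight_kg FROM shipments", engine)
    daily = (
        df.assign(day=df["shipped_at"].dt.date.astype(str))
          .groupby("day", as_index=False)["weight_kg"]
          .sum()
    )
    return jsonify(daily.to_dict(orient="records"))

if __name__ == "__main__":
    app.run(debug=True)
```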

Requirements


Mandatory Skills

  • At least 2 years of practical development experience in Python
  • Strong understanding of data structures, algorithms, and development workflows
  • Experience and interest in cloud platforms
  • Experience working with pandas or another machine learning library in Python
  • Strong understanding of object-oriented Python development
  • Relational database development, specifically MS SQL Server, and experience handling large dataset transactions (see the sketch after this list)
  • Advanced English is a MUST
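
As a rough illustration of the pandas plus MS SQL Server combination called out above, the sketch below streams a hypothetical `raw_invoices` table in chunks, cleans each chunk, and appends it to a target table so memory use stays bounded; every table, column, and connection name here is an assumption for illustration only.

```python
# Hypothetical sketch: chunked processing of a large SQL Server table with pandas.
# All table, column, and connection names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:password@server/warehouse"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

def load_clean_invoices(chunksize: int = 50_000) -> None:
    # Stream the source table in chunks instead of loading it all at once.
    for chunk in pd.read_sql("SELECT * FROM raw_invoices", engine, chunksize=chunksize):
        # Example transformation: drop incomplete rows, normalize one column.
        clean = chunk.dropna(subset=["invoice_id", "amount"])
        clean = clean.assign(currency=clean["currency"].str.upper())
        # Append each processed chunk so memory use stays bounded.
        clean.to_sql("clean_invoices", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load_clean_invoices()
```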

Will be a plus:

  • Proficiency in web services and REST APIs
  • ETL with SQL Server Integration Services (SSIS)
  • Experience working with Kafka
  • Meaningful experience extracting and loading data from multiple databases, including RDBMS, NoSQL, and optionally big data platforms (Redshift, Snowflake, Hive)
  • Real-world experience with JVM languages (Java, Scala)
  • Working with data pipeline orchestrators (Airflow, Argo Workflows, Kubeflow, etc.) (a brief sketch follows this list)
  • DevOps experience
  • Test automation experience, to help out on the testing side when needed
  • Experience in HTML5, CSS3, and modern JavaScript frameworks
  • Experience with Apache server
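
For the orchestration items above, here is a minimal, hypothetical Airflow 2 DAG sketching a daily extract, transform, and load flow; the task bodies, names, and payloads are invented for illustration and are not part of the posting.

```python
# Hypothetical sketch: a minimal daily extract -> transform -> load DAG in Airflow 2.
# Task bodies and names are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # In practice: pull rows from an API, a Kafka topic, or SQL Server.
    return [{"shipment_id": 1, "weight_kg": 120.0}]

def transform(ti):
    rows = ti.xcom_pull(task_ids="extract")
    # Example transformation: convert kilograms to tonnes.
    return [dict(row, weight_t=row["weight_kg"] / 1000) for row in rows]

def load(ti):
    rows = ti.xcom_pull(task_ids="transform")
    # In practice: write to the warehouse (Redshift, Snowflake, SQL Server, ...).
    print(f"loaded {len(rows)} rows")

with DAG(
    dag_id="daily_freight_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```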

Benefits

We offer a competitive pay package, savings fund, health benefits, food coupons, a performance-based bonus, paid time off, and a Christmas bonus.

Tags: Airflow APIs Data pipelines DevOps Django ETL Flask Java JavaScript Kafka Kubeflow Machine Learning MS SQL NoSQL Pandas Pipelines Python RDBMS Redshift REST API Scala Snowflake SQL SSIS

Perks/benefits: Career development Competitive pay Health care Salary bonus

Region: North America
Country: Mexico
Category: Engineering Jobs
