Senior Data Engineer

Ho Chi Minh City, VN, 700000

GFT Technologies

We support our clients with state-of-the-art IT solutions, technologies, and consulting in their digital transformation.


What do we do?

As a pioneer of digital transformation, GFT develops sustainable solutions across new technologies – from cloud engineering and artificial intelligence to blockchain/DLT. With its deep technological expertise, strong partnerships, and comprehensive market know-how, GFT advises the financial and insurance sectors as well as the manufacturing industry. Through the intelligent use of IT solutions, GFT increases productivity and creates added value for clients. Companies gain easy and secure access to scalable IT applications and innovative business models.


Who are we?

Having started in Germany in 1987, GFT Technologies has grown into a trusted software engineering and consulting specialist for the international financial industry, counting many of the world’s largest and best-known banks among our clients. We are an organization that empowers you not only to explore but to raise your potential and seek out opportunities that add value. At GFT, diversity, equality, and inclusion are at the core of who we are. Ensuring a diverse and inclusive working environment for all communities is one of the main pillars of our diversity strategy, grounded in our core values and culture. We have been certified for 2022/23 as a ‘Great Place to Work’ in the APAC region. So, if you want the opportunity to work with an outstanding and progressive organization, this position could be right for you.


Role Summary:
We are seeking a highly motivated Senior Data Engineer with a minimum of 8 years of experience to join our dynamic data engineering team. As a Senior Data Engineer at GFT, you will play a pivotal role in designing, maintaining, and enhancing the analytical and operational services and infrastructure that are crucial to the organization's functions. You'll collaborate closely with cross-functional teams to ensure the seamless flow of data for critical decision-making processes.


Key Activities:

  • Data Infrastructure Design and Maintenance: Architect, maintain, and enhance analytical and operational services and infrastructure, including data lakes, databases, data pipelines, and metadata repositories, to ensure accurate and timely delivery of actionable insights.
  • Collaboration: Work closely with data science teams to design and implement data schemas and models, integrate new data sources with product teams, and collaborate with other data engineers to implement cutting-edge technologies in the data space.
  • Data Processing: Develop and optimize large-scale batch and real-time data processing systems to support the organization's growth and improvement initiatives.
  • Workflow Management: Utilize workflow scheduling and monitoring tools such as Apache Airflow and AWS Batch to ensure efficient data processing and management (see the Airflow sketch after this list).
  • Quality Assurance: Implement robust testing strategies to ensure the reliability and usability of data processing systems.
  • Continuous Improvement: Stay abreast of emerging technologies and best practices in data engineering, and propose and implement optimizations to enhance development efficiency.
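
For a concrete flavor of the workflow-management work above, here is a minimal Apache Airflow DAG sketch. It assumes Airflow 2.x, and the DAG id, schedule, and task bodies are hypothetical placeholders rather than an actual GFT pipeline:

    # Minimal sketch of a daily batch DAG in Apache Airflow 2.x.
    # DAG id, schedule, and task logic are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("extract raw records from the source system")

    def transform():
        print("clean and reshape the extracted records")

    def load():
        print("write curated results to the warehouse")

    with DAG(
        dag_id="daily_orders_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Run the three steps as a linear dependency chain.
        extract_task >> transform_task >> load_task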


Required Skills:

  • Technical Expertise: Proficient in Unix environments, distributed and cloud computing, Python frameworks (e.g., pandas, pyspark), version control systems (e.g., git), and workflow scheduling tools (e.g., Apache Airflow); a short PySpark sketch follows this list.
  • Database Proficiency: Experience with columnar and big data databases like Athena, Redshift, Vertica, and Hive/Hadoop.
  • Cloud Services: Familiarity with AWS services such as Glue, EMR, EC2, S3, and Lambda, or with equivalent services from other cloud providers.
  • Containerization: Experience with container management and orchestration tools like Docker, ECS, and Kubernetes.
  • CI/CD: Knowledge of CI/CD tools such as Jenkins, CircleCI, or AWS CodePipeline.
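
As a rough illustration of the Python/PySpark side of this stack, the sketch below aggregates a day's order events; the S3 paths, schema, and column names are invented for this example:

    # Minimal PySpark batch-transform sketch; bucket names, paths,
    # and column names are hypothetical examples.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_orders_agg").getOrCreate()

    # Read raw order events from a (hypothetical) S3 location.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Aggregate order value and count per customer per day.
    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("customer_id", "order_date")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"),
        )
    )

    # Write back partitioned by date for downstream consumers.
    (
        daily.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-bucket/curated/daily_orders/")
    )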


Nice-to-have skills:

  • Programming Languages: Familiarity with JVM languages like Java or Scala.
  • Database Technologies: Experience with RDBMS (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., DynamoDB, Redis).
  • BI Tools: Exposure to enterprise BI tools like Tableau, Looker, or PowerBI.
  • Data Science Environments: Understanding of data science environments like AWS Sagemaker or Databricks.
  • Monitoring and Logging: Knowledge of log ingestion and monitoring tools like ELK stack or Datadog.
  • Data Privacy and Security: Understanding of data privacy and security tools and concepts.
  • Messaging Systems: Familiarity with distributed messaging and event streaming systems like Kafka or RabbitMQ (a minimal producer sketch follows this list).
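
For the messaging bullet, here is a minimal event-publishing sketch using the kafka-python client; the broker address, topic name, and payload are placeholders:

    # Minimal Kafka producer sketch using the kafka-python client.
    # Broker address, topic, and payload are hypothetical placeholders.
    import json

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",  # placeholder broker
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Publish one order event to a (hypothetical) topic.
    producer.send("orders.created", {"order_id": 123, "amount": 42.5})
    producer.flush()  # block until the event is actually delivered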


What can we offer you?

  • Competitive salary
  • 13th-month salary guarantee
  • Performance bonus
  • Professional English course for employees
  • Premium health insurance
  • Extensive annual leave