Senior Data Engineer

MongoDB


You have experience with:

  • managing complex data processing projects using various frameworks like Spark
  • several programming languages (Python, Scala, Java, etc.)
  • streaming data processing frameworks like Kafka, KSQL, and Spark Streaming
  • time management and making realistic assessments of project complexity
  • mentoring/training other engineers 
  • a diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
  • different storage formats like Parquet, Avro, Arrow, and JSON
  • AWS services such as EMR, Lambda, S3, Athena, Glue, IAM, RDS, etc.
  • orchestration tools such as Airflow, Luigi, Azkaban, Cask, etc.
  • Git and GitHub
  • CI/CD Pipelines


  • Evaluate cutting edge technologies for possible incorporation into our platform’s architecture 
  • Enjoy wrangling huge amounts of data and exploring new data sets
  • Value code simplicity and performance
  • Obsess over data: everything needs to be accounted for and be thoroughly tested
  • Plan effective data storage, security, sharing, and publishing within the organization
  • Constantly think of ways to squeeze better performance out of data pipelines

Bonus Points

  • You are deeply familiar with Spark and/or Hive
  • You have expert-level experience with Airflow
  • You understand the differences between different storage formats like Parquet, Avro, Arrow, and JSON
  • You understand the tradeoffs between different schema designs like normalization vs denormalization
  • In addition to data pipelines, you’re also quite good with Kubernetes, Drone, and Terraform
  • You’ve built an end-to-end production-grade data solution that runs on AWS
  • You have experience building machine learning pipelines using tools like SparkML, TensorFlow, scikit-learn, etc.


As a Senior Data Engineer, you will:

  • Estimate task complexity, report progress, and voice risks to peers and managers
  • Both learn from and teach peers and junior engineers
  • Develop and maintain expertise in big data best practices
  • Design and build large-scale batch and real-time data pipelines with data processing frameworks like Spark on AWS
  • Help drive best practices in continuous integration and delivery
  • Help drive optimization, testing, and tooling to improve data quality
  • Collaborate with other software engineers, machine learning experts, and stakeholders, taking learning and leadership opportunities that will arise every single day

*MongoDB is an equal opportunities employer*
