Data Engineer

Chicago, Illinois, United States

Full Time · Mid-level / Intermediate · USD 30K - 112K *

Dialexa

At Dialexa, we design and engineer award-winning technology products across mobile, web, Internet of Things (IoT) and embedded device platforms.



Dialexa is expanding its Data Engineering practice and is looking for great talent to join us in building intelligent, next-generation platforms. Are you tired of just pushing out ones and zeros, and do you yearn to solve hard problems? Do you want to work on an awesome team of data scientists and developers in a fully agile process? Do you want your creative technical ideas to be heard and implemented? Do you want to create applications that people love?

Dialexa is expanding and growing our reputation for high-quality end-to-end technology product development. We develop for a wide array of technology platforms and frameworks including mobile, web, Internet of Things (IoT), wearables, and embedded devices. We have an awesome culture that includes perks such as healthcare benefits, 401(k), PTO, gym discounts, and much, much more! To learn more about Dialexa, please visit www.dialexa.com.

Responsibilities

  • Execute projects collaborating with data scientists, data engineers, software engineers, and designers
  • Guide clients on pros and cons of approaches to solving business problems
  • Design and build high-level architectures tailored to client needs
  • Write and monitor ETL processes using distributed computing techniques
  • Help drive business results and participate in practice and company growth initiatives
  • Analyze, explore, and visualize data to produce actionable information
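To give a flavor of the ETL work described above, here is a minimal, hypothetical sketch of an extract-transform-load pass in plain Python. In practice this role would use distributed tooling such as Spark and Airflow (listed below); the data, function names, and in-memory "warehouse" here are illustrative assumptions only.

```python
import csv
import io

# Hypothetical raw source data standing in for an upstream extract.
RAW = """user_id,amount
1,10.50
2,3.25
1,4.00
"""

def extract(text):
    """Extract: parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: aggregate amounts per user (a typical warehouse-style rollup)."""
    totals = {}
    for row in rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + float(row["amount"])
    return totals

def load(totals, sink):
    """Load: write aggregates into a sink (here, a dict standing in for a table)."""
    sink.update(totals)
    return sink

warehouse = load(transform(extract(RAW)), {})
print(warehouse)  # {'1': 14.5, '2': 3.25}
```

At production scale the same extract/transform/load shape maps onto Spark DataFrames for the transform step and an orchestrator such as Airflow to schedule and monitor the pipeline.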

Expectations (the more the better - we want great talent!)

  • Bachelor’s degree from a 4-year college/university in a relevant field (computer science, engineering, etc.), or equivalent experience and certifications
  • Portfolio of applications that clearly demonstrates coding proficiency, an eye for quality, an understanding of security, and attention to detail in areas such as performance characteristics
  • Knowledge of common data engineering languages (e.g. Python, Scala, Java, R, etc.)
  • Experience with data lake platforms (e.g. Databricks, AWS Lake Formation, Azure Data Lake)
  • Experience with data warehouse technologies (e.g. Snowflake, Redshift, BigQuery)
  • Experience with or exposure to:
    • Cloud infrastructures, services, and certifications (e.g. AWS, Azure, Google Cloud)
    • Data modeling and testing frameworks (DBT, Deequ, Great Expectations)
    • Data and pipeline observability (Monte Carlo, Databand, Datafold)
    • Distributed processing (Spark, PySpark, Dask, Presto/Athena/Trino)
    • Job orchestration (Airflow, Prefect, Argo, Kubeflow, Azure Data Factory)
    • Devops and CI/CD tools (Git, Jenkins, GitHub CI/CD, CircleCI, Azure Pipelines, Terraform)
    • Messaging platforms (Kafka, Kinesis, Google Cloud Pub/Sub, Azure Event Hubs)
  • Must be comfortable being online and available between approximately 8 AM and 6 PM CST
  • Must be able to travel to Dallas, Texas or the client location as requested (at minimum once a quarter)

 

This is a remote eligible posting available everywhere in the United States except for Colorado.

 

* Salary range is an estimate based on our salary survey at salaries.ai-jobs.net
Job perks/benefits: 401(k) matching, startup environment
Job region: North America
Job country: United States
