Director, Data Engineering

Dallas, Texas, United States - Remote

Brado

Transform how your brand engages people on their most important journeys.


About us:

Brado is a digital marketing agency reinventing the way healthcare brands engage with people. Driven by insight, we offer precision engagement solutions that produce superior returns for our healthcare clients and better experiences for their healthcare customers. 

Our Values: At Brado, we value the individual. We believe work and life can be synergistic and should not be at odds; the joy and renewal you get from each should fuel the other. We have cultivated, and will continue to cultivate, a team that celebrates our diversity of thoughts, beliefs, backgrounds, and lifestyles. We are driven by our passion to do great work with great clients that are truly changing lives.

The Role:

The Director of Data Engineering owns the data strategy, the architecture of the right data platform, and the delivery of analytics-ready data products that meet business needs. They lead the development of the data pipelines and data products that allow analysts, AI/ML engineers, and data integrators across Brado's clients to accomplish their goals. They contribute to the vision for our modern data infrastructure on the Microsoft Azure cloud platform. They work closely with fellow engineers, data scientists, and reporting and measurement specialists to establish best practices for the systems and data products the business will use. They possess deep technical skills, are comfortable owning the data strategy and data infrastructure, and are excited about building a strong data foundation for the company. The Director of Data Engineering leads a small team of data engineers and develops junior talent.

Ideal candidates for this role will live in the St. Louis, MO, or Dallas/Ft. Worth, TX, areas. While our day-to-day work is done remotely, our teams gather in person as needed for the work.

Key Areas of Responsibility  

  • Designs and implements scalable data pipelines and analytics solutions on the Databricks platform.
  • Designs and implements scalable, high-performance data architectures. Understands data modeling techniques, including relational, dimensional, and NoSQL data models.
  • Builds and manages data warehouses using technologies like Amazon Redshift, Google BigQuery, or Snowflake. Optimizes data warehouse performance and cost.
  • Integrates data from core platforms like Marketing Automation, CRM, and Analytics into a centralized warehouse.
  • Builds Extract, Load, Transform (ELT) processes for ingesting and transforming data from various sources into a unified format (a minimal sketch follows this list).
  • Proficient in big data technologies such as the Hadoop ecosystem (HDFS, Hive, HBase), Apache Kafka, and Apache Flink. Leverages these technologies for large-scale data processing and real-time analytics.
  • Programs in languages such as Python, Scala, or Java. Writes efficient, maintainable code for data processing, analytics, and automation tasks.
  • Understands data governance principles, data privacy regulations (e.g., HIPAA, GDPR, CCPA), and best practices for ensuring data security and compliance.
  • Provides forward-thinking data and analytics solutions.
  • Leads and mentors a team of both internal and contracted data engineers.
  • Translates technical concepts into non-technical terms and influences decision-making.
  • Identifies complex data engineering challenges and devises innovative solutions. Thinks critically and makes data-driven decisions to optimize processes and systems.
  • Stays updated with the latest advancements in data engineering, cloud technologies, and industry trends. Adapts to evolving technologies and business requirements.
  • Develops and implements quality controls and departmental standards to ensure quality, organizational expectations, and regulatory requirements are met.
  • Contributes to development and education plans on data engineering capabilities, systems, standards, and processes.
  • Anticipates future demands of initiatives related to people, technology, budget, and business within the department, and designs and implements solutions to meet those needs.
  • Communicates results and business impacts of insight initiatives to stakeholders within and outside of the company.
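
For context on the stack named above, here is a minimal sketch of the kind of ELT step this role oversees: ingesting a raw CRM export into Delta Lake on Databricks with PySpark, then shaping it into an analytics-ready table. All paths, schema and table names, and columns are hypothetical placeholders, not Brado systems.

```python
# Minimal ELT sketch (hypothetical names throughout): land raw CRM data in a
# "bronze" Delta table, then publish a cleaned, analytics-ready "silver" table.
# Assumes the bronze/silver schemas already exist in the metastore.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("crm-elt-sketch").getOrCreate()

# Extract + Load: ingest the raw export as-is so the source data is preserved.
raw = spark.read.json("/mnt/raw/crm/contacts/")  # hypothetical landing path
raw.write.format("delta").mode("append").saveAsTable("bronze.crm_contacts")

# Transform: standardize and deduplicate into a unified, analytics-ready table.
contacts = (
    spark.read.table("bronze.crm_contacts")
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["email"])
)
contacts.write.format("delta").mode("overwrite").saveAsTable("silver.crm_contacts")
```

The bronze/silver layering follows the common Databricks "medallion" convention for Delta Lake pipelines; the posting itself does not prescribe a specific layout.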

Requirements

  • 10 years of experience with modern data engineering projects and practices (designing, building, and deploying scalable data pipelines), including 5+ years of experience deploying cloud-based data infrastructure solutions.
  • Strong understanding and hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). This includes knowledge of cloud services like compute, storage, networking, and databases.
  • 3+ years of experience building data pipelines for AI/ML models using PySpark or Python.
  • 4+ years of experience building data pipelines with modern tools such as Databricks, Fivetran, and dbt, including data processing using Apache Spark, Delta Lake, Unity Catalog, and MLflow.
  • Familiarity with lakehouse architecture and Delta Lake.
  • Bachelor's degree in Computer Science, Engineering, Statistics, Informatics, Information Systems, or another quantitative field; Master's degree preferred.
  • Aligns with our values: People, Commitment, Aspiration, Trustworthiness & Impact 

Benefits

  • Health Care Plan (Medical, Dental & Vision)
  • Retirement Plan (401k, IRA)
  • Life Insurance (Basic, Voluntary & AD&D)
  • Paid Time Off (Vacation, Sick & Public Holidays)
  • Family Leave (Maternity, Paternity)
  • Short Term & Long Term Disability
  • Training & Development
  • Work From Home

