Data Engineer

Remote, United States


Slack is looking for a data engineer to join our Data Modeling & Architecture team. In this role, you will work cross-functionally with business domain experts, analytics, and engineering teams to design and implement our Data Warehouse model. You will design, implement, and scale data pipelines that transform billions of records into actionable data models that enable data insights.

You will work on initiatives to formalize data governance and management practices and to rationalize our information lifecycle and key company metrics. You will provide hands-on technical support to build trusted and reliable metrics.

You have strong technical skills, are comfortable contributing to a nascent data ecosystem, and can build a strong data foundation for the company. You are a self-starter, detail and quality oriented, and passionate about having a huge impact at Slack.

What you will be doing

  • You'll translate business requirements into data models that are easy to understand and usable by different disciplines across the company
  • You'll design, implement, and build pipelines that deliver data with measurable quality within SLAs
  • You'll partner with business domain experts, data analysts and engineering teams to build foundational data sets that are trusted, well understood, aligned with business strategy and enable self-service
  • You'll be a champion of the overall strategy for data governance, security, privacy, quality and retention that will satisfy business policies and requirements
  • You'll own and document foundational company metrics with a clear definition and data lineage
  • You'll identify, document and promote best practices

What you should have

  • You have 2+ years of industry data engineering experience working in data architecture, data modeling, master data management, and metadata management
  • You have recent accomplishments working with relational as well as NoSQL data stores, methods, and approaches (logging, columnar storage, star and snowflake schemas, dimensional modeling)
  • You have a proven track record of scaling and optimizing schemas and performance-tuning SQL queries and ETL pipelines in OLAP and Data Warehouse environments
  • You have demonstrated skills with either the Python or Java programming language
  • You are familiar with data governance frameworks, SDLC, and Agile methodology
  • You have excellent written and verbal communication and interpersonal skills, and the ability to collaborate effectively with technical and business partners
  • Hands-on experience with Big Data technologies (e.g., Hadoop, Hive, Spark) is a big plus
  • You have a bachelor's degree in Computer Science, Engineering or a related field, or equivalent training, fellowship, or work experience

Slack has a positive, diverse, and supportive culture—we look for people who are curious, inventive, and work to be a little better every single day. In our work together we aim to be smart, humble, hardworking and, above all, collaborative. If this sounds like a good fit for you, why not say hello?

Slack is registered as an employer in many, but not all, states. If you are not located in or able to work from a state where Slack is registered, you will not be eligible for employment.

Slack is an Equal Opportunity Employer and participant in the U.S. Federal E-Verify program. Women, minorities, individuals with disabilities and protected veterans are encouraged to apply. Slack will consider qualified applicants with criminal histories in a manner consistent with the San Francisco Fair Chance Ordinance.
