Data Engineer

Austin, United States

Applications have closed

Atlassian

Atlassian's team collaboration software, including Jira, Confluence, and Trello, helps teams organize, discuss, and complete shared work.


Working at Atlassian
Atlassian can hire people in any country where we have a legal entity. Assuming you have eligible working rights and sufficient time zone overlap with your team, you can choose to work remotely or return to an office as they reopen (unless it’s necessary for your role to be performed in the office). Interviews and onboarding are conducted virtually, as part of being a distributed-first company.
Atlassian is looking for a Data Engineer to join our Go-To-Market Data Engineering (GTM-DE) team, which is responsible for building our data lake, maintaining our big data pipelines and services, and facilitating the movement of billions of messages each day. We work directly with business stakeholders and many platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.
On a typical day you will help our stakeholder teams ingest data faster into our data lake, find ways to make our data pipelines more efficient, or come up with ideas to help drive self-serve data engineering within the company. You will also build microservices and architect, design, and enable self-serve capabilities at scale to help Atlassian grow.
You’ll get the opportunity to work on an AWS-based data lake backed by a full suite of open-source projects such as Spark and Airflow. We are a team with little legacy in our tech stack, so you’ll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users’ experience.
More about you:
As a data engineer in the GTM-DE team, you will have the opportunity to apply your strong technical experience building highly reliable services to managing and orchestrating a multi-petabyte data lake. You enjoy working in a fast-paced environment and are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases.

On your first day, we'll expect you to have:

  • At least 3 years of professional experience as a software engineer or data engineer
  • A BS in Computer Science or equivalent experience
  • Strong programming skills (some combination of Python, Java, and Scala preferred)
  • Experience writing SQL, structuring data, and applying sound data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines and microservices
  • Experience with Spark, Hive, Airflow, and streaming technologies for processing large volumes of data
  • A willingness to accept failure, learn and try again
  • An open mind to try solutions that may seem crazy at first
  • Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS and the like)

It's preferred, but not required, that you have:

  • Experience building self-service tooling and platforms
  • Experience designing and building Kappa-architecture platforms
  • A passion for building and running continuous integration pipelines
  • Experience building pipelines with Databricks and familiarity with its APIs
  • Contributions to open-source projects (e.g., Airflow operators; a minimal operator sketch follows this list)
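For illustration only, here is a minimal sketch of the kind of custom Airflow operator the last bullet refers to. It assumes the Airflow 2.x BaseOperator API; the operator name and its behavior are hypothetical, not part of the role description.

    # Illustrative sketch: a minimal custom Airflow operator (Airflow 2.x
    # BaseOperator API assumed); the operator name and task are hypothetical.
    from airflow.models.baseoperator import BaseOperator

    class GreetOperator(BaseOperator):
        def __init__(self, name: str, **kwargs):
            super().__init__(**kwargs)
            self.name = name

        def execute(self, context):
            # Airflow calls execute() when the task instance runs.
            self.log.info("Hello, %s", self.name)
            return self.name

An operator like this would typically be wired into a DAG as a task, and contributed upstream through a provider package or the community operators repository.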
Our perks & benefits
To support you at work and play, our perks and benefits include ample time off, an annual education budget, paid volunteer days, and so much more.
About Atlassian
The world’s best teams work better together with Atlassian. From medicine and space travel, to disaster response and pizza deliveries, Atlassian software products help teams all over the planet. At Atlassian, we're motivated by a common goal: to unleash the potential of every team.
We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.
To learn more about our culture and hiring process, explore our Candidate Resource Hub.


Tags: Airflow APIs AWS Big Data Computer Science Databricks Data pipelines Data Warehousing Engineering Kinesis Microservices Open Source Pipelines Python Scala Spark SQL Streaming

Perks/benefits: Career development Travel

Region: North America
Country: United States
Category: Engineering Jobs
