Working at Atlassian
Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, as part of being a distributed-first company.
Your future team
The Data Engineering team is responsible for building and managing the data pipelines and data visualizations that power analytics, machine learning, and AI across Atlassian, including finance, growth, product analysis, customer support, sales, marketing, and people functions. We maintain Atlassian's enterprise data lake and build a creative, reliable, and scalable analytics data model that provides a unified way of analyzing our customers and our products, and of driving growth and innovation.
You'll be joining a team that is very smart and very direct. We ask hard questions and challenge each other to improve our work continually. We are self-driven yet collaborative. We're all about enabling growth by delivering the right data and insights in the right way to partners across the company.
What you'll do
- Partner with product managers, analytics, and business teams to review and gather data, reporting, and analytics requirements, and build trusted, scalable data models, data extraction processes, and data applications that help answer complex questions.
- Design and implement data pipelines to ETL data from multiple sources into a central data warehouse.
- Design and implement real-time data processing pipelines using Apache Spark Streaming.
- Improve data quality by leveraging internal tools/frameworks to automatically detect and mitigate data quality issues.
- Develop and implement data governance procedures to ensure data security, privacy, and compliance.
- Evaluate and implement new technologies to improve data processing and analysis.
Your background
- A Bachelor's degree in Computer Science or equivalent, with 5+ years of professional experience as a Data Engineer.
- Experience building scalable data pipelines in Spark using Airflow scheduler/executor framework or similar scheduling tools.
- Experience with Databricks and its APIs.
- Experience with modern databases (Redshift, DynamoDB, MongoDB, Postgres, or similar) and data lakes.
- Proficiency in one or more programming languages such as Python or Scala, plus rock-solid SQL skills.
- Experience championing automated builds and deployments using CI/CD tools like Bitbucket and Git.
- Experience working with large-scale, high-performance data processing systems (batch and streaming).
Great to have, not mandatory
- Experience working for SaaS companies.
- Experience with machine learning.
- Code contributions to open source projects.
- Experience building self-service tooling and platforms.
Our perks & benefits
Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together.
We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.
To provide you with the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversations with them.
To learn more about our culture and hiring process, visit go.atlassian.com/crh.