You will take ownership of our data infrastructure, ensuring its scalability, reliability, and
efficiency. You will collaborate with cross-functional teams to build robust data pipelines,
optimize data workflows, and ensure the availability and integrity of our data. You will drive best
practices and contribute to the overall success of our data engineering efforts.
What You’ll Be Doing
Design, develop, and maintain scalable and efficient data pipelines, ETL processes, and
data integration solutions.
Collaborate with data scientists, analysts, and other stakeholders to understand data
requirements and ensure data availability, quality, and reliability.
Optimize data workflows and processing systems for performance and scalability,
leveraging distributed computing and parallel processing techniques.
Implement and manage data storage and retrieval systems, ensuring efficient data
access and retrieval times.
Monitor and ensure the integrity and security of data, implementing data governance and
data protection measures.
Collaborate with cross-functional teams to define data architecture and infrastructure
requirements, ensuring scalability and performance.
Identify and recommend opportunities for data infrastructure improvements, automation,
and process enhancements.
Stay up-to-date with emerging technologies, tools, and trends in data engineering,
evaluating and recommending relevant solutions.
Collaborate with DevOps and Engineering teams to implement and maintain data
infrastructure and deployment processes.
Participate in code reviews, ensuring adherence to coding standards, best practices, and
data engineering principles.
Collaborate with stakeholders to define and implement data governance policies,
ensuring compliance with regulatory requirements.
Document and communicate data engineering processes, solutions, and best practices
to technical and non-technical stakeholders.
Collaborate with external partners and vendors as needed on data integration.
What You’ll Need for This Position
Proven experience as a Data Engineer, with a focus on designing and implementing
scalable data infrastructure and pipelines.
Experience with Databricks is strongly desired.
Strong technical skills in data engineering, ETL, data pipelines, and data integration.
Proficiency in programming languages such as Python, Java, or Scala.
Experience with big data technologies such as Hadoop, Spark, or Snowflake.
Strong understanding of data modeling, database design principles, and SQL.
Familiarity with cloud-based data platforms (e.g., AWS, Azure, GCP) and related services.
Knowledge of data governance, data security, and data protection best practices.
Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
Excellent problem-solving and analytical skills, with a focus on continuous improvement.
Strong communication and collaboration abilities, with the ability to effectively
communicate technical concepts to non-technical stakeholders.
Familiarity with Agile methodologies and experience working in an Agile development environment.
Who You’ll Be Working With
You will work in the Data team, collaborating closely with our Engineering team.