WHAT IS YOUR ROLE:
As Boldr’s Data Engineer Lead, you will set up data pipeline architectures, pull data from different data sources, and provide ready-to-use datasets for our analytics team. You will be in charge of the extract-transform-load (ETL) process not only for all of Boldr’s data but for some of our clients’ data as well. You will also ensure that all the databases you handle are up to par in terms of performance, integrity, and security. You will support our growing analytics practice to create insights for our ever-expanding client base.
WHY DO WE WANT YOU:
We are currently looking for impact-driven individuals who are passionate about helping Boldr grow and achieve our Purpose. We expect our Team to become our ultimate partners in success by always giving 110% in everything, sharing their talents and quirks, and championing our core values: Curious, Dynamic, and Authentic.
WHAT WILL YOU DO:
- Lead the design, development, and maintenance of an advanced data pipeline architecture.
- Expertly compile extensive and intricate datasets that align precisely with both functional and non-functional business requirements.
- Spearhead the identification, design, and execution of internal process enhancements, including automating manual tasks, optimizing data delivery, and reimagining infrastructure for enhanced scalability.
- Assume responsibility for overseeing and coordinating the activities and deliverables of the Data Engineering team, providing mentorship and guidance to junior team members.
- Conduct thorough reviews and proficiently manage source code using Git, ensuring high-quality codebase standards.
- Provide expertise in assisting Data Engineers in building the necessary infrastructure for efficient data extraction, transformation, and loading from a diverse range of data sources (e.g., Zendesk, Freshdesk, QuickBooks, Sprout, Kustomer, etc.).
- Author comprehensive database documentation, encompassing data standards, procedures, and definitions for the data dictionary (metadata), contributing to a well-structured data ecosystem.
- Administer access permissions and privileges across all managed databases, upholding robust security practices.
- Collaborate closely with stakeholders and fellow Data Engineers to resolve complex data-related technical challenges and fulfill their evolving data infrastructure needs, demonstrating strong cross-functional teamwork.
WHAT WE’LL LIKE ABOUT YOU:
- Advanced SQL knowledge and experience working with relational databases, including query authoring and working familiarity with a variety of database systems.
- 5+ years of experience building and optimizing data pipelines, architectures, and datasets.
- Strong analytic skills related to working with unstructured datasets.
- Intermediate project management and organizational skills.
- 5+ years of experience with data pipeline and workflow management tools, particularly Airflow and AWS tools.
- Experience with ETL (Extract-Transform-Load), data integration, manipulation, transformation, and cleaning with scripting languages such as Python, Java, etc.
- Experience with AWS cloud services: Lambda, SNS, RDS, Redshift, API Gateway, S3, VPC, etc.
- Experience with Google Cloud Platform.
- Intermediate knowledge of data transfer, backup, recovery, security, integrity, and SQL.
- 5+ years of experience with RESTful services and APIs.
- A general understanding of the Philippine Data Privacy Act.
- Experience with object-oriented or functional scripting languages: Python, Java, C++, Scala, PHP, etc.
- 3+ years of programming experience.
- Working knowledge of version control tools such as Git and GitHub.
- AWS Solutions Architect (Associate) Certification is a must.
- AWS Solutions Architect (Professional) Certification is a plus.