You are invited to a role that embodies the fusion of technology and innovation. We seek a Data Platform Engineer who can construct robust data pipelines, integrate multiple platforms, and foster a culture of data-driven decision-making.
Our ideal candidate is a seasoned professional with a deep understanding of distributed computing, ETL development, and data modeling. You will work closely with stakeholders to clarify requirements and ensure alignment with business objectives.
- Collaborate with cross-functional teams during sprint planning sessions
- Design and implement technical solutions leveraging PySpark and Python for ETL development (see the illustrative sketch after this list)
- Integrate data platforms with other systems, including incident and monitoring tools
- Optimize existing ETL processes for improved performance and reliability
- Develop and maintain unit and integration tests to ensure quality
- Provide support to QA teammates during the acceptance process
- Troubleshoot production incidents as a third-line engineer
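To give a concrete sense of the day-to-day ETL work described above, here is a minimal, purely illustrative PySpark sketch. The paths, table layout, and column names are hypothetical placeholders, not a description of our actual pipelines.

```python
# Minimal PySpark ETL sketch. All paths and column names below
# (orders, amount, the S3 locations, etc.) are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw event data from a (hypothetical) landing zone.
raw = spark.read.parquet("s3://data-lake/raw/orders/")

# Transform: basic cleansing plus a daily aggregate.
daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "country")
       .agg(
           F.countDistinct("order_id").alias("orders"),
           F.sum("amount").alias("revenue"),
       )
)

# Load: write the curated table, partitioned by date for efficient pruning.
(
    daily.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://data-lake/curated/orders_daily/")
)
```

Partitioning the curated output by date is one common lever for the kind of performance and reliability optimization mentioned in the responsibilities above.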
To be successful in this role, you will need:
- A Bachelor's degree in IT or a related field
- Minimum 8 years of experience in IT / Data-related roles

As a Data Platform Engineer, you will have the opportunity to develop your skills in:
- PySpark for distributed computing and Python for ETL development
- Advanced SQL skills for writing and optimizing complex queries
- Familiarity with ETL tools, processes, and data warehousing platforms
- Solid understanding of data modeling and dimensional modeling
- Experience with version control tools such as Git
- Knowledge of monitoring tools to track pipeline performance
- Agile methodologies and collaboration tools
- Strong problem-solving and communication skills
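Likewise, the unit- and integration-testing responsibility mentioned earlier, combined with the PySpark skills listed here, might look something like the following pytest sketch. The transformation, schema, and function names are assumptions made for illustration only.

```python
# Hypothetical pytest sketch for unit-testing a PySpark transformation.
# The transformation, schema, and sample data are illustrative assumptions.
import pytest
from pyspark.sql import SparkSession, functions as F


def completed_orders_per_day(df):
    """Example transformation under test: daily count of completed orders."""
    return (
        df.filter(F.col("status") == "COMPLETED")
          .withColumn("order_date", F.to_date("created_at"))
          .groupBy("order_date")
          .count()
    )


@pytest.fixture(scope="session")
def spark():
    # A local SparkSession is enough for fast, isolated unit tests.
    return (
        SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    )


def test_completed_orders_per_day(spark):
    rows = [
        ("1", "COMPLETED", "2024-01-01"),
        ("2", "CANCELLED", "2024-01-01"),
        ("3", "COMPLETED", "2024-01-02"),
    ]
    df = spark.createDataFrame(rows, ["order_id", "status", "created_at"])

    result = {
        r["order_date"].isoformat(): r["count"]
        for r in completed_orders_per_day(df).collect()
    }

    assert result == {"2024-01-01": 1, "2024-01-02": 1}
```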