Role Summary
We are seeking an experienced Data Engineer to design, build, and optimise modern data pipelines and transformation workflows on cloud-based platforms. The role will focus on ingesting raw source data, structuring it into curated layers, and ensuring it is reliable, governed, and optimised for analytics and reporting.
Key Responsibilities
- Design, develop, and maintain data ingestion pipelines using cloud ETL services (e.g., AWS Glue, AWS DMS, Azure Data Factory, or equivalent); a representative pipeline sketch follows this list.
- Transform and integrate source system data into structured formats across landing, curated, and reporting layers.
- Collaborate with Data Architects to implement canonical data models and conformed dimensions.
- Rebuild or migrate existing transformation logic from legacy BI/ETL tools into modern data pipelines.
- Support migration of historical datasets into cloud storage and analytics layers.
- Implement logging, monitoring, and exception handling to ensure reliability and auditability of pipelines.
- Work with BI and application engineers to ensure dashboards and workflows operate effectively on the curated data layer.
- Participate in data validation and reconciliation exercises against legacy outputs (a reconciliation sketch also follows this list).
- Contribute to user acceptance testing (UAT) cycles and provide timely fixes for defects.
- Document pipelines, data lineage, and transformation logic for long-term maintainability.
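To illustrate the kind of work involved, here is a minimal PySpark sketch of a landing-to-curated step with the logging and exception handling this role is expected to build in. AWS Glue jobs are commonly written in PySpark, so plain PySpark stands in here; the S3 paths, table name, and column names (order_id, order_ts) are hypothetical placeholders, not details of the actual environment.

```python
# Minimal landing-to-curated sketch in PySpark. All paths and column
# names below are hypothetical placeholders for illustration only.
import logging

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders_curation")

RAW_PATH = "s3://example-landing/orders/"      # hypothetical landing zone
CURATED_PATH = "s3://example-curated/orders/"  # hypothetical curated zone

def run() -> None:
    spark = SparkSession.builder.appName("orders-curation").getOrCreate()
    try:
        raw = spark.read.parquet(RAW_PATH)
        log.info("Read %d raw rows", raw.count())

        # Structure raw data for the curated layer: deduplicate on the
        # business key, cast to typed columns, stamp a load date for
        # partitioning and audit.
        curated = (
            raw.dropDuplicates(["order_id"])
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .withColumn("load_date", F.current_date())
        )

        curated.write.mode("overwrite").partitionBy("load_date").parquet(CURATED_PATH)
        log.info("Wrote curated rows to %s", CURATED_PATH)
    except Exception:
        # Log with the traceback and re-raise so the scheduler sees a
        # failed run: failures stay visible and auditable.
        log.exception("Curation job failed")
        raise
    finally:
        spark.stop()

if __name__ == "__main__":
    run()
```

The design choice worth noting is that the job fails loudly rather than swallowing errors, which is what makes pipeline runs monitorable and auditable.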
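Likewise, a minimal sketch of the validation-and-reconciliation responsibility, comparing a legacy export against the new curated output on row counts and a summed measure. The file names and columns (customer_id, amount) are hypothetical, and reading Parquet with pandas assumes pyarrow or fastparquet is installed.

```python
# Minimal reconciliation sketch: legacy export vs. new curated output.
# File and column names are hypothetical placeholders.
import pandas as pd

legacy = pd.read_csv("legacy_orders_report.csv")     # hypothetical legacy export
curated = pd.read_parquet("curated_orders.parquet")  # hypothetical curated extract

checks = {
    "row_count_match": len(legacy) == len(curated),
    "total_amount_match": abs(legacy["amount"].sum() - curated["amount"].sum()) < 0.01,
}

# Per-key deltas show exactly where the two outputs diverge.
diff = (
    legacy.groupby("customer_id")["amount"].sum()
    .subtract(curated.groupby("customer_id")["amount"].sum(), fill_value=0)
)
mismatches = diff[diff.abs() > 0.01]

print(checks)
print(f"{len(mismatches)} customer(s) with differing totals")
```

In practice the tolerance threshold (0.01 here) would be agreed with stakeholders per measure, since rounding behaviour often differs between legacy and rebuilt logic.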
Skills Required
Azure Data Factory, AWS Glue, AWS DMS