Data Engineer – Azure / Microsoft Fabric
Role Summary
As a Data Engineer with 4–5 years of experience, you will design, implement, and maintain scalable, production-grade data pipelines and data platform solutions. Your work will deliver reliable, high-quality data for analytics, reporting, and ML / AI initiatives, enabling data-driven decision making across the business.
What You’ll Do
- Build, deploy, and manage end-to-end ETL / ELT pipelines using Azure Data Factory (ADF) and Azure Databricks (ADB) / Microsoft Fabric.
- Ingest data from diverse sources; transform, clean, and store it in Azure Data Lake / OneLake or Delta Lake / Lakehouse, enabling downstream analytics and reporting.
- Design and maintain robust data models and warehouse / lakehouse schemas, ensuring data integrity, reliability, and performance.
- Write efficient data transformation and processing logic using PySpark, Python, and / or Scala, optimizing for large-scale data workloads.
- Collaborate with data scientists, analysts, product owners, and other stakeholders to understand data needs and deliver appropriate solutions.
- Implement data governance, quality, and security standards; ensure compliance and data reliability across the data lifecycle.
- Apply version control and CI / CD practices to pipelines and infrastructure (e.g. Git / Azure DevOps), supporting seamless deployment, monitoring, and maintenance.
- Troubleshoot, monitor, and optimize data pipelines to ensure performance, scalability and operational excellence.
Required Skills & Experience
- 4–5 years of hands-on experience as a Data Engineer working with Azure Data Factory (ADF), Azure Databricks / Microsoft Fabric, and data-lake / lakehouse storage (Azure Data Lake / OneLake / Delta Lake).
- Strong programming skills in PySpark, Python, and / or Scala; ability to write clean, efficient, maintainable code for large-scale data processing.
- Proficiency in data modeling, data-warehouse / lakehouse architecture, and schema design.
- Solid understanding of ETL / ELT patterns, orchestration, scheduling, and the data pipeline lifecycle.
- Experience with MS SQL (or equivalent relational databases / data stores) for data storage or warehousing.
- Familiarity with version control (Git) and CI / CD pipelines for data engineering workflows.
- A problem-solving mindset, with the ability to work on large datasets and meet data reliability, performance, and quality requirements.
Preferred / Nice-to-Have
- Exposure to event-driven or real-time data ingestion and processing (e.g. using messaging or streaming services).
- Familiarity with serverless or micro-service style Azure components (e.g. Azure Functions, Logic Apps, Event Hub / Service Bus).
- Basic knowledge of reporting / BI tools (e.g. Power BI) to aid end-to-end data-to-insight workflows.
- Experience with infrastructure-as-code / cloud-infrastructure provisioning tools (e.g. Terraform / Bicep) and metadata / governance tools or processes.
- Experience or interest in building data governance, cataloging, lineage, and compliance standards.
Why You Might Be a Great Fit
- You enjoy working with large-scale data, solving complex data-architecture challenges, and building data platforms that scale.
- You value writing clean, maintainable code and building pipelines that are robust, efficient, and reliable.
- You appreciate collaboration, working with analysts, data scientists, and product teams to turn raw data into actionable insights.
- You stay up to date on cloud-native data technologies, big data patterns, and best practices, and you take the initiative to learn and implement new technologies as appropriate.
Bonus / Preferred Qualifications
- Certifications such as Azure Data Engineer Associate (or equivalent) or other Azure / cloud-data certifications
- Prior experience working in agile / scrum teams or global delivery environments
- Exposure to data governance, compliance, and security practices in enterprise data environments
Ready to make an impact with data? Apply now!