Role Overview :
We are looking for an experienced Data Engineer (Azure) with strong expertise in building and managing scalable data pipelines and enterprise data warehouses (EDW).
The ideal candidate should have experience working on Azure-based data solutions, GCP, and modern Lakehouse architectures, with hands-on expertise in ETL / ELT, data modeling (Star schema, Type 2 SCDs), and data ingestion from diverse structured and semi-structured sources.

Responsibilities :
- Design, develop, and maintain data pipelines to ingest, transform, and load data from multiple sources (structured, semi-structured, and via APIs).
- Work on Azure EDW solutions, implementing Star Schema and handling Type 2 Slowly Changing Dimensions (SCDs).
- Develop scalable and optimized ETL / ELT processes using Azure Databricks, Azure Data Factory, and Azure Data Lake Gen2.
- Perform data analysis and validation using advanced SQL.
- Collaborate with business and technical teams to gather requirements and create BRDs, FRDs, and user stories.
- Ensure best practices in data quality, security, and governance.
- Work closely with cross-functional teams to support business intelligence, analytics, and reporting needs.

Skills :
Cloud & Data Engineering : Azure Databricks, Azure Data Factory, Azure Data Lake Gen2, GCP
Programming : SQL (Expert level), Python (Intermediate)
Data Warehousing : Star schema, Type 2 SCDs, Data Modeling
APIs & Lakehouse Architectures : Ingesting structured & semi-structured data
Documentation : BRDs, FRDs, user stories

Good to Have :
- Knowledge of other Azure Cloud Services (Synapse, Functions, Event Hub, etc.)
- Familiarity with performance optimization of data pipelines.
- Experience working in agile environments.

Why Join Us?
- Work on cutting-edge Azure & GCP-based data engineering projects.
- Opportunity to design enterprise-level data solutions in a collaborative environment.
- Competitive compensation and growth opportunities.
(ref : hirist.tech)