Description:
Key Responsibilities:
- Lead the design, development, and optimization of large-scale data solutions using Azure Databricks, Azure Data Factory (ADF), and Azure Synapse Analytics.
- Architect and implement efficient ETL/ELT pipelines for ingesting, transforming, and processing structured and unstructured data.
- Develop scalable data processing frameworks using PySpark and Python within the Azure ecosystem.
- Collaborate with data scientists, analysts, and business stakeholders to deliver reliable, high-performance data solutions.
- Ensure best practices in data governance, performance tuning, and cost optimization across Azure services.
- Mentor and guide junior engineers and drive technical excellence within the data team.
Technical Skills & Requirements:
Primary Skills: Azure, Databricks, ADF, PySpark, Python
Experience:
1. 10+ years in Data Warehousing and ETL development
2. 5+ years of hands-on experience with Azure Databricks
3. Strong working knowledge of Azure Data Factory and Azure Synapse
4. Proven expertise in data architecture, pipeline orchestration, and performance optimization
5. Solid understanding of CI/CD practices, version control (Git), and cloud-native deployment pipelines
(ref: hirist.tech)