Job Description:
Minimum 8 to 11 years of hands-on experience with Databricks platform and Azure Data Factory.
Expertise in building and optimizing ELT/ETL pipelines using Databricks and Azure Data Factory.
Strong knowledge of Azure Data Factory for orchestration of data workflows and integration.
Experience with Databricks Delta Lake implementation, cluster configuration, and performance tuning.
Proficient in handling structured and semi-structured data using SQL, PySpark, or similar technologies.
Skilled in designing scalable data lake architectures leveraging Azure Data Factory and Databricks.
Ability to migrate on-premises data sources to cloud platforms and automate pipelines using CI/CD practices.
Familiarity with data governance, data lineage, and security best practices within Azure environments.
Strong analytical skills to translate business requirements into technical specifications for data solutions.
Good communication skills to collaborate with stakeholders and technical teams.
Roles and Responsibilities:
Lead the design, development, and deployment of data pipelines using Databricks and Azure Data Factory.
Configure and optimize Databricks clusters and workspaces to ensure high performance and cost efficiency.
Develop and maintain ELT workflows to process large volumes of data from diverse sources.
Monitor and troubleshoot data pipeline issues to ensure data quality and timely delivery.
Implement security controls and access management in Databricks and Azure Data Factory.
Collaborate with business analysts and architects to understand requirements and provide technical solutions.
Mentor junior team members on Databricks and Azure Data Factory best practices and tools.
Participate in code reviews, design discussions, and documentation to maintain high standards.
Estimate effort and contribute to project planning and delivery in agile environments.
Ensure compliance with organizational policies and industry standards for data management.
Azure • Bellary, IN