Job Title : Azure + Databricks
Experience : 4 to 7 years
Location : Mumbai (only)
Job Summary :
As a Senior Associate, you will take ownership of designing and delivering complex data engineering solutions across cloud and on-premises environments. You will lead the technical delivery of ETL/ELT frameworks, data lakes, and data warehouses, leveraging Databricks, Azure, and modern data engineering best practices. You will also mentor junior team members and contribute to solution architecture.
Roles & Responsibilities :
- Lead design and development of end-to-end data pipelines using PySpark, SQL, and Azure Databricks.
- Build and optimize data lakes and data warehouse solutions.
- Implement data ingestion, transformation, and orchestration using tools like Azure Data Factory or Apache Airflow.
- Translate business requirements into technical specifications and ensure alignment with data strategy.
- Conduct code reviews, performance tuning, and pipeline optimization.
- Collaborate with cross-functional teams (data science, BI, and architecture) to enable advanced analytics.
- Mentor Associates and support delivery excellence through documentation and process improvements.
Skills Required :
- 4–7 years of experience in data engineering, with strong proficiency in PySpark, SQL, and Python.
- Expertise in Azure (Data Factory, Databricks, Synapse, ADLS) or other cloud platforms.
- Strong understanding of Data Warehouse, Data Lake, and ETL architecture.
- Proficiency in query optimization, data modelling, and performance tuning.
- Experience working with version control (Git) and CI/CD pipelines.
- Exposure to data governance, security, and metadata management.
- Knowledge of Snowflake, AWS Redshift, or BigQuery is a plus.
- Strong communication and client-interaction skills.
- Cloud certifications (Azure Data Engineer, Databricks) preferred.