Location: Bangalore
Immediate joiner
Budget: 20 LPA
Experience: 5+ years
We are seeking an experienced Databricks Developer with strong SQL skills and hands-on experience using Microsoft Copilot (or equivalent AI-assisted development tools). The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, supporting analytics workloads, and improving overall data engineering efficiency through automation and AI-driven solutions.
Key Responsibilities
Design and implement scalable data pipelines and ETL workflows using Azure Databricks or similar platforms.
Develop and optimize complex SQL queries and procedures, including performance tuning for large datasets.
Leverage Copilot or similar AI-based coding assistants to accelerate code generation, debugging, and documentation.
Integrate Databricks with Azure Data Lake, Synapse, and other cloud data services.
Collaborate with data scientists, analysts, and business stakeholders to deliver reliable data solutions.
Ensure data quality, governance, and security best practices.
Participate in code reviews and continuous improvement initiatives.
Required Skills and Experience
5–8 years of hands-on experience in Data Engineering or related roles.
Strong expertise in Databricks, PySpark, and SQL.
Experience with Azure or AWS cloud environments.
Familiarity with AI-assisted coding tools such as Copilot, CodeWhisperer, or ChatGPT.
Solid understanding of data modeling, performance tuning, and distributed computing concepts.
Experience integrating with data orchestration tools (e.g., Airflow, Azure Data Factory).
Excellent problem-solving and analytical skills.
Preferred Qualifications
Knowledge of Delta Lake, Unity Catalog, and Lakehouse architecture.
Exposure to CI/CD and DevOps practices in a data environment.
Familiarity with Python, Scala, or SQL-based transformations.
Strong communication and collaboration abilities in an agile team environment.