Job Title: Senior Data Engineer
Experience: 3+ years
Location: Gurgaon / Pune / Bangalore
Skills: PySpark, SQL, Databricks, AWS
Role Summary:
We are looking for 3–4 experienced Databricks Developers to support a fast-paced, high-impact data engineering initiative. The ideal candidates should have hands-on expertise in building scalable data pipelines using Databricks and AWS, along with strong SQL and Python skills.
Required Skill Set:
- 3–4 years of experience in Data Engineering
- Strong hands-on experience with Databricks (Notebooks, Jobs, Workflows)
- Proficiency in PySpark and SQL
- Familiarity with AWS services (S3, Glue, Lambda, etc.)
- Experience with CI/CD tools and version control (e.g., Git)
- Good understanding of Delta Lake and performance tuning
Key Responsibilities:
- Design and develop robust ETL pipelines using Databricks (PySpark or SQL)
- Work with large-scale datasets in cloud environments (preferably AWS)
- Optimize data pipelines for performance and cost efficiency
- Integrate data from multiple structured and unstructured sources
- Collaborate with data architects, analysts, and business stakeholders to understand requirements
- Implement data validation and quality checks
- Maintain proper documentation and version control for data workflows