Job Title : AWS Data Engineer
Location : PAN India (Hybrid)
Experience : 5–12 Years (STRICTLY)
Employment Type : Permanent
Notice Period : Immediate Joiners / ≤ 30 Days Only
CTC :
- [5–8 yrs] – Up to 21 LPA
- [8–12 yrs] – Up to 26 LPA
About the Company
Our client is a global leader in digital transformation and IT services, operating across 50+ countries. They specialize in cloud modernization, data engineering, consulting, managed services, and enterprise-grade digital solutions. Their focus is on enabling businesses to scale efficiently and move confidently into a data-driven future.
Job Description
- Design, develop, and maintain scalable data pipelines using AWS Databricks and PySpark
- Implement ETL processes to ingest, transform, and load large datasets
- Collaborate with data architects and business stakeholders to understand requirements and deliver robust solutions
- Optimize data workflows for performance, scalability, and reliability in AWS cloud environments
- Ensure adherence to data quality, governance, and security best practices
- Monitor, debug, and troubleshoot data pipeline issues to ensure operational stability
- Stay updated with the latest trends in data engineering, cloud platforms, and automation
- Provide technical guidance and mentorship to junior engineers
- Participate in Agile ceremonies and contribute to continuous improvement initiatives
Mandatory Skills
Strong hands-on experience with AWS Databricks and PySpark
Practical experience building data pipelines and ETL workflows (a minimal illustrative sketch follows this list)
Strong proficiency in SQL (must-have)
Experience working with large datasets and distributed processing
Understanding of cloud architectures and DevOps best practices
Familiarity with Git, CI/CD, and Agile delivery models
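For illustration only, the sketch below shows the kind of PySpark ETL pipeline this role involves: ingest raw data, apply transformations, and write a curated dataset. The S3 paths, the "orders" dataset, and all column names are hypothetical placeholders, not part of the job requirements.

# Minimal PySpark ETL sketch (hypothetical paths, table, and columns)
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: ingest raw CSV data from an example S3 location
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: cast types, drop incomplete rows, deduplicate, derive a partition column
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write curated Parquet, partitioned by date, for downstream consumers
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-bucket/curated/orders/"))

spark.stop()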
Important Note (Please Read Before Applying)
Do NOT apply if :
- You have less than 5 years or more than 12 years of total experience
- You do NOT have real-time experience in AWS Databricks + PySpark
- You do not have strong SQL skills
- Your notice period is more than 30 days
- You are looking for remote-only work (this role involves a hybrid work mode)
- You are from an unrelated background (testing/support only, non-data-engineering roles, etc.)
Apply ONLY if you meet ALL of the above criteria. Irrelevant applications will not be processed.
Seniority Level : Mid-Senior level
Industry : IT Services and IT Consulting
Employment Type : Full-time
Job Functions : Information Technology
Skills : SQL, Amazon Web Services (AWS), Extract, Transform, Load (ETL), Engineering, Git, Data Engineering, Cloud Computing, Software Development, HTML, Test Automation
Screening questions
Required qualifications
- How many total years of hands-on experience do you have in Data Engineering? (Must be between 5–12 years) – Ideal Answer : 5–12 years
- Do you have real-time project experience with AWS Databricks and PySpark? (Yes / No) – Ideal Answer : Yes
- Are you confident with writing complex SQL queries and working with large datasets? (Yes / No) – Ideal Answer : Yes
- Have you independently developed or contributed to data pipelines using Databricks + AWS? (Yes / No) – Ideal Answer : Yes
- Are you comfortable working in a Hybrid model across PAN India locations? (Yes / No) – Ideal Answer : Yes
- What is your current notice period? (Only Immediate / ≤ 30 days acceptable) – Ideal Answer : Immediate / ≤ 30 days