Job Description – Data Engineer (AWS + PySpark)
Location: Pune / Hyderabad / Indore (Onsite)
Experience: 4–7 years
Joining: Immediate
Role Overview
We are seeking a Data Engineer (AWS + PySpark) to join our team. The role involves designing and building scalable data pipelines, working with large datasets, and enabling advanced analytics on cloud platforms. The ideal candidate has strong expertise in AWS, PySpark, and modern data engineering practices.
Key Responsibilities
Design, develop, and maintain data pipelines and ETL workflows using AWS services and PySpark.
Build and optimize data lake and data warehouse solutions on AWS.
Work with structured and unstructured data, ensuring data quality, security, and governance.
Collaborate with business and analytics teams to deliver reliable and scalable data solutions.
Troubleshoot and optimize performance of data workflows.
Support real-time and batch data processing solutions.
Document processes, data flows, and technical solutions.
Requirements
4–7 years of professional experience as a Data Engineer.
Strong hands-on experience in AWS services (S3, Glue, EMR, Redshift, Lambda, Athena, etc.).
Expertise in PySpark for big data processing and transformation.
Strong proficiency in SQL for data analysis, profiling, and query optimization.
Experience with ETL/ELT tools and data pipeline orchestration.
Knowledge of data modelling, warehousing, and data governance best practices.
Exposure to CI/CD pipelines, Git, and Agile methodologies.
Excellent problem-solving and communication skills.
BFSI / Capital Markets domain experience is a plus.
Thanks & Regards,
Somesh Singh
TCS AI.Cloud Recruiter
Talent Acquisition Group
Website: http://www.tcs.com
E-Mail: somesh.singh7@tcs.com
LinkedIn: linkedin.com/in/mrsomeshsingh
To register for jobs, visit: https://ibegin.tcs.com/iBegin/register
Address: Tower-2, Okaya Centre, Plot No. B-5, Sector 62, Noida