Data Engineer (L4) – Python & AWS
Total Experience: 7–10 years
Location: Hyderabad
Job Overview
We are looking for a Senior Data Engineer with strong expertise in Python and AWS. You will design and build scalable data pipelines, manage cloud-based data platforms, ensure data quality, and support analytics and AI use cases across the organization.
Key Responsibilities
Design and build scalable data pipelines using AWS Glue, Lambda, EMR, Step Functions, and Redshift
Develop Python-based ETL/ELT frameworks and reusable modules
Build and optimize data lakes (S3) and data warehouses (Redshift, Athena)
Integrate data from multiple sources (RDBMS, APIs, Kinesis/Kafka, SaaS)
Lead data modeling, partitioning, and performance tuning
Implement data quality, observability, and lineage practices
Ensure data security and governance (IAM, encryption, access control)
Support Analytics, Data Science, and ML teams
Set up CI/CD pipelines for data workflows
Provide technical leadership, code reviews, and mentorship
Monitor and troubleshoot data systems; drive performance and cost optimization
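To give candidates a flavor of the "reusable Python-based ETL/ELT modules" responsibility above, here is a minimal, dependency-free sketch. All names (`extract`, `transform`, `load`, `run_pipeline`) are illustrative assumptions, not part of this posting; in production the extract and load steps would target S3, RDBMS, Kinesis, or Redshift rather than in-memory lists.

```python
from typing import Iterable

# Illustrative sketch only: a tiny reusable ETL module of the kind this
# role would build. Real implementations would read from S3/RDBMS/Kinesis
# and write to Redshift or S3 via boto3 or Glue.

def extract(rows: Iterable[dict]) -> list[dict]:
    """Extract step: here, simply materialize the source rows."""
    return list(rows)

def transform(rows: list[dict]) -> list[dict]:
    """Transform step: apply a simple data-quality rule (non-negative amounts)."""
    return [r for r in rows if r.get("amount", 0) >= 0]

def load(rows: list[dict]) -> int:
    """Load step: here, just report how many records would be written."""
    return len(rows)

def run_pipeline(source: Iterable[dict]) -> int:
    """Compose the steps into one reusable pipeline entry point."""
    return load(transform(extract(source)))

if __name__ == "__main__":
    sample = [{"amount": 10}, {"amount": -5}, {"amount": 3}]
    print(run_pipeline(sample))  # two records pass the quality check
```

The point of the composition style is that each step stays independently testable and swappable, which is what makes such modules reusable across pipelines.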
Required Skills
7–10 years of experience in Data Engineering
Expert in Python (pandas, PySpark, boto3, SQLAlchemy)
Strong in AWS data services: Glue, Lambda, EMR, Step Functions, DynamoDB, Redshift, S3, Athena, Kinesis
Strong SQL and experience with data modeling
Hands-on with CI/CD, Git, and infrastructure automation (CloudFormation/Terraform)
Understanding of containerization (Docker/Kubernetes)
Excellent problem-solving and communication skills
⭐ Nice-to-Have
Experience with Spark/PySpark on EMR or Glue
Knowledge of Airflow, dbt, or Dagster
Experience with real-time streaming (Kafka/Kinesis)
Familiarity with Lake Formation, DataBrew, Glue Studio
Experience with ML platforms (SageMaker) or BI tools (QuickSight)
AWS certifications (Data Analytics or Solutions Architect)