Immediate Joiners Only
Skills – AWS, Data Engineering, Airflow
Key Responsibilities
Design, develop, and maintain data pipelines using AWS services such as Glue, Lambda, Step Functions, EMR, Kinesis, and S3.
Build and optimize data warehouses and data lakes on AWS (Redshift, Lake Formation).
Develop ETL/ELT jobs using PySpark, Spark, Python, and AWS-native tools.
Implement best practices for data modeling, partitioning, performance tuning, and data lifecycle management.
Monitor and troubleshoot production pipelines, ensuring high availability and reliability.
Collaborate with Data Architects, Analysts, and Business SMEs to translate requirements into technical solutions.
Ensure compliance with security, governance, and architecture standards.
Work with CI/CD tools for automation, deployment, and version control (CodePipeline, Git, CloudFormation/Terraform).
Required Skills & Experience
3–8 years of experience as a Data Engineer with strong AWS expertise.
Hands-on experience with:
AWS Glue (Jobs, Crawlers, Catalog)
Amazon Redshift / Redshift Spectrum
Amazon EMR / PySpark / Spark
AWS Lambda, S3, Athena, Kinesis
Strong proficiency in Python, PySpark, SQL, and data transformation techniques.
Experience with data lake, data warehouse, and streaming data architectures.
Solid understanding of cloud security, IAM, encryption, and networking basics.
Exposure to DevOps, CI/CD, and infrastructure-as-code tools.
AWS Data Engineer • Delhi, India