AWS Data Engineer - Spark / PySpark / Python

Tekfortune IT India Pvt Ltd, Pune
Job description

Title: AWS Data Engineer (contract role)

Experience: 6 - 18 years

Work Location: Pune / Bangalore

Notice Period: Immediate to 30 days maximum

Job Summary:

  • Design, develop, test, deploy, and maintain large-scale data pipelines on AWS using native services such as AWS Glue and AWS Data Migration Service (DMS); a minimal pipeline sketch follows this list.
  • Requires 5 to 10 years of experience in data engineering on the AWS platform.
  • Proficiency in Spark / PySpark / Python / SQL is essential.
  • Familiarity with AWS data stores including S3, RDS, DynamoDB, and AWS Data Lake, with hands-on use of these technologies in previous projects.
  • Knowledge of AWS Services like Redshift, Kinesis Streaming, Glue, Iceberg, Lambda, Athena, S3, EC2, SQS, and SNS.
  • Understanding of monitoring and observability tools such as CloudWatch and Tivoli Netcool.
  • Basic understanding of AWS networking components: VPCs, security groups (SGs), subnets, and load balancers.
  • Collaboration with cross-functional teams to gather technical requirements and deliver high-quality ETL solutions.
  • Strong AWS development experience for data ETL, pipeline, integration, and automation work.
  • Deep understanding of Data & Analytics Solution development lifecycle.
  • Proficient in CI / CD with Jenkins; able to write test scripts and automate processes.
  • Experience with IaC (Terraform or CloudFormation) and basic knowledge of containers.
  • Familiarity with Bitbucket / Git and experience working in an agile / scrum team.
  • Experience in the Private Bank / Wealth Management domain.
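For illustration only (this is not part of the original posting), below is a minimal sketch of the kind of AWS Glue PySpark pipeline step described in the list above. The bucket names, paths, and column names are hypothetical placeholders; an actual job would follow the team's own conventions for configuration, testing, and deployment.

# Minimal AWS Glue (PySpark) job sketch. Bucket names, S3 paths, and column
# names below are hypothetical placeholders, not taken from the posting.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Glue passes job parameters on the command line; JOB_NAME is standard.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV data from S3 (placeholder path).
orders = (
    spark.read.option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Basic transformation: cast, filter, and aggregate.
daily_totals = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write curated output back to S3 as Parquet, partitioned by date.
(
    daily_totals.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/daily_totals/")
)

job.commit()

In practice a job like this would be provisioned through IaC (Terraform or CloudFormation) and promoted via a CI / CD pipeline such as Jenkins, in line with the other requirements above.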

(ref: hirist.tech)
