Role : Senior Data Engineer (Python, PySpark, SQL, AWS)
Experience : 6–10 years
Location : Hyderabad
Work Mode : Hybrid (3 days / week in-office)
Join Time : Immediate
Domain : Healthcare / Life Sciences
Must-Have Technical Skills :
- Strong programming skills in Python and / or Scala
- Hands-on experience with Apache Spark for big data processing on AWS cloud
- Proficiency with AWS services such as S3, Glue, Redshift, EMR, Lambda
- Strong SQL skills for data transformation and analytics
- Experience in Infrastructure as Code (IaC) using Terraform
- Expertise in setting up and managing CI / CD pipelines with Jenkins
Responsibilities :
- Design, build, and optimize scalable data pipelines on AWS
- Implement data ingestion, transformation, and integration solutions using Spark, AWS Glue, and SQL
- Manage and optimize cloud storage and compute environments
- Ensure robust, automated deployments with Terraform and Jenkins
- Collaborate with cross-functional teams to deliver high-quality data products
Nice to Have :
- Prior experience in the Healthcare / Life Sciences domain
- Familiarity with modern data lake and data mesh architectures
Why Join Us?
- Work on cutting-edge data engineering projects in healthcare analytics
- Hybrid work model for flexibility and collaboration
- Opportunity to grow in a fast-paced, innovation-driven environment
Apply Now!
Send your updated resume to careers@sidinformation.com