Description
Responsibilities:
- Collaborate with stakeholders to understand business requirements and data needs, and translate them into scalable and efficient data engineering solutions using AWS Data Services.
- Design, develop, and maintain data pipelines using AWS serverless and managed services such as Glue, S3, Lambda, DynamoDB, Athena, and Redshift.
- Implement data modelling techniques to optimize data storage and retrieval processes.
- Develop and deploy data processing and transformation frameworks to support both real-time and batch processing requirements.
- Ensure data pipelines are scalable, reliable, and performant enough to handle large data volumes.
- Implement data documentation and observability tools and practices to monitor and troubleshoot data pipeline performance issues.
- Adhere to privacy and security development best practices to ensure data integrity and compliance with regulatory requirements.
- Collaborate with the DevOps team to automate deployment processes using AWS CodePipeline.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data modelling and in building real-time and batch processing data pipelines for large data volumes.
- Strong proficiency in the Python programming language.
- Extensive experience with AWS serverless or managed services such as S3, Glue, EMR, Lambda, DynamoDB, Athena, and Redshift.
- Solid understanding of privacy and security development best practices.
- Excellent problem-solving skills and the ability to troubleshoot complex data pipeline issues.
- Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Experience with Agile development methodologies is a plus.
Location:
IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services