Data Engineer
India, Remote
Full Time / Contract
# of Positions - 3
Job Summary
The Data Engineer will be responsible for designing, developing, and optimizing scalable data pipelines and cloud-based data solutions. This role requires strong Python programming skills, expertise in ETL / ELT processes, and deep hands-on experience with AWS cloud services such as S3, Glue, Lambda, Redshift, Kinesis, and DynamoDB. The ideal candidate will excel at building serverless data architectures, designing efficient data models, and ensuring robust pipeline performance through monitoring and optimization. The position is remote within India and open to both full-time and contract engagements.
Key Skills
Data Engineering & Pipelines
ETL / ELT pipeline development using Python
Data extraction, transformation, and loading into data lakes / warehouses
Workflow orchestration and automation
Real-time and batch data processing
AWS Cloud Expertise
S3, Glue, Lambda, Redshift, Aurora
Kinesis (real-time streaming)
DynamoDB (NoSQL database)
AWS serverless architecture design
Integration of AWS managed services for data workflows
Serverless & Automation
Building event-driven pipelines using AWS Lambda
Serverless compute optimization
Automated triggering and orchestration of data processes
Data Modeling
Designing schemas for relational databases (OLTP / OLAP)
NoSQL data modeling and storage optimization
Understanding of normalization, partitioning, indexing
Monitoring & Optimization
AWS monitoring and auditing tools (CloudWatch, CloudTrail)
Pipeline performance tuning
Cost optimization across cloud resources
Collaboration & Communication
Cross-functional collaboration with analysts, data scientists, and engineering teams
Strong problem-solving, documentation, and communication skills
Minimum Qualifications
Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (preferred).
Strong hands-on experience with Python for data engineering tasks.
Deep understanding of AWS cloud services used for data storage, streaming, ETL, and serverless compute.
Experience designing and implementing ETL / ELT pipelines.
Solid understanding of relational and NoSQL data modeling concepts.
Professional Experience Requirements
Proven experience building scalable data pipelines using Python.
Experience extracting and transforming data from diverse sources into cloud-based environments.
Hands-on experience with AWS services such as S3, Glue ETL, Lambda, Redshift / Aurora, Kinesis, and DynamoDB.
Demonstrated ability to design and implement serverless architectures.
Experience creating event-driven workflows using Lambda and other AWS triggers.
Strong experience in monitoring, debugging, and optimizing cloud-based data workflows.
Experience collaborating across data science, analytics, and engineering teams to deliver complete data solutions.
Data Engineer • Amritsar, Punjab, India