We’re looking for a Data Engineer to design, build, and scale modern data platforms on AWS. You’ll work with Python, Spark, DBT, and AWS-native services in an Agile environment to deliver scalable, secure, and high-performance data solutions.
What you’ll do
Develop and optimize ETL/ELT pipelines with Python, DBT, and AWS services (DataOps.live).
Build and manage S3-based data lakes using modern data formats (Parquet, ORC, Iceberg).
Deliver end-to-end data solutions with Glue, EMR, Lambda, Redshift, and Athena.
Implement robust metadata management, governance, and security using Glue Data Catalog, Lake Formation, IAM, and KMS.
Orchestrate workflows with Airflow, Step Functions, or other AWS-native tools.
Ensure reliability and automation with CloudWatch, CloudTrail, CodePipeline, and Terraform.
Collaborate with analysts and data scientists to deliver business insights in an Agile setting.
Required Skills & Experience
4–7 years of experience in data engineering, with 3+ years on AWS platforms
Strong proficiency in Python (including AWS SDKs), DBT, SQL, and Spark
Proven expertise with the AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
Hands-on experience with workflow orchestration (Airflow or Step Functions)
Familiarity with data lake formats (Parquet, ORC, Iceberg) and DevOps practices (Terraform, CI/CD)
Solid understanding of data governance and security best practices
Bonus
Exposure to Data Mesh principles and platforms like Data.World
Familiarity with Hadoop/HDFS in hybrid or legacy environments
Location: Mohali, Punjab, India