We’re looking for a Data Engineer to design, build, and scale modern data platforms on AWS. You’ll work with Python, Spark, DBT, and AWS-native services in an Agile environment to deliver scalable, secure, and high-performance data solutions.
What you’ll do
- Develop and optimize ETL / ELT pipelines with Python, DBT, and AWS services (DataOps.live) — see the illustrative sketch after this list.
- Build and manage S3-based data lakes using modern data formats (Parquet, ORC, Iceberg).
- Deliver end-to-end data solutions with Glue, EMR, Lambda, Redshift, and Athena.
- Implement strong metadata, governance, and security using Glue Data Catalog, Lake Formation, IAM, and KMS.
- Orchestrate workflows with Airflow, Step Functions, or AWS-native tools.
- Ensure reliability and automation with CloudWatch, CloudTrail, CodePipeline, and Terraform.
- Collaborate with analysts and data scientists to deliver business insights in an Agile setting.
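To give a flavour of the day-to-day work, here is a minimal, illustrative PySpark ETL step of the kind described above: reading raw files from S3, applying a light transformation, and writing partitioned Parquet into the curated zone of a data lake. Bucket names, paths, and column names are hypothetical placeholders, and the sketch assumes it runs on Glue or EMR with S3 access already configured.

```python
# Minimal sketch of an ETL step: raw CSV in S3 -> cleaned, partitioned Parquet.
# All bucket/column names below are illustrative assumptions, not a real schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-load").getOrCreate()

# Raw zone: CSV files landed by an upstream ingestion job (assumed layout).
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Basic cleansing: cast the timestamp, derive a partition column, drop bad records.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Curated zone: columnar Parquet partitioned by date, ready for Athena or Redshift Spectrum.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```

In practice the curated table would typically also be registered in the Glue Data Catalog, and the Parquet write could be swapped for an Iceberg table where ACID semantics or schema evolution are required.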
Required Skills & Experience
- 4–7 years of experience in data engineering, with 3+ years on AWS platforms
- Strong in Python (incl. AWS SDKs), DBT, SQL, and Spark
- Proven expertise with the AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Hands-on experience with workflow orchestration (Airflow / Step Functions)
- Familiarity with data lake formats (Parquet, ORC, Iceberg) and DevOps practices (Terraform, CI/CD)
- Solid understanding of data governance & security best practices
Bonus
- Exposure to Data Mesh principles and platforms like Data.World
- Familiarity with Hadoop / HDFS in hybrid or legacy environments