This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Gurgaon
Job Type: Full-time
We are seeking an experienced Senior Data Engineer with strong expertise in building and managing large-scale data pipelines within the AWS ecosystem. The ideal candidate will have a solid background in SQL, cloud-native data platforms, and orchestration frameworks, with a deep understanding of scalable data lake and warehouse architectures.
Requirements
Key Responsibilities
- Design, develop, and maintain robust, scalable data pipelines and ETL/ELT workflows.
- Leverage the AWS Data Stack (S3, Glue, Kinesis, Redshift, Data Lake) for data ingestion, transformation, and storage.
- Write, optimize, and manage complex SQL queries for analytics and reporting needs.
- Implement and manage data orchestration frameworks (e.g., Airflow, Step Functions) to automate and monitor workflows.
- Utilize Databricks for advanced data processing, transformation, and analytics.
- Ensure data quality, integrity, reliability, and security across the pipeline lifecycle.
- Collaborate with data scientists, analysts, and business stakeholders to deliver scalable data-driven solutions.
Required Skills & Qualifications
- 5–7 years of proven experience as a Data Engineer or in a similar role.
- Strong expertise in SQL development and performance optimization.
- Hands-on experience with the AWS Data Stack (S3, Glue, Kinesis, Redshift, Data Lake).
- Proficiency in designing and managing orchestration workflows for data pipelines.
- Experience with Databricks for data engineering and transformation.
- Strong understanding of data modeling, warehousing concepts, and ETL/ELT best practices.
- Ability to thrive in a hybrid working model with cross-functional collaboration.
Nice to Have
- Experience with Python or Scala for data engineering tasks.
- Familiarity with CI/CD pipelines and DevOps practices for data workflows.
- Exposure to real-time streaming pipelines and advanced analytics use cases.
Core Skills
PySpark
SQL
Databricks
AWS