Job Title: Data Engineer
Location: Bengaluru (Onsite) / Pune (Onsite)
We are seeking a highly skilled Data Engineer with strong experience in PySpark, AWS Data Engineering, and Snowflake development.
Key Responsibilities
- Hands-on development using PySpark, with a strong understanding of RDDs, DataFrames, and Spark SQL
- Build, test, and maintain applications on AWS Cloud
- Work extensively with the AWS Analytics stack – Glue, S3, Lambda, Lake Formation, Athena
- Design and implement scalable data transformation & storage using Snowflake
- Ingest data into Snowflake in formats such as Parquet, Iceberg, JSON, CSV
- Develop ELT pipelines using dbt with Snowflake
- Write complex SQL and PL/SQL programs
- Build reusable components using Snowflake + AWS services
- Experience delivering at least two large-scale project implementations
- Exposure to governance & lineage tools – Alation, Immuta (preferred)
- Hands-on with orchestration tools like Airflow or Snowflake Tasks
- Knowledge of Ab Initio is a plus
- Understand data warehouse and data mart concepts
- Engage stakeholders and convert requirements into technical deliverables
- Strong analytical and interpersonal skills
Other Valued Skills
- Strong stakeholder management and requirement gathering
- Ability to understand cloud infrastructure and propose solutions
- Knowledge of data movement strategies across cloud data platforms
- Experience with NoSQL is an added advantage