About the Role
The Data Engineer (PySpark) will play a crucial role in developing and maintaining data pipelines, ensuring data quality, and supporting data-driven decision-making across the organization.
Required Skills
- PySpark
- Airflow
- dbt
- Snowflake
Responsibilities
- Understand existing ETL or legacy code to identify source-to-target mappings
- Build ingestion code based on the source type
- Perform data quality checks
- Cleanse and transform data based on requirements and enterprise standards
- Deploy using Git and run pipelines on Airflow
Qualifications
- 5 to 8 years of experience in data engineering
Location
Bangalore
Preferred Skills
- AWS EMR
- Informatica
- Hive