Total Experience: 5-10 years
Location: Pune, Bangalore, Noida, Gurgaon, Hyderabad
Notice Period: 0-15 days
The ideal candidate will have strong expertise in Snowflake, the Hadoop ecosystem, PySpark, and SQL, and will play a key role in enabling data-driven decision-making across the organization.
Key Responsibilities:
- Design, develop, and optimize robust data pipelines using PySpark and SQL.
- Implement and manage data warehousing solutions using Snowflake.
- Work with large-scale data processing frameworks within the Hadoop ecosystem.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and governance across all data platforms.
- Monitor and troubleshoot data pipeline performance and reliability.
- Automate data workflows and implement best practices for data engineering.
Required Qualifications:
- 5+ years of experience in data engineering or related roles.
- Core skills: Data Engineering, Snowflake, data pipelines, Airflow.
- Experience designing, developing, and maintaining robust data pipelines and ETL workflows.
- Hands-on experience with Azure Data Services (Data Factory, Blob Storage, Synapse, etc.).