Note : If shortlisted, you will be invited for initial rounds on 8th November '25 (Saturday) in :
Key Responsibilities :
- Migrate and optimize existing data pipelines from Snowflake to Databricks.
- Develop and maintain efficient ETL workflows using PySpark and SQL.
- Design scalable and performance-optimized data processing solutions in Databricks.
- Troubleshoot and resolve data pipeline issues, ensuring accuracy and reliability.
- Work independently to analyze requirements, propose solutions, and implement them effectively.
- Collaborate with stakeholders to understand business requirements and ensure a seamless transition.
- Tune Spark performance for large-scale data processing.
- Maintain proper documentation for migrated pipelines and workflows.
Required Qualifications :
- Proficiency in Python, SQL, and Apache Spark (PySpark preferred).
- Experience with Databricks and Snowflake, including pipeline development and optimization.
- Strong understanding of ETL processes, data modeling, and distributed computing.
- Ability to work independently and manage multiple tasks in a fast-paced environment.
- Hands-on experience with orchestration tools (e.g., Airflow, Databricks Workflows).
- Familiarity with cloud platforms (AWS) is a plus.

(ref : hirist.tech)