Type : Contract (Initial term, extendable)
Experience : 4-5 years
Start Date : Immediate

Responsibilities :
- Develop and maintain scalable Python-based applications and data pipelines.
- Use Databricks for data engineering, big data processing, and transformations.
- Build and optimize ETL pipelines using PySpark.
- Optionally integrate workflows using UiPath or Azure Data Factory (ADF).
- Collaborate with client-side business and technical teams to understand and implement data-driven solutions.
- Optimize, troubleshoot, and scale existing code for performance improvements.
- Contribute to system architecture and participate in regular stand-ups.

Skills :
- 4-5 years of solid Python development experience.
- Strong hands-on experience with Databricks and PySpark.
- Experience building large-scale, distributed data pipelines.
- Solid understanding of data engineering principles and performance tuning.
- Familiarity with both SQL and NoSQL databases.
- Experience working in cloud-based environments, preferably Azure or AWS.
- Proficient in Git, debugging, and version control workflows.
- Strong communication skills and ability to work independently in a client-facing role.

Nice to Have :
- Experience with workflow automation (UiPath, ADF).
- Exposure to AI / ML project environments.
- Basic knowledge of FastAPI or microservices.

(ref : hirist.tech)