Description:
- Manage and optimize the Databricks platform (Workspaces, Jobs, Unity Catalog, Delta Lake)
- Implement the full ML lifecycle: model training, versioning, deployment, monitoring, and retraining
- Track, manage, and govern ML experiments and models via MLflow
- Develop scalable data/ML pipelines with Python (pandas, scikit-learn, PyTorch/TensorFlow), PySpark, and SQL
- Deploy and manage solutions on AWS (specifically SageMaker); knowledge of Docker/Kubernetes required
- Design and drive deployment strategies (A/B testing, blue-green, and canary deployments)
- Create CI/CD workflows for ML using Jenkins, GitHub Actions, or GitLab CI
- Monitor data quality, performance, and drift using Databricks Lakehouse Monitoring; integrate SHAP/LIME for explainability
- Automate end-to-end processes: data validation, feature generation, model building, and deployment
- Collaborate across Data Science, Engineering, DevOps, and Business teams
- Mentor junior engineers, create clear documentation, and contribute to standard operating procedures
Mandatory Skills:
- Databricks (core)
- MLflow
- End-to-end MLOps & ML lifecycle
- Python, PySpark
- AWS SageMaker
- Docker/Kubernetes
- CI/CD (Jenkins, GitHub Actions, GitLab CI)

Requirements:
- 4 to 6 years of experience in MLOps, Data Engineering, or AI/ML roles
- Strong background in building, deploying, and maintaining ML models at scale in the cloud

Location: Pune, Bangalore, Noida, Gurgaon
Looking for Immediate Joiners
(ref: hirist.tech)