MLOps Engineer — Databricks
Client: A large global enterprise (name not disclosed)
Location: India
Work Model: 100% Remote
Contract: 6 months (initial) with possibility of extension
Start Date: ASAP
Engagement: Full-time / Long-term contract
Role Overview
We are seeking an experienced Databricks MLOps Engineer to design, build, and manage scalable machine learning operations on the Databricks Lakehouse Platform. The role involves automating ML workflows, operationalizing models, enabling reproducible pipelines, and ensuring governance and monitoring across the ML lifecycle.
Key Responsibilities
1. Develop Scalable MLOps Pipelines
Build automated ML pipelines for training, validation, deployment, and batch/real-time inference.
Use Databricks Workflows, Jobs, Repos, and Delta Live Tables where applicable.
Implement distributed training and inference pipelines using MLflow and PySpark.
2. Model Lifecycle Management
Manage model versioning and promotion across dev → staging → production using MLflow Model Registry.
Create reproducible workflows for model packaging, deployment, and rollback.
3. CI/CD Integration
Build and integrate ML pipelines with CI/CD using Azure DevOps, GitHub Actions, or Jenkins.
Automate testing, validation, and deployment for ML artifacts, notebooks, and infrastructure.
4. Feature Engineering & Data Pipelines
Collaborate with Data Engineering teams to build optimized Delta Lake pipelines (Bronze/Silver/Gold architecture).
Implement feature engineering workflows and support feature reuse at scale.
5. Monitoring & Governance
Set up model monitoring for performance, drift, data quality, and lineage.
Use Databricks-native tools, MLflow metrics, and cloud monitoring services (Azure/AWS).
Ensure compliance through logging, auditing, permissions, and environment governance.
6. Cross-Functional Collaboration
Work closely with Data Scientists, Data Engineers, Cloud teams, and Product teams.
Document workflows, best practices, and reusable MLOps components.
Required Skills & Qualifications
Strong hands-on experience with Databricks (Workflows, Repos, Jobs, Compute)
Proficiency with MLflow (Tracking, Registry, Model Deployment)
Expertise in Delta Lake, PySpark, and distributed data pipelines
Solid programming skills in Python and SQL
Experience with CI/CD tools: Azure DevOps, GitHub Actions, Jenkins
Familiarity with cloud platforms: Azure, AWS, or GCP
Understanding of containerization (Docker) and orchestration (Kubernetes)
Background in ML model training, serving, and observability
Preferred Qualifications
Databricks certifications:
Databricks Certified Machine Learning Professional
Databricks Certified Data Engineer Associate/Professional
Experience with Unity Catalog for governance
Experience implementing feature stores
Knowledge of ML observability tools (WhyLabs, Monte Carlo, Arize AI, etc.)
MLOps Engineer • Pimpri, Maharashtra, India