Key Responsibilities:
- Architect and implement scalable MLOps pipelines for model development, deployment, and monitoring
- Lead end-to-end operationalization of ML models using AWS SageMaker and the broader AWS ecosystem (an illustrative deployment sketch follows this list)
- Build and manage CI/CD pipelines for ML workflows using tools such as GitHub Actions, Jenkins, or AWS CodePipeline
- Automate key model lifecycle stages including training, version control, deployment, and rollback
- Collaborate closely with data scientists, ML engineers, and DevOps teams for seamless model integration
- Monitor live models for performance degradation, data drift, and reliability issues (a minimal drift-check sketch appears at the end of this posting)
- Establish governance and best practices for reproducibility, security, and compliance in ML systems
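For illustration only, here is a minimal sketch of the kind of deployment step these responsibilities describe, using the SageMaker Python SDK. The bucket path, execution role ARN, container image URI, and endpoint name are placeholders rather than details from this posting; a production pipeline would drive this step from CI/CD with versioned artifacts and rollback handling.

```python
# Minimal sketch: deploy a trained model artifact to a SageMaker real-time endpoint.
# All identifiers below (image URI, S3 path, role ARN, endpoint name) are placeholders.
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/example-inference:latest",  # placeholder
    model_data="s3://example-bucket/models/model-v1.tar.gz",                            # placeholder
    role="arn:aws:iam::123456789012:role/ExampleSageMakerRole",                         # placeholder
    sagemaker_session=session,
)

# Provision a real-time inference endpoint backed by a single instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="example-model-endpoint",  # placeholder
)
```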
Required Skills:
- 10+ years of experience in MLOps, ML engineering, or similar domains
- Proven hands-on expertise with AWS SageMaker, Lambda, S3, CloudWatch, and other AWS services
- Strong Python programming and experience with Docker, Kubernetes, and Terraform
- Deep understanding of infrastructure-as-code and CI/CD tools
- Familiarity with model monitoring frameworks such as Prometheus, Grafana, or Evidently
- Solid grasp of ML algorithms, feature engineering, and production model deployment
Preferred Qualifications:
- AWS Certified Machine Learning – Specialty or AWS DevOps Engineer certification
- Knowledge of feature stores, model registries, and real-time inference platforms
- Experience leading cross-functional AI/ML engineering teams
Skills Required
Terraform, Docker, Kubernetes, Prometheus, Python, MLOps
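As a purely illustrative aside on the drift-monitoring responsibility above: before (or alongside) wiring metrics into Prometheus, Grafana, or Evidently, a first-pass drift check can be as simple as a per-feature two-sample test. The sketch below uses a Kolmogorov–Smirnov test from SciPy; the significance threshold and the synthetic reference/live data are assumptions for demonstration, not requirements from this posting.

```python
# Minimal sketch: flag data drift on one numeric feature with a two-sample KS test.
# The alpha threshold and synthetic data are illustrative assumptions only.
import numpy as np
from scipy.stats import ks_2samp


def feature_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True when the KS test rejects the hypothesis that both samples share a distribution."""
    result = ks_2samp(reference, live)
    return result.pvalue < alpha


# Stand-ins for a training-time feature column and recent production traffic.
rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.3, scale=1.0, size=5_000)

if feature_drifted(reference, live):
    print("Drift detected: consider alerting, retraining, or rollback")
```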