MLOps Developer
Role: MLOps Developer
Location: Hybrid / Remote
Team: AI & Innovation
Reports to: VP of Artificial Intelligence
Compensation: 28–32 LPA (based on experience and interview performance)
About BIG Language Solutions
BIG Language Solutions is a global Language Service Provider (LSP) delivering world-class translation and interpretation services for clients across industries. We combine human linguistic expertise with cutting-edge AI to make multilingual communication faster, more accurate, and more accessible. Our innovation spans both written and spoken language solutions—helping organizations break barriers in real time and at scale.
Job Summary
We are looking for an MLOps Developer to own the deployment, scaling, and reliability of machine learning systems in production. You will be responsible for building containerized ML services, operating CI/CD pipelines, and running ML workloads on Azure Kubernetes Service (AKS).
In this role, you’ll work closely with ML engineers and platform teams to take models from experimentation to high-performance, observable, and scalable production systems. This is a hands-on role for someone who enjoys working at the intersection of machine learning, cloud infrastructure, and distributed systems.
Must-Have Skills
- Docker & Containerization
  - Strong experience writing and maintaining Dockerfiles for ML training and inference workloads
- CI/CD Pipelines
  - Hands-on experience building and operating CI/CD pipelines for ML systems (model build, test, deploy, rollback)
- Azure Kubernetes Service (AKS)
  - Production experience deploying, scaling, and operating ML services on AKS, including monitoring and troubleshooting
- MLOps & Model Lifecycle
  - Experience operationalizing ML models end-to-end: training → deployment → monitoring
  - Strong understanding of model versioning, promotion, and rollback
- Model Serving & Inference
  - Experience with production inference pipelines and model serving
  - Hands-on experience with NVIDIA Triton Inference Server
  - Familiarity with ONNX, TensorRT, PyTorch, or TensorFlow
- Python & Systems
  - Advanced Python skills for production ML systems
  - Experience debugging performance issues across CPU/GPU, memory, and distributed systems
Nice-to-Have
- Kubernetes tooling (Helm, GitOps)
- CUDA / TensorRT optimization
- Feature stores or vector databases
- Streaming systems (Kafka, Redis, RabbitMQ)
What We’re Looking For
- Owns ML systems in production end to end
- Strong debugging and problem-solving mindset
- Comfortable working with ML, platform, and product teams
- Experience taking ML systems from prototype to production at scale
Think global. Think BIG.
Visit us: