Job Description
Tasks & Job Responsibilities
- Design, develop, and maintain scalable infrastructure and platform components for ML and LLM solutions.
- Utilize containerization technologies (Docker / Podman) and orchestration tools (Kubernetes) to deploy and manage data and ML applications.
- Implement, manage, and monitor scalable data, CI/CD, and MLOps pipelines.
- Manage the entire lifecycle of ML and LLM models, from data preparation and model training to deployment and monitoring.
- Collaborate with data scientists, ML architects, software engineers, and other stakeholders to understand data and model requirements and deliver solutions.
- Apply best practices in MLOps and LLMOps to streamline model development, deployment, and management.
- Stay updated with the latest trends and advancements in data engineering, MLOps, and LLMOps.
- Maintain up-to-date knowledge of Microsoft technologies (MS Fabric, Data Lake, etc.).
Profile
New AI projects need to be staffed by an experienced engineer (10+ years of experience in software or data engineering, including 3+ years in MLOps / AIOps engineering). The role includes service ownership of LakeFS and related technologies around the DevOps platform, as well as work on a long-term vision with Microsoft technologies (MS Fabric with Power BI, AI, and building a modern data lake concept).