MLOps/DataOps Engineer
About Us
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.
The Opportunity
As a DataOps/DevOps Engineer on our Generative AI team, you will work at the frontier of language model applications, developing novel solutions across the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. We seek a highly skilled engineer with a strong foundation in digital product development and a zeal for innovation, who will be responsible for deploying product updates, identifying production issues, and implementing integrations. The ideal engineer excels in agile, fast-paced settings, advocates for DevOps and CI/CD methodologies, and prioritizes customer-centric solutions. You will have the opportunity to make a meaningful impact on FICO's platform by infusing it with next-generation AI capabilities, working alongside a collaborative team to build solutions and drive innovation forward.
What You’ll Contribute
- Design, build, and maintain scalable, resilient data and ML pipelines, infrastructure, and workflows using tools such as GitHub Actions, ArgoCD, Crossplane, Terraform, Helm, and others.
- Automate infrastructure provisioning and configuration management using cloud-native services (preferably AWS) with tools like Terraform, CloudFormation, or Crossplane.
- Design, containerize, and manage Kubernetes (EKS) clusters and/or ECS environments in AWS. Collaborate with development teams to optimize performance, deployment, and cost.
- Partner with DevOps and SRE teams to ensure high availability, observability, scalability, and security of the data and ML infrastructure.
- Work closely with Data Scientists and ML Engineers to operationalize machine learning models, including building CI/CD pipelines for model training, validation, and deployment.
- Implement observability for data pipelines and ML services using tools like Prometheus, Grafana, Datadog, or similar.
- Develop and maintain automated pipelines for model retraining, drift monitoring, and versioning in production (a drift-check sketch follows this list).
- Build and maintain ML deployment workflows using tools like MLflow, SageMaker, or Kubeflow (a model-promotion sketch also follows this list).
- Support experimentation and prototyping in areas such as Machine Learning and Generative AI, transitioning successful prototypes into production systems.
- Ensure cloud infrastructure is secure, compliant, and cost-efficient, following best practices in governance, identity, and access management.
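To make the drift-monitoring bullet above concrete, here is a minimal, illustrative Python sketch using the Population Stability Index (PSI), one common way to flag feature drift between training data and live traffic. The simulated data and the 0.2 alert threshold are conventional rules of thumb for illustration only, not FICO-specific values or a prescribed implementation:

```python
"""Illustrative drift check via the Population Stability Index (PSI)."""
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time baseline and live production data.
    Bin edges come from the baseline's quantiles so both samples are
    compared on the same grid."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # A small floor avoids division by zero / log of zero in empty bins.
    e = np.clip(expected / expected.sum(), 1e-6, None)
    a = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)   # simulated training distribution
live = rng.normal(0.3, 1.1, 10_000)    # simulated shifted live traffic
print(f"PSI = {psi(train, live):.3f}")  # > 0.2 is often treated as drift
```

In a production pipeline a check like this would typically run on a schedule and, when the threshold is crossed, trigger the retraining workflow described above.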
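And a minimal sketch of the kind of model-promotion step such deployment workflows automate, assuming an MLflow tracking server reachable at MLFLOW_TRACKING_URI and an already-registered model. The model name "fraud-scorer" and the val_auc metric are hypothetical, and registry stage transitions are just one of the promotion mechanisms MLflow offers:

```python
"""Sketch of a gated model-promotion step against an MLflow registry."""
import os

import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri(os.environ["MLFLOW_TRACKING_URI"])
client = MlflowClient()

def promote_latest_if_better(model_name: str, metric: str, threshold: float) -> None:
    """Promote the newest registered version to Production when the
    validation metric logged on its training run clears the threshold."""
    versions = client.search_model_versions(f"name='{model_name}'")
    latest = max(versions, key=lambda v: int(v.version))
    run = client.get_run(latest.run_id)
    score = run.data.metrics.get(metric)
    if score is not None and score >= threshold:
        client.transition_model_version_stage(
            name=model_name, version=latest.version, stage="Production"
        )

promote_latest_if_better("fraud-scorer", metric="val_auc", threshold=0.90)
```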
What We’re Seeking
- 5+ years of experience in DataOps, MLOps, or related fields, with at least 2 years focused on ML model operationalization and workflow automation.
- Proficiency in AWS services including EC2, S3, IAM, ACM, Route 53, CloudWatch, EKS, and ECS.
- Experience with infrastructure-as-code (IaC) tools such as Terraform, CloudFormation, and Helm.
- Familiarity with CI/CD for ML pipelines, GitOps practices, and tools like GitHub Actions, Jenkins, or Argo Workflows.
- Strong scripting and automation skills using Bash, Python, or GitHub workflows.
- Understanding of observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or OpenTelemetry).
- Experience with feature stores (e.g., Feast, Tecton) and knowledge of data mesh, lakehouse architectures, or modern data stack concepts.
- Comfort working with structured and unstructured data across various storage and pipeline systems.
- Solid understanding of security best practices for cloud and Kubernetes environments, including secrets management, identity and access control, and policy enforcement.
- Familiarity with data governance, lineage, and metadata management is a plus.
- Excellent collaboration and communication skills, with a proven ability to work effectively in cross-functional, globally distributed teams.
- A bachelor's degree in Computer Science, Engineering, or a related discipline, or equivalent hands-on industry experience.
Our Offer to You
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.