Yum! Brands is hiring AI Engineers to help design and improve voice-based AI agents for Taco Bell drive-thru operations. These roles are well suited to AI engineers and data scientists looking to deepen their skills in LLM-based interaction design, speech system optimization, and production-quality prompt development.
You’ll collaborate closely with senior AI Engineers, MLEs, and QA to iterate on agent prompts, tune foundation models, and contribute to the overall agent experience for customers and employees.
Responsibilities
Prompt Engineering & Agent Design
Author and refine prompt instructions, chaining logic, and fallback strategies
Design and test multi-turn conversation flows aligned to Taco Bell brand voice
Build and maintain system personas and error handling routines
Model Tuning & Evaluation
Fine-tune LLMs, ASR models, and embedding systems under supervision
Assist in running experiments using LoRA, distillation, or pruning methods
Contribute to agent evaluation metrics, regression tracking, and A/B tests
Cross-Functional Collaboration
Work closely with MLEs on model integration and performance tuning
Partner with QA and PMs to improve agent usability, reliability, and task success rates
Help manage RAG components, context retrieval chains, and structured data inputs
Mandatory Skills
4-8 years of experience in AI Engineering, Data Science, or ML-related roles
Proficiency in Python, SQL, and AI frameworks (e.g., LangChain, HuggingFace, OpenAI APIs)
Hands-on experience with LLM/NLP fine-tuning: SFT, LoRA, QLoRA, and PEFT frameworks
Strong experience with RAG development
AWS proficiency (S3, Lambda, API Gateway, ECS/EKS, possibly SageMaker)
Ability to convert models into production-ready applications: API creation, microservices, Docker, CI/CD pipelines, Kubernetes
Experience in building data/ML pipelines for transcripts, call logs, and conversation data
Comfortable working with US engineering teams (cross-timezone collaboration)
Preferred / good to have:
Exposure to ASR/TTS outputs (voice-to-text workflows)
Understanding of Conversational AI KPIs (containment, handoff/fallback, AHT impact, etc.)
Any experience with real-time orchestration (e.g., routing calls, streaming pipelines)
Familiarity with audio/voice analytics
Experience deploying LLMs in cloud environments beyond AWS (optional)
Data Scientist • Jodhpur, Rajasthan, India