12–15 years of overall experience, including 5–7 years in AI/ML and 3+ years in Generative AI / LLM architecture.
Strong hands-on experience with RAG pipelines, vector search, and semantic retrieval.
Proven experience integrating LLMs (OpenAI GPT, Anthropic Claude, Google Gemini, Mistral, etc.) using frameworks such as LangChain, LlamaIndex, or PromptFlow.
Deep understanding of Model Context Protocol (MCP) servers – configuration, context routing, memory management, and protocol-based interoperability.
Strong programming skills in Python and familiarity with containerization (Docker, Kubernetes) and cloud AI services (Azure OpenAI, AWS Bedrock, GCP Vertex AI).
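To make the retrieval requirement above concrete, here is a minimal sketch of the core idea behind vector search in a RAG pipeline: rank documents by cosine similarity between embeddings. The `embed` function below is a toy deterministic stand-in (character-bigram hashing) for a real embedding model; all names here are illustrative, not from any particular framework.

```python
import numpy as np

def embed(text, dim=64):
    # Toy deterministic "embedding": hash character bigrams into a vector.
    # A real pipeline would call an embedding model (OpenAI, sentence-transformers, etc.).
    vec = np.zeros(dim)
    low = text.lower()
    for a, b in zip(low, low[1:]):
        vec[(ord(a) * 31 + ord(b)) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, docs, k=2):
    # Rank documents by cosine similarity to the query embedding
    # (vectors are unit-normalized, so the dot product is the cosine).
    q = embed(query)
    scored = sorted(docs, key=lambda d: float(q @ embed(d)), reverse=True)
    return scored[:k]

docs = [
    "Kubernetes schedules containers across a cluster.",
    "Vector search ranks documents by embedding similarity.",
    "RAG grounds LLM answers in retrieved context.",
]
top = retrieve("how does vector similarity search work", docs, k=1)
```

In production this ranking step is delegated to a vector database (FAISS, pgvector, Pinecone, etc.), but the similarity logic is the same.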
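The context-routing and memory-management concepts named in the MCP requirement can be sketched in isolation. The following is a toy illustration only: the class, method names, and sliding-window memory are hypothetical and do not reflect the actual Model Context Protocol API or wire format.

```python
from collections import deque

class ContextRouter:
    """Toy context router: dispatches messages by kind and keeps a
    bounded sliding-window memory of recent exchanges. Hypothetical
    sketch, not the real MCP server interface."""

    def __init__(self, max_memory=3):
        self.handlers = {}
        self.memory = deque(maxlen=max_memory)  # oldest entries drop off

    def register(self, kind, handler):
        # Route messages of a given kind to a handler callable.
        self.handlers[kind] = handler

    def dispatch(self, kind, payload):
        # Look up the handler, run it, and record the exchange in memory.
        result = self.handlers[kind](payload)
        self.memory.append((kind, payload, result))
        return result

router = ContextRouter(max_memory=2)
router.register("echo", lambda p: p)
router.register("upper", str.upper)
router.dispatch("echo", "hello")     # → "hello"
router.dispatch("upper", "context")  # → "CONTEXT"
```

The bounded deque stands in for memory management: only the most recent `max_memory` exchanges are retained, which is the simplest form of context-window eviction.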