About the Role
We are looking for an experienced AI / ML Architect to lead the design and implementation of advanced Generative AI and RAG (Retrieval-Augmented Generation) solutions. The role combines hands-on architecture design, pre-sales engagement, and technical leadership across enterprise AI initiatives.
You will drive solutioning around LLMs, knowledge retrieval, and MCP-based multi-agent architectures, helping customers unlock business value from AI responsibly and at scale.
Key Responsibilities
- Architect and deliver enterprise-grade AI / ML & Generative AI solutions, including RAG pipelines, LLM integrations, and intelligent agents.
- Engage in pre-sales activities: collaborate with business development, present technical solutions, estimate effort, and support proposals / PoCs for prospects.
- Design knowledge retrieval layers using vector databases (FAISS, Pinecone, Milvus, Chroma, Weaviate).
- Develop document ingestion, embedding, and context-retrieval pipelines for unstructured and structured data.
- Architect and manage MCP (Model Context Protocol) servers for secure context exchange, multi-model orchestration, and agent-to-agent collaboration.
- Define LLMOps / MLOps best practices – CI / CD for models, prompt versioning, monitoring, and automated evaluation.
- Collaborate with pre-sales and business teams to shape AI solution proposals, PoCs, and client demos.
- Lead AI innovation initiatives and mentor technical teams on GenAI, RAG, and MCP frameworks.
- Ensure data privacy, compliance, and responsible AI across all deployments.
- Work closely with the ITS and TIC teams to provide mentorship and guidance to AI developers.
Required Skills & Experience
- 12–15 years of overall experience, with 5–7 years in AI / ML and 3+ years in Generative AI / LLM architecture.
- Strong hands-on experience with RAG pipelines, vector search, and semantic retrieval.
- Proven experience integrating LLMs (OpenAI, Claude, Gemini, Mistral, etc.) using frameworks such as LangChain, LlamaIndex, or PromptFlow.
- Deep understanding of MCP servers – configuration, context routing, memory management, and protocol-based interoperability.
- Strong programming skills in Python, and familiarity with containerization (Docker, Kubernetes) and cloud AI services (Azure OpenAI, AWS Bedrock, GCP Vertex AI).
- Expertise in MLOps / LLMOps tools (MLflow, Kubeflow, LangSmith, Weights & Biases).
- Solid grounding in data engineering, pipelines, and orchestration tools (Airflow, Prefect).
- Excellent communication, client engagement, and technical presentation skills.
- Proven track record of practice building or leadership in emerging technology domains.
Preferred / Good to Have
- Experience integrating MCP servers with LangChain agents or OpenAI’s MCP ecosystem for scalable orchestration.
- Knowledge of RAG evaluation frameworks (RAGAS, TruLens) and hallucination-reduction techniques.
- Experience with enterprise data connectors (SharePoint, Confluence, SQL / NoSQL, APIs).
- Familiarity with Knowledge Graphs and hybrid retrieval (symbolic + neural).
- Exposure to agentic frameworks like LangGraph, CrewAI, or AutoGen.
- Proven experience building or scaling AI / GenAI Centers of Excellence (CoEs).
Why Join Dexian
- Be part of a growing Enterprise AI practice shaping next-generation intelligent systems.
- Opportunity to architect cutting-edge GenAI and RAG solutions for global clients.
- Collaborative and innovation-driven culture with deep technical mentorship.