About the Role
We are hiring talented and motivated engineers to join our LLM and Agentic AI Engineering team.
In this role, you will work across the full lifecycle of modern AI systems, from hands-on programming and system design to prompt engineering, agent orchestration, evaluation pipelines, and enterprise deployment.
You will help build production-grade agentic AI applications that go beyond static copilots—systems that use tools, maintain memory, are continuously evaluated, and integrate deeply with enterprise workflows.
This role is ideal for engineers who actively use next-generation AI developer tools (e.g., Codex-style code agents, Cursor-like IDE copilots, Google’s agent workbench frameworks such as Antigravity) and want to shape how these tools are operationalized in real production systems.
Key Responsibilities
🔹 Programming & Software Engineering (Core Responsibility)
- Write clean, maintainable, and production-quality code in Python and related backend/frontend technologies.
- Design and implement modular services, libraries, and APIs supporting LLM and agentic workflows.
- Build scalable backend components for agent execution, memory management, retrieval, and evaluation.
- Follow software engineering best practices including code reviews, testing, documentation, and CI/CD.
🔹 LLM Integration, Prompting & Developer Tooling
- Design, test, and operationalize LLM-powered workflows using modern AI developer tools and workbenches.
- Develop robust system prompts, task schemas, and tool interfaces optimized for reliability and repeatability.
- Evaluate foundation models, prompting strategies, and tool-use patterns using structured AI workbench environments.
🔹 Agentic Systems & Tool Orchestration
- Build agentic workflows capable of planning, reasoning, tool invocation, and multi-step task execution.
- Integrate agents with internal APIs, databases, codebases, and enterprise systems.
- Design stateful agents with explicit control over memory, retries, tool boundaries, and failure modes.
🔹 Retrieval, Memory & Knowledge Systems
- Implement RAG pipelines using vector databases (Elasticsearch, FAISS, etc.) and hybrid retrieval approaches.
- Design contextual memory layers (episodic, semantic, task-level) to support long-running and adaptive agents.
- Optimize grounding strategies to reduce hallucinations and improve factual consistency.
🔹 Data & AI Pipelines
- Build pipelines to collect, clean, structure, and version datasets used for prompting, retrieval, and evaluation.
- Incorporate human feedback and production signals to iteratively improve agent behavior.
🔹 Evaluation, Safety & Observability
- Implement continuous evaluation frameworks covering task success, reliability, drift, and failure patterns.
- Instrument agents to capture prompts, tool calls, intermediate steps, and outputs for traceability and audit.
- Contribute to governance-aligned practices such as risk scoring, reproducibility, and audit readiness.
🔹 Collaboration & Delivery
- Work closely with senior engineers, product managers, and domain experts to translate real business workflows into deployed agentic systems.
- Participate in design reviews, agent behavior analysis, and iteration cycles.
🔹 Continuous Learning
- Stay current with evolving AI developer tooling, agent frameworks, and evaluation platforms across OpenAI, Google, and open-source ecosystems.
- Track emerging best practices in enterprise AI governance (e.g., NIST AI RMF, EU AI Act).
Qualifications
✅ Required
- 3+ years of hands-on professional experience in software engineering, AI/ML systems, or backend/platform development.
- B.Tech/M.Tech in Computer Science, AI/ML, Data Science, or a related discipline from a reputed institute.
- Strong fundamentals in software engineering, machine learning, NLP, and Python-based development.
- Hands-on experience using AI-assisted development tools (Codex-style agents, Cursor, or similar IDE copilots).
- Familiarity with embeddings, vector search, and retrieval-based systems.
- Ability to translate ambiguous problem statements into working, production-ready code.
Preferred / Bonus
- Experience building or extending agent frameworks and tool-based workflows.
- Exposure to evaluation harnesses, prompt testing frameworks, or AI workbench platforms.
- Understanding of AI safety, observability, and governance considerations.
- Experience with containerized or API-driven deployments (Docker, REST, CI/CD).
- Frontend exposure (React or similar) for building internal AI tooling or dashboards.
Job Details
Job Type: Full-time
Benefits: Health Insurance
Work Location: In-person