About the Role
We’re building the next generation of intelligent, autonomous AI systems and are seeking 2–3 experienced AI Engineers to join our product development team. You will play a key role in designing, building, and deploying AI agents and AI assistants that leverage Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and scalable backend systems using Python.
If you love building things from scratch, experimenting fast, and scaling what works — this role is for you.
Key Responsibilities
- Design, develop, and deploy AI agents powered by cutting-edge LLMs (OpenAI, Anthropic, Mistral, Llama, etc.).
- Build end-to-end retrieval-augmented generation (RAG) pipelines spanning ingestion, chunking, embeddings, and hybrid vector search, ideally using OpenSearch or other leading technologies.
- Develop scalable Python microservices and APIs that support AI agent operations and LLM orchestration.
- Own data ingestion and storage workflows, managing relational (PostgreSQL) and vector data for efficient retrieval and context management.
- Optimize agent reasoning and memory, improving accuracy, contextual continuity, and tool integrations.
- Collaborate cross-functionally with PMs and designers to define and deliver AI-driven product features end-to-end.
- Implement monitoring, evaluation, and testing frameworks to measure model quality, latency, and reliability.
- Stay ahead of the curve on emerging frameworks and new model capabilities.
Required Skills
- Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or a related field.
- Strong hands-on experience building and deploying LLM-powered applications.
- Proven experience with AI agents, AI assistants, or conversational systems.
- Solid understanding of Retrieval-Augmented Generation (RAG) architectures and search pipelines.
- Strong proficiency in Python (FastAPI, Flask, or Django preferred).
- Experience with vector databases (e.g., Pinecone, Weaviate, FAISS, Milvus, pgvector).
- Proficient in PostgreSQL and relational schema design.
- Familiar with AI agent and LLM orchestration frameworks (LangChain, LlamaIndex, AutoGen, CrewAI, etc.).
- Experience deploying AI systems to production (cloud, APIs, monitoring, scaling).
- Familiar with Docker, Git, and CI/CD workflows.
- Proficiency in working with cloud platforms such as AWS, Azure, or Google Cloud.
- Excellent problem-solving skills and analytical thinking.
- Strong communication skills to collaborate with cross-functional teams.
- Startup-oriented execution mindset, including:
  - Strong customer focus and the ability to translate user needs into AI-driven solutions.
  - High level of ownership across the full product lifecycle, from design to deployment.
  - Ability to iterate quickly, experiment, and adapt in fast-moving environments.
  - Bias for action and comfort making decisions under uncertainty.
Nice to Have
- Experience with agent frameworks (e.g., LangGraph, LangChain, AutoGen, CrewAI).
- Familiarity with embedding models, re-ranking, and search relevance tuning.
- Experience building internal or customer-facing enterprise search or knowledge assistant products.