About Reckonsys Tech Labs
Reckonsys is a boutique software product development services firm specializing in uncommon solutions for common problems, using the right technologies and best practices. Reckonsys works with startup founders to build their MVPs and with existing enterprises to solve interesting problems.
Established in 2015, we have grown to 65+ associates over the last 10 years. Of the 40 products we have built for our clients, 20 have either been funded, been acquired, or become profitable.
Website:
Clutch.co:
We are seeking Python Developers who are hands-on with Generative AI (GenAI), Model Context Protocol (MCP), Agent-to-Agent (A2A) workflows, and Retrieval-Augmented Generation (RAG). This is not a research role; it is about shipping production-grade AI systems that work reliably in the wild.
You will architect, implement, and optimize AI backends that combine Python engineering discipline with GenAI capabilities like tool orchestration, retrieval pipelines, and observability.
Key Responsibilities

Python Engineering
- Build clean, modular, and scalable Python codebases using FastAPI / Django.
- Implement APIs, microservices, and data pipelines to support AI use cases.

Generative AI & RAG
- Implement RAG pipelines: text preprocessing, embeddings, chunking strategies, retrieval, re-ranking, and evaluation.
- Integrate with LLM APIs (OpenAI, Anthropic, Gemini, Mistral) and open-source models (Llama, MPT, Falcon).
- Handle context-window optimization and fallback strategies for production workloads.

MCP (Model Context Protocol)
- Develop MCP servers to expose tools, resources, and APIs to LLMs.
- Work with the FastMCP SDK and design proper tool / resource decorators.
- Ensure MCP servers follow best practices for discoverability, schema compliance, and security.

Agent-to-Agent (A2A) Workflows
- Design and implement multi-agent orchestration (e.g., AutoGen, CrewAI, LangGraph).
- Build pipelines for agents to delegate tasks, exchange structured context, and collaborate.
- Add observability, replay, and guardrails to A2A interactions.

Production & Observability
- Add tracing, logging, and evaluation metrics (PromptFoo, LangSmith, Ragas).
- Optimize for latency, cost, and accuracy in real-world deployments.
- Deploy solutions using Docker, Kubernetes, and cloud platforms (AWS / GCP / Azure).

Required Skills & Qualifications
- 3–7 years of professional experience with Python (3.9+).
- Strong knowledge of OOP, async programming, and REST API design.
- Proven hands-on experience with RAG implementations and vector databases (Pinecone, Weaviate, FAISS, Qdrant, Milvus).
- Familiarity with MCP (Model Context Protocol) concepts and hands-on experience with MCP server implementations.
- Understanding of multi-agent workflows and orchestration libraries (LangGraph, AutoGen, CrewAI).
- Proficiency with FastAPI / Django for backend development.
- Comfort with Docker, GitHub Actions, and CI/CD pipelines.
- Practical experience with cloud infrastructure (AWS / GCP / Azure).

Nice-to-Have
- Exposure to AI observability & evaluation (LangSmith, PromptFoo, Ragas).
- Contributions to open-source AI / ML or MCP projects.
- Understanding of compliance / security frameworks (SOC-2, GDPR, HIPAA).
- Prior work with custom embeddings, fine-tuning, or LLMOps stacks.

What We Offer
- Opportunity to own core AI modules (MCP servers, RAG frameworks, A2A orchestration).
- End-to-end involvement from architecture → MVP → production rollout.
- A fast-moving, engineering-first culture where experimentation is encouraged.
- Competitive compensation, flexible work setup, and strong career growth.

Location:
Bangalore (Hybrid) / Remote
Experience Level:
3–7 years

Compensation:
Competitive, based on expertise
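To give candidates a concrete feel for the RAG work described above (chunking, embedding, retrieval, re-ranking), here is a minimal, dependency-free sketch of that flow. All names and the bag-of-words scoring are illustrative assumptions for this posting; production systems would use learned embeddings and a vector database such as FAISS or Qdrant.

```python
import math
from collections import Counter

# Illustrative sketch of a RAG retrieval flow: chunk -> embed -> retrieve.
# Bag-of-words cosine similarity stands in for learned embeddings so the
# example stays dependency-free; it is NOT how a real pipeline would score.

def chunk(text: str, size: int = 8, overlap: int = 2) -> list[str]:
    """Split text into overlapping word windows (a common chunking strategy)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("MCP servers expose tools to LLMs. RAG pipelines retrieve "
        "relevant context before generation.")
top = retrieve("how does RAG retrieve context", chunk(docs, size=6, overlap=2))
```

The sorted top-k step is where a production pipeline would add a re-ranking model and evaluation metrics (e.g., Ragas) before passing context to the LLM.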
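Likewise, the MCP responsibilities above center on exposing tools to LLMs with discoverable schemas. The stand-in below illustrates that idea in plain Python; it is NOT the real FastMCP SDK, and the registry, decorator, and schema shape are assumptions made for illustration only.

```python
import inspect
from typing import Any, Callable

# Conceptual stand-in for MCP-style tool registration: each tool is stored
# with a human-readable description and a derived parameter schema, which is
# what lets an LLM client discover and call it. Not the real FastMCP API.

TOOLS: dict[str, dict[str, Any]] = {}

def tool(description: str) -> Callable:
    """Decorator registering a function as a discoverable tool."""
    def wrap(fn: Callable) -> Callable:
        params = {
            name: (p.annotation.__name__
                   if p.annotation is not inspect.Parameter.empty else "any")
            for name, p in inspect.signature(fn).parameters.items()
        }
        TOOLS[fn.__name__] = {
            "description": description,
            "parameters": params,  # simplified schema derived from type hints
            "handler": fn,
        }
        return fn
    return wrap

@tool("Add two integers.")
def add(a: int, b: int) -> int:
    return a + b

def call_tool(name: str, **kwargs: Any) -> Any:
    """Dispatch a tool call by name, as an MCP client request would."""
    return TOOLS[name]["handler"](**kwargs)
```

A real MCP server would additionally handle transport, resource endpoints, and schema compliance with the protocol specification; this sketch only shows why tool decorators and schemas matter for discoverability.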