About the Role:
We are looking for a Software Engineer who combines deep data engineering expertise with hands-on experience in Generative AI and Agentic AI system development on AWS Cloud.
This role is ideal for someone who can design, build, and deploy production-grade GenAI workflows integrating LLMs, vector databases, and orchestration frameworks—with the same rigor as a traditional data system.
Key Responsibilities:
Design and maintain data pipelines and AI data infrastructure on AWS (Glue, Lambda, S3, Redshift, Step Functions, Athena, etc.).
Develop and deploy LLM-based applications and Agentic AI workflows using frameworks like LangChain, LlamaIndex, or AutoGen.
Build RAG (Retrieval-Augmented Generation) pipelines using AWS services (S3, Bedrock, SageMaker, and OpenSearch or another vector DB).
Implement agentic reasoning, tool calling, and orchestration for multi-agent workflows.
Containerize and deploy AI services using Docker, ECS, or EKS, ensuring scalability, cost-efficiency, and observability.
Integrate AWS Bedrock, SageMaker, or OpenAI APIs with internal data systems and applications.
Set up monitoring, tracing, and model observability using AWS CloudWatch, X-Ray, or third-party LLMOps tools.
Collaborate with ML engineers, data scientists, and architects to take GenAI prototypes to production-ready deployments.
Required Skills & Experience:
6–10 years of total experience in Data Engineering with a strong AWS background.
Proficiency in PySpark, with hands-on, production-grade experience.
Hands-on experience with GenAI solutions in real-world environments (not just demos or PoCs).
Working knowledge of Agentic AI frameworks (LangChain, LlamaIndex, AutoGen, or similar).
Good hands-on experience in Python.
Cloud experience is a must-have; AWS is preferred.
Experience with RAG architecture, vector databases (OpenSearch, Pinecone, FAISS, Chroma, or Milvus), and embedding models.
Understanding of LLMOps, prompt lifecycle management, and performance monitoring.
Practical experience deploying workloads on AWS ECS/EKS, setting up CI/CD pipelines, and managing runtime performance.
Familiarity with IAM, VPC, Secrets Manager, and security best practices in cloud environments.
Nice to Have:
Experience with AWS Bedrock for model hosting or SageMaker for fine-tuning and evaluation.
Exposure to multi-agent architectures and autonomous task orchestration.
Contributions to open-source GenAI projects or internal AI platform initiatives.
For a quick response, interested candidates can share their resume directly, along with details such as notice period, current CTC, and expected CTC, at anubhav.pathania@impetus.com
AI Engineer • Guwahati, Assam, India