Job Title: Generative AI Engineer (LLM | Python | AWS | FastAPI)
Location: Bangalore, India
Experience: 5+ years
Employment Type: Permanent
Budget: Up to ₹30 LPA
Notice Period: Immediate Joiners Preferred
About the Role
We are seeking a highly skilled Generative AI Engineer with a strong foundation in Python, Large Language Models (LLMs), AWS, and FastAPI. In this role, you will design, develop, and deploy scalable AI-driven systems and GenAI solutions that advance automation, intelligent APIs, and AI-assisted decision-making.
This position offers a unique opportunity to work on cutting-edge GenAI applications, integrate LLMs into production systems, and collaborate with cross-functional teams to create next-generation AI capabilities.
Key Responsibilities
Design, fine-tune, and deploy Large Language Models (LLMs) for real-world use cases such as chatbots, text summarization, and knowledge retrieval.
Develop end-to-end AI pipelines using Python and FastAPI, ensuring performance, scalability, and maintainability.
Build and deploy API-driven GenAI services and integrate them into cloud-native environments (AWS preferred).
Leverage AWS services (Lambda, S3, EC2, SageMaker, API Gateway) for scalable AI model hosting and automation.
Collaborate with data scientists and MLOps engineers to improve model training, evaluation, and deployment pipelines.
Implement prompt engineering, retrieval-augmented generation (RAG), and custom embeddings for enterprise-level AI applications.
Ensure data security, version control, and model governance throughout the AI lifecycle.
Conduct continuous performance optimization of AI systems and stay current with the latest Generative AI and LLM research.
Must-Have Skills
Programming: Expert in Python (OOP, async programming, API integration).
Frameworks: FastAPI (must-have), Flask (good to have).
AI/ML: Hands-on experience with LLMs, prompt engineering, LangChain, or RAG pipelines.
Cloud: Proficiency in AWS (Lambda, SageMaker, EC2, S3, API Gateway).
MLOps: Experience with model deployment, Docker, CI/CD, and API-based inference.
NLP: Strong knowledge of NLP concepts, embeddings, and fine-tuning pre-trained transformer models (e.g., GPT, LLaMA, Falcon, Mistral).
Good to Have
Experience with vector databases (FAISS, Pinecone, Weaviate, or Chroma).
Familiarity with OpenAI APIs, Hugging Face Transformers, and the LangChain framework.
Exposure to frontend AI integrations (Streamlit, Gradio, etc.) for demos and prototyping.
Understanding of data engineering workflows and API orchestration.