1. Prayag.Ai – Introduction
Prayag.Ai is a fast-rising, AI-native startup with a projected turnover of ₹5 crore, founded by visionary technologists and first-principles thinkers committed to transforming intelligence into real-world impact. Though only six months old, Prayag has already delivered multiple domain-aware, production-grade AI systems that solve complex industry and societal problems.
Guided by senior advisors with decades of experience in enterprise software and scientific computing, Prayag combines youthful innovation with seasoned mentorship. Its agile team builds multimodal AI agents, document-intelligence pipelines, workflow copilots, and decision-support platforms using cutting-edge AI frameworks.
Prayag has rapidly built intelligent AI pipelines that ingest, clean, organize, and analyze large-scale structured and unstructured datasets. These pipelines power domain-specific models that support decision-making across finance, climate science, public health, and governance. Notable solutions include anomaly-detection engines for financial audits, platforms that convert business plans into investor dashboards, knowledge-graph systems linking documents and codebases, multilingual voice-intelligence tools, and meeting-intelligence engines.
Prayag has built large-scale patent analytics frameworks, trade-analysis systems, and AI engines that assimilate physical models with real-time data—supporting applications such as rainfall forecasting and public-health monitoring. All solutions are built on transparent, auditable data lakehouses with integrated MLOps pipelines for training, validation, shadow testing, and retraining.
As an industry partner, Prayag.Ai blends data engineering excellence, applied AI research, and rapid prototyping to deliver production-grade systems. With proven deployments across banking, healthcare, education, real estate, and the public sector, Prayag.Ai is positioned as an agile co-development partner for high-impact scientific and industrial initiatives.
Role: AI Engineer – Generative AI & Machine Learning
Location: Chennai
Experience: 3+ years
Team: Works jointly with Data Scientists and Software Engineering teams
Project: Development of enterprise-grade Generative AI and ML solutions
Role Overview
You will serve as the AI / ML specialist in a cross-functional team building large-scale generative AI and machine learning solutions for enterprise applications. Your expertise in LLMs, embeddings, model fine-tuning, and scalable ML systems will shape the core intelligence across multiple product lines.
You will be responsible for designing, training, and optimizing AI models, integrating them into applications, and ensuring they perform reliably in real-world scenarios. While domain experts guide business requirements, you will lead the full technical execution of AI / ML pipelines.
Key Responsibilities
Generative AI Development & Model Engineering
Build and optimize LLM-based systems using open-source models (Llama, Mistral, Qwen, etc.).
Perform fine-tuning (including LoRA), RAG optimization, prompt engineering, and model alignment; a minimal LoRA sketch appears at the end of this section.
Develop custom pipelines for:
text generation
summarization
semantic search
document intelligence
agent-based workflows
code generation & automation
Integrate external model APIs such as OpenAI, Anthropic, and Azure OpenAI, alongside HuggingFace models.
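For illustration, here is a minimal sketch of the kind of LoRA-based parameter-efficient fine-tuning setup referenced above, assuming the Hugging Face transformers and peft libraries; the checkpoint name, target modules, and hyperparameters are illustrative placeholders, not project defaults:

# Minimal sketch: attach LoRA adapters to an open-source causal LM so that
# only the low-rank adapter weights are trained during fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-Instruct-v0.2"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                  # adapter rank (placeholder value)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms the small trainable footprint

The adapted model can then be passed to a standard training loop, keeping GPU memory and training cost far below full fine-tuning.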
Machine Learning & Predictive Modelling
Design and train supervised and unsupervised ML models for classification, regression, and clustering (a minimal pipeline sketch follows this section).
Build scalable pipelines for data cleaning, vectorization, feature engineering, and model evaluation.
Implement deep learning architectures (Transformers, CNNs, RNNs) for domain-specific business problems.
Optimize models for latency, accuracy, and cost for production environments.
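As an illustration of the kind of pipeline described above, here is a minimal supervised classification sketch using scikit-learn; the synthetic dataset and model choice are stand-ins for real business data and domain-appropriate architectures:

# Minimal sketch: feature scaling + gradient-boosted classifier + evaluation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = Pipeline([
    ("scale", StandardScaler()),             # feature engineering step
    ("model", GradientBoostingClassifier()),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))  # precision / recall / F1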
RAG & Knowledge Engineering
Build search + GenAI pipelines using vector databases (FAISS, Pinecone, Chroma, Weaviate); a minimal retrieval sketch follows this section.
Design chunking strategies, embedding pipelines, and retrieval tuning.
Implement guardrails, grounding techniques, and hallucination-reduction workflows.
Enable domain-specific knowledge integration into LLMs.
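To make the retrieval side concrete, here is a minimal sketch of embedding document chunks into a FAISS index and fetching grounding context for a query; the embedding model and sample chunks are assumptions chosen for illustration:

# Minimal sketch: embed chunks, index them in FAISS, retrieve nearest matches.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = [
    "Invoices above 10 lakh require dual approval.",
    "Quarterly audits cover all vendor payments.",
    "Rainfall forecasts are refreshed every six hours.",
]  # illustrative document chunks
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model
vectors = encoder.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(vectors.shape[1])  # inner product = cosine on normalized vectors
index.add(np.asarray(vectors, dtype="float32"))

query = encoder.encode(["What is the approval rule for large invoices?"],
                       normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
context = [chunks[i] for i in ids[0]]  # grounding context passed into the LLM prompt
print(context)

The retrieved chunks are injected into the prompt, which is the main lever for grounding answers and reducing hallucinations.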
MLOps & Deployment
Develop reproducible ML training pipelines using MLflow, Weights & Biases, or equivalent tools (see the tracking sketch after this section).
Containerize models with Docker and deploy on AWS / GCP / Azure.
Optimize inference using quantization, distillation, and GPU acceleration.
Ensure monitoring, logging, and model governance in production.
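As a small example of reproducible experiment tracking, here is a sketch using MLflow; the experiment name, parameters, and metric values are placeholders, and the training loop is a stub:

# Minimal sketch: log parameters and metrics per run so training jobs
# can be compared, audited, and re-created later.
import mlflow

mlflow.set_experiment("genai-finetune-demo")  # hypothetical experiment name
with mlflow.start_run(run_name="lora-r8"):
    mlflow.log_params({"base_model": "mistral-7b", "lora_r": 8, "lr": 2e-4})
    for epoch, loss in enumerate([0.92, 0.61, 0.47]):  # stand-in training loop
        mlflow.log_metric("train_loss", loss, step=epoch)
    mlflow.log_metric("eval_rougeL", 0.41)  # placeholder evaluation metric

The same run can also log model artifacts and deployment metadata, which feeds directly into monitoring and governance in production.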
Required Qualifications
B.Tech / M.Tech in Computer Science, AI / ML, Data Science, or related fields.
3+ years of hands-on experience building and deploying ML and GenAI solutions.
Strong proficiency in Python, PyTorch / TensorFlow, and modern ML frameworks.
Experience working with LLMs, embeddings, transformers, and vector databases.
Solid understanding of ML fundamentals, evaluation metrics, and model lifecycle management.
Preferred Skills
Experience with LangChain, LlamaIndex, or other orchestration frameworks.
Familiarity with cloud AI stacks (AWS Sagemaker, Azure AI, GCP Vertex).
Exposure to tools such as FastAPI, Kafka, Redis, and Elasticsearch.
Understanding of responsible AI: bias mitigation, alignment, and safety best practices.
Experience with multimodal models (vision-language, speech models) is a plus.
What You Will Gain
Opportunity to build cutting-edge GenAI products used by enterprises.
Experience working on end-to-end AI systems from ideation to production deployment.
Collaboration with domain experts across industries to solve real-world problems with AI.
Deep exposure to the latest innovations in LLMs, multimodal AI, and enterprise automation.