Babblebots is running a Job Fest to help startups hire AI Engineers / AI Interns.
These roles involve building production-grade software using cutting-edge AI and ML technologies across the full AI lifecycle, with work spanning traditional ML and computer vision as well as modern Generative AI (GenAI) and Large Language Models (LLMs).
Startup details -
A. Technology Innovation Hub (TiH, IIT Bombay)
B. FloSync
C. CARMO Technologies (SINE, IIT Bombay)
D. LXME
Many More…
Educational Qualification -
Bachelor's or Master's degree in Computer Science, Electronics & Communications, Data Science, AI / ML, or a related field.
Experience - 0-4 years
Core Skills
- Strong Python skills and solid understanding of data structures and algorithms.
- Experience building APIs / services using FastAPI (or similar) and developing internal tools or dashboards.
Hands-on Experience With (any subset)
- ML frameworks: TensorFlow, PyTorch, HF Transformers, NeMo.
- AI application / LLM orchestration: DSPy, LangChain, LangGraph, CrewAI, LlamaIndex.
- GPU / TPU training & inference: vLLM.
- Distributed training: SLURM, Ray, PyTorch DDP, NCCL.
- Data tools: Dask, Milvus, Apache Spark, NumPy.
- RAG workflows: chunking, embeddings, vector databases (Pinecone, Weaviate, Milvus).
- Agent protocols: MCP, A2A, ACP.
Key Responsibilities
- Build, fine-tune, and deploy LLMs, GenAI models, and multimodal systems (text, speech, vision).
- Develop ML / AI algorithms using traditional ML and deep learning; design and validate architectures including CNNs, ResNet, EfficientNet, YOLO, U-Net, Mask R-CNN, and ViT / Swin.
- Manage data workflows: EDA, preprocessing, feature engineering, data curation, and end-to-end ETL / pipeline setup for training and retraining.
- Implement MLOps practices including model versioning, experiment tracking, monitoring, and CI / CD.
- Deploy models to production via APIs / microservices; write clean, scalable, production-ready code for training and inference.
- Design robust AI solution architectures and optimize performance using quantization, pruning, distributed training, and GPU / TPU acceleration.
- Build and optimize LLM / agent workflows using LangChain, LangGraph, CrewAI, and LlamaIndex, and develop RAG pipelines and prompting strategies.
- Maintain clear documentation and work closely with cross-functional teams to deliver end-to-end AI solutions.
Good to Have
- Basic knowledge of Docker, Git, and MLOps (model versioning, experiment tracking, monitoring, CI / CD for ML).
- Familiarity with AWS, GCP, or Azure (compute, storage, and AI / ML services).
- Understanding of ETL processes, data pipelines, and statistical validation methods.
- Ability to profile and optimize model performance (latency, throughput, cost).
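As an illustration of the RAG workflow named above (chunking, embeddings, vector retrieval), here is a minimal in-memory sketch. The character-window chunker, the hashing bag-of-words `embed` function, and the NumPy index are toy stand-ins of my own, not any startup's stack; a production pipeline would use a real embedding model and a vector database such as Pinecone, Weaviate, or Milvus:

```python
import numpy as np

def chunk(text, size=200, overlap=50):
    """Split text into overlapping character windows (toy chunker)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(text, dim=512):
    """Toy hashing bag-of-words embedding, normalized to unit length.
    Note: Python's str hash is salted per process, so vectors are only
    comparable within a single run; a real model replaces this."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def retrieve(query, chunks, k=2):
    """Rank chunks by cosine similarity to the query (in-memory index)."""
    index = np.stack([embed(c) for c in chunks])  # one row per chunk
    scores = index @ embed(query)                 # dot of unit vectors = cosine
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]
```

In a full pipeline the retrieved chunks would be packed into the LLM prompt; the pieces shown here cover only the indexing and retrieval half of RAG.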