Role : AI / ML Specialist (WFO)
Years of Experience : 2 to 10 years
Notice Period Preferred : Immediate joiners to 15 days

Responsibilities :
- Fine-tune LLaMA models on domain-specific data (e.g., finance, healthcare, telecom, legal).
- Curate, clean, and preprocess datasets for supervised and unsupervised fine-tuning.
- Implement low-rank adaptation (LoRA), QLoRA, other PEFT methods, or full fine-tuning as needed (see the sketch after this list).
- Optimize model training performance with appropriate tools and libraries (e.g., mixed precision, gradient accumulation, quantization).
- Evaluate fine-tuned models using appropriate metrics (e.g., perplexity, task accuracy, human review); a perplexity example also follows this list.
- Deploy and integrate models with APIs, RAG pipelines, or inference servers.
- Use tools like Weights & Biases, LangSmith, or TensorBoard for training monitoring and logging.
- Conduct safety audits and hallucination checks.
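A minimal sketch of the LoRA fine-tuning workflow referenced above, assuming the Hugging Face Transformers, Datasets, and PEFT libraries. The model id, the domain_corpus.jsonl file, and all hyperparameters are illustrative placeholders rather than requirements of the role; report_to="wandb" shows one way to stream training metrics to Weights & Biases.

```python
# Minimal LoRA fine-tuning sketch (assumed model id, data file, and hyperparameters).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"    # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Attach low-rank adapters; only the adapter weights are updated during training.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Hypothetical domain dataset: a JSONL file with a "text" column.
dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama-lora-finetune",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,   # effective batch size of 16 per device
    num_train_epochs=1,
    learning_rate=2e-4,
    logging_steps=10,
    report_to="wandb",               # stream loss curves to Weights & Biases
)
trainer = Trainer(model=model, args=args, train_dataset=tokenized,
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
model.save_pretrained("llama-lora-adapter")  # saves only the adapter weights
```

QLoRA typically follows the same shape, with the base model loaded in 4-bit quantization (e.g., via a bitsandbytes quantization config) before the LoRA adapters are attached.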
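A companion sketch for the evaluation bullet: held-out perplexity for a LoRA-adapted checkpoint. The base model id, adapter directory, and domain_eval.jsonl file are hypothetical artifacts of a LoRA run like the one sketched here, and perplexity is only one of several reasonable metrics (task accuracy, human review, and hallucination rate are others).

```python
# Held-out perplexity for a LoRA-adapted checkpoint (paths are hypothetical).
import math
import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"                        # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, "llama-lora-adapter")  # attach LoRA weights
model.eval()

eval_set = load_dataset("json", data_files="domain_eval.jsonl")["train"]

losses = []
with torch.no_grad():
    for row in eval_set:
        enc = tokenizer(row["text"], return_tensors="pt", truncation=True, max_length=512)
        out = model(**enc, labels=enc["input_ids"])  # causal-LM loss = mean NLL per token
        losses.append(out.loss.item())

# Unweighted average over examples is a rough but serviceable aggregate for a sketch.
perplexity = math.exp(sum(losses) / len(losses))
print(f"held-out perplexity: {perplexity:.2f}")
```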
Required Skills :
- Familiarity with open-source LLMs beyond LLaMA (e.g., Mistral, Falcon, Mixtral).
- Hands-on with orchestration tools like LangChain, LangGraph, CrewAI, or Flowise.
- Knowledge of tokenizers, embeddings, and prompt templates (see the retrieval sketch after this list).
- Experience with LLaMA (preferably LLaMA 2 / LLaMA 3) and the Hugging Face ecosystem.
- Proficiency in Python, PyTorch, and model training workflows.
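As a small illustration of the embedding, prompt-template, and RAG-integration skills listed above, the sketch below embeds a toy document store with sentence-transformers, retrieves by cosine similarity, fills a prompt template, and generates with a stand-in model. The embedding model, documents, and distilgpt2 generator are assumptions for illustration only; a real pipeline would point at the fine-tuned LLaMA and a proper vector store.

```python
# Toy retrieval-augmented generation step: embed, retrieve, template, generate.
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

# Hypothetical domain snippets standing in for a real vector store.
documents = [
    "Premiums for the silver plan increased 4% in Q3.",
    "Claims above $10,000 require a second-level review.",
    "The churn model for prepaid telecom users is retrained monthly.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q_vec = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(doc_vecs @ q_vec)[::-1][:k]
    return [documents[i] for i in top]

PROMPT_TEMPLATE = ("Answer using only the context below.\n\n"
                   "Context:\n{context}\n\nQuestion: {question}\nAnswer:")

generator = pipeline("text-generation", model="distilgpt2")  # tiny stand-in for the fine-tuned LLaMA
question = "When are high-value claims escalated?"
prompt = PROMPT_TEMPLATE.format(context="\n".join(retrieve(question)), question=question)
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```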
(ref : hirist.tech)