Join Nanonets to push the boundaries of what's possible with deep learning. We're not just implementing models – we're setting new benchmarks in document AI, with our open-source models achieving nearly 1 million downloads on Hugging Face and recognition from global AI leaders.
Backed by $40M+ in total funding, including our recent $29M Series B from Accel alongside Elevation Capital and Y Combinator, we're scaling our deep learning capabilities to serve enterprise clients including Toyota, Boston Scientific, and Bill.com. You'll work on genuinely challenging problems at the intersection of computer vision, NLP, and generative AI.
Here's a quick 1-minute intro video.
What You'll Build
Core Technical Challenges:
- Train & Fine-tune SOTA Architectures: Adapt and optimize transformer-based models, vision-language models, and custom architectures for document understanding at scale
- Production ML Infrastructure: Design high-performance serving systems handling millions of requests daily using frameworks like TorchServe, Triton Inference Server, and vLLM
- Agentic AI Systems: Build reasoning-capable OCR that goes beyond extraction – models that understand context, chain operations, and provide confidence-grounded outputs
- Optimization at Scale: Implement quantization, distillation, and hardware acceleration techniques to achieve fast inference while maintaining accuracy (a brief sketch follows this list)
- Multi-modal Innovation: Tackle alignment challenges between vision and language models, reduce hallucinations, and improve cross-modal understanding using techniques like RLHF and PEFT
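To give a flavor of the optimization work above, here is a minimal sketch of post-training dynamic quantization in PyTorch. `DocEncoder` is a hypothetical stand-in, not one of our production architectures; in practice this kind of step is combined with distillation and hardware-specific acceleration.

```python
# Minimal sketch: post-training dynamic quantization in PyTorch.
# DocEncoder is a hypothetical stand-in for a document model.
import torch
import torch.nn as nn

class DocEncoder(nn.Module):
    def __init__(self, dim: int = 256, n_classes: int = 32):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(torch.relu(self.proj(x)))

model = DocEncoder().eval()

# Store Linear weights in int8 and quantize activations on the fly;
# this shrinks the model and speeds up CPU inference with little accuracy loss.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantized(torch.randn(1, 256)).shape)  # torch.Size([1, 32])
```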
Engineering Responsibilities:
- Design distributed training pipelines for models with billions of parameters using PyTorch FSDP/DeepSpeed (a brief FSDP sketch follows this list)
- Build comprehensive evaluation frameworks benchmarking against GPT-4V, Claude, and specialized document AI models
- Implement A/B testing infrastructure for gradual model rollouts in production
- Create reproducible training pipelines with experiment tracking
- Optimize inference costs through dynamic batching, model pruning, and selective computation
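As a rough illustration of the first responsibility, the sketch below shards a model with PyTorch FSDP. Everything here (the model, loss, hyperparameters, and launch setup) is a simplified placeholder rather than our actual training stack.

```python
# Minimal sketch of sharded data-parallel training with PyTorch FSDP.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_fsdp.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")  # torchrun supplies rank/world size
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for a much larger document model.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
    model = FSDP(model, device_id=local_rank)  # parameters sharded across ranks

    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(10):  # placeholder training loop
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()  # dummy objective
        loss.backward()
        optim.step()
        optim.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```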
We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity.

Technical Requirements
Must-Have:
- 4+ years of hands-on deep learning experience with production deployments
- Strong PyTorch expertise – ability to implement custom architectures, loss functions, and training loops from scratch
- Experience with distributed training and large-scale model optimization
- Proven track record of taking models from research to production
- Solid understanding of transformer architectures, attention mechanisms, and modern training techniques
- B.E./B.Tech from top-tier engineering colleges

Highly Valued:
- Experience with model serving frameworks (TorchServe, Triton, Ray Serve, vLLM)
- Knowledge of efficient inference techniques (ONNX, TensorRT, quantization)
- Contributions to open-source ML projects
- Experience with vision-language models and document understanding
- Familiarity with LLM fine-tuning techniques (LoRA, QLoRA, PEFT), sketched below
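For candidates less familiar with parameter-efficient fine-tuning, here is a minimal sketch of attaching LoRA adapters via the Hugging Face `peft` library; the base model ("gpt2") and hyperparameters are illustrative placeholders, not settings we use in production.

```python
# Minimal sketch: LoRA fine-tuning setup with Hugging Face PEFT.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the LoRA update
    target_modules=["c_attn"],  # GPT-2's fused QKV projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
# Only the small adapter matrices are trainable; the base weights stay frozen.
model.print_trainable_parameters()
```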
Why This Role is Exceptional

- Proven Impact: Our models are approaching 1 million downloads – your work will have global reach
- Real Scale: Your models will process millions of documents daily for Fortune 500 companies
- Well-Funded Innovation: $40M+ in funding means significant GPU resources and freedom to experiment
- Open Source Leadership: Publish your work and contribute to models already trusted by nearly a million developers
- Research-Driven Culture: Regular paper-reading sessions and collaboration with the research community
- Rapid Growth: Strong financial backing and Series B momentum mean ambitious projects and fast career progression
Our Recent Achievements

- Nanonets-OCR model: ~1 million downloads on Hugging Face – one of the most adopted document AI models globally
- Launched an industry-first Automation Benchmark defining new standards for AI reliability
- Published research recognized by leading AI researchers
- Built agentic OCR systems that reason and adapt, not just extract