Title: DevOps / Edge AI Engineer
Location: Greater Bengaluru Area
Company Description
We are looking for exceptional talent and leadership to join our fast-growing startup, Scalable Intelligence, the world’s first company developing Agentic Silicon to power the future of AI.
Founded in 2023, we have deep customer engagements across the Americas, Europe, and Asia, and have demonstrated functional prototypes that prove our concept and vision.
Job Description
Overview:
You will be responsible for building, deploying, and maintaining the local infrastructure that powers high-performance multimodal AI models (text, image, audio, video) on a compact AI appliance. You’ll bridge the gap between hardware, ML inference, and user-facing applications, ensuring the reliability, scalability, and efficiency of on-device AI workloads.
Key Responsibilities:
- System Deployment & Orchestration
  - Containerize AI inference services and web applications using Docker or Podman.
  - Design lightweight orchestration layers for local systems (Kubernetes, Nomad, or custom orchestration).
  - Automate build, test, and deployment (CI/CD) pipelines for local-first AI workloads.
- Performance Optimization & Resource Management
  - Optimize compute utilization for concurrent multimodal workloads.
  - Develop monitoring tools for system health, thermal management, and memory/bandwidth usage.
  - Tune the OS, drivers, and I/O subsystems for maximum throughput and low latency.
- Edge Infrastructure & Networking
  - Configure low-latency local networking for browser-based access to the AI appliance.
  - Set up secure local APIs and data isolation layers, ensuring zero external data leakage.
  - Integrate hardware accelerators and manage firmware updates across different SKUs.
- Reliability, Testing, and Scaling
  - Build test harnesses to validate multimodal model performance (e.g., LLM + diffusion + ASR pipelines).
  - Implement over-the-air (OTA) update mechanisms for edge devices without exposing user data.
  - Develop monitoring dashboards and alerting for real-time performance metrics.
Required Qualifications:
- Strong background in Linux systems engineering and containerization (Docker, Podman, LXC).
- Experience deploying AI inference services locally or at the edge (llama.cpp, Ollama, vLLM, ONNX).
- Proficiency with CI/CD tools (GitHub Actions, Jenkins, ArgoCD) and infrastructure-as-code (Terraform, Ansible).
- Expertise in GPU/accelerator optimization, CUDA stack management, or similar.
- Solid understanding of networking, security, and firewall configuration for local appliances.
- Scripting and automation skills (Python, Bash, Go, or Rust).

Preferred Qualifications:
- Experience with embedded systems or edge AI devices (e.g., Jetson, Coral, FPGA-based accelerators).
- Familiarity with low-bit quantization, model partitioning, or distributed inference.
- Background in hardware/software co-design or systems integration.
- Knowledge of browser-based local apps (WebSocket, WebRTC, RESTful APIs) and AI service backends.
- Prior work in privacy-preserving AI systems or local-first architectures.

Contact
Sumit S. B
sumit@mulyatech.com
www.mulyatech.com
"Mining the Knowledge Community"
Practice Head (Talent Acquisition, Semiconductors Domain)