Talent.com
AI / ML Solution Architect

Fission Labs, India
Job description

Role: Solution Architect - AI / ML

Experience: 10+ years

Location: Hyderabad & Pune

Key Responsibilities:

Architecture & Infrastructure

Design, implement, and optimize end-to-end ML training workflows including infrastructure setup, orchestration, fine-tuning, deployment, and monitoring.

Evaluate and integrate multi-cloud and single-cloud training options across AWS and other major platforms.

Lead cluster configuration, orchestration design, environment customization, and scaling strategies.

Compare and recommend hardware options (GPUs, TPUs, accelerators) based on performance, cost, and availability.

Performance & Optimization

Conduct performance benchmarking, hardware comparisons, and cost-performance trade-off analysis.

Implement real-time monitoring and control systems with metrics collection, observability, and custom performance tracking.

Optimize cost models, budget predictability, and resource utilization.

Data & Training Pipelines

Architect and validate data pipelines with storage, persistence, and throughput optimization.

Oversee data quality validation, pre-processing, and long-term experiment tracking.

Support framework flexibility for diverse training techniques (supervised, unsupervised, fine-tuning, reinforcement learning).

Integration & Deployment

Ensure seamless deployment across multi-cloud environments with security, compliance, and regional availability considerations.

Collaborate with DevOps and MLOps teams for automation, fault tolerance, job scheduling, and orchestration testing.

Provide technical guidance on integration with existing enterprise systems.

Analysis & Recommendations

Lead result analysis, insight generation, and actionable recommendations for training performance and user experience improvements.

Present performance findings, benchmarking reports, and speculative decoding insights to stakeholders.

Technical Expertise Requirements

10+ years in architecture roles with at least 5 years in AI / ML infrastructure and large-scale training environments.

Expert in AWS cloud services (EC2, S3, EKS, SageMaker, Batch, FSx, etc.) and familiar with Azure, GCP, and hybrid / multi-cloud setups.

Strong knowledge of AI / ML training frameworks (PyTorch, TensorFlow, Hugging Face, DeepSpeed, Megatron, Ray, etc.).

Proven experience with cluster orchestration tools (Kubernetes, Slurm, Ray, SageMaker, Kubeflow).

Deep understanding of hardware architectures for AI workloads (NVIDIA, AMD, Intel Habana, TPU).

Performance & Cost Management

Demonstrated expertise in performance benchmarking, reliability testing, and training speed optimization.

Skilled in cost modeling, budget forecasting, and cost-performance balancing.

Monitoring & Observability

Experience with real-time monitoring tools (Prometheus, Grafana, CloudWatch) and custom metric instrumentation.

Familiarity with network performance testing, regional load testing, and multi-region deployment strategies.
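By way of illustration, the custom metric instrumentation this section asks for often reduces to collecting latency samples and reporting tail percentiles. The sketch below uses only the Python standard library; `LatencyTracker` is a hypothetical name for illustration, not any monitoring tool's real API.

```python
# Minimal sketch of custom latency instrumentation (the kind of signal
# exported to Prometheus / Grafana / CloudWatch). LatencyTracker is a
# hypothetical name, not a real monitoring-library class.
import statistics

class LatencyTracker:
    def __init__(self):
        self.samples_ms = []

    def observe(self, value_ms):
        # Record one request latency sample, in milliseconds
        self.samples_ms.append(value_ms)

    def percentile(self, p):
        # statistics.quantiles with n=100 returns the 1%..99% cut points
        cuts = statistics.quantiles(self.samples_ms, n=100)
        return cuts[p - 1]

tracker = LatencyTracker()
for v in [12, 15, 14, 13, 250, 16, 14, 15, 13, 12]:
    tracker.observe(v)

print(tracker.percentile(50))  # median request latency
print(tracker.percentile(99))  # tail latency, dominated by the 250 ms outlier
```

In production these values would typically be exported as histogram metrics and aggregated by the monitoring backend rather than computed in-process; the point is that p99, not the mean, is what drives latency SLAs.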

Soft Skills

Strong problem-solving skills with an analytical mindset.

Excellent communication skills to present technical trade-offs and strategic recommendations to executives and engineering teams.

Ability to lead cross-functional teams and drive innovation in AI infrastructure.

Other Required Skills:

LLM Inference Optimization

Expert knowledge of inference optimization techniques including speculative decoding, KV cache optimization (MQA / GQA / PagedAttention), and dynamic batching.

Deep understanding of prefill vs decode phases, memory-bound vs compute-bound operations.

Experience with quantization methods (INT4 / INT8, GPTQ, AWQ) and model parallelism strategies.
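To make the KV cache point above concrete, its size per token follows from the model shape, which is why reducing KV heads (MQA/GQA) shrinks it directly. The model dimensions below are illustrative (roughly Llama-2-7B-shaped), not taken from this posting.

```python
# Back-of-envelope KV cache sizing. Dimensions are illustrative placeholders.
def kv_cache_bytes_per_token(n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """One K and one V tensor per layer; fp16 by default (2 bytes/element)."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

# Multi-head attention: every query head keeps its own KV head (32 of them)
mha = kv_cache_bytes_per_token(n_layers=32, n_kv_heads=32, head_dim=128)
# Grouped-query attention: 8 shared KV heads -> 4x smaller cache
gqa = kv_cache_bytes_per_token(n_layers=32, n_kv_heads=8, head_dim=128)

print(mha)  # 524288 bytes (~512 KiB) per cached token
print(gqa)  # 131072 bytes (~128 KiB) per cached token
```

At these rates a single 4K-token context occupies gigabytes of GPU memory across a batch, which is the pressure that PagedAttention's block-wise cache allocation is designed to relieve.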

Inference Frameworks

Hands-on experience with production inference engines: vLLM, TensorRT-LLM, DeepSpeed-Inference, or TGI.

Proficiency with serving frameworks: Triton Inference Server, KServe, or Ray Serve.

Familiarity with kernel optimization libraries (FlashAttention, xFormers).

Performance Engineering

Proven ability to optimize inference metrics: TTFT (time to first token), ITL (inter-token latency), and throughput.

Experience profiling and resolving GPU memory bottlenecks and OOM issues.

Knowledge of hardware-specific optimizations for modern GPU architectures (A100 / H100).
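How the two latency metrics above fall out of per-token timestamps can be sketched in a few lines; the function and variable names here are illustrative, not any framework's API.

```python
# TTFT: wall time from request arrival to the first generated token
# (prefill-dominated). ITL: gap between consecutive tokens (decode-dominated).
def ttft_and_mean_itl(request_start, token_times):
    ttft = token_times[0] - request_start
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    mean_itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, mean_itl

# Four tokens emitted 0.35 s, 0.40 s, 0.45 s, 0.50 s after arrival:
ttft, itl = ttft_and_mean_itl(0.0, [0.35, 0.40, 0.45, 0.50])
print(ttft)  # 0.35 s of prefill latency before the first token
print(itl)   # ~0.05 s per decode step
```

The two respond to different optimizations: prefill and KV cache work mostly moves TTFT, while speculative decoding attacks ITL and dynamic batching raises throughput at some cost in ITL.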

System Architecture

Design scalable inference systems meeting strict latency SLAs and throughput requirements.

Implement production patterns for request routing, load balancing, and model versioning.

Balance trade-offs between latency, throughput, cost per token, and model accuracy.
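The cost-per-token leg of that trade-off is simple arithmetic once throughput is measured; the instance price and throughput below are placeholder numbers, not benchmark results.

```python
# Back-of-envelope serving cost. All inputs are illustrative placeholders.
def cost_per_1k_tokens(instance_usd_per_hour, tokens_per_second):
    tokens_per_hour = tokens_per_second * 3600
    return instance_usd_per_hour / tokens_per_hour * 1000

# A hypothetical $32/hr 8-GPU node sustaining 2,000 tok/s in aggregate:
print(round(cost_per_1k_tokens(32.0, 2000), 6))  # 0.004444 USD per 1K tokens
```

Larger batches typically raise aggregate tokens/s, and with it lower cost per token, while pushing up per-request TTFT and ITL, which is precisely the latency/throughput/cost balance this bullet describes.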
