Job Summary :
We are seeking a highly experienced and skilled Machine Learning Software Engineer with 8-10 years of experience to join our team. The ideal candidate will be a deep learning expert with a strong background in optimizing and deploying machine learning models on specialized hardware, particularly ML accelerators. This role is critical for bridging the gap between theoretical model development and practical, high-performance inference on target platforms. A key focus of this position will be on model quantization and other optimization techniques to maximize efficiency and performance.
Key Responsibilities :
- Model Porting & Deployment : Port and deploy complex deep learning models from various frameworks (e.g., PyTorch, TensorFlow) to proprietary or commercial ML accelerator hardware platforms (e.g., TPUs, NPUs, GPUs).
- Performance Optimization : Analyze and optimize the performance of ML models for target hardware, focusing on latency, throughput, and power consumption.
- Quantization : Lead efforts in model quantization (e.g., INT8, FP16) to reduce model size and accelerate inference while preserving model accuracy (see the sketch after this list).
- Profiling & Debugging : Utilize profiling tools to identify performance bottlenecks and debug issues in the ML inference pipeline on the accelerator (a profiling sketch also follows this list).
- Collaboration : Work closely with the ML research, hardware, and software teams to understand model requirements and hardware capabilities, providing feedback to improve both.
- Tooling & Automation : Develop and maintain tools and scripts to automate the model porting, quantization, and performance testing workflows.
- Research & Innovation : Stay current with the latest trends and research in ML hardware, model compression, and optimization.
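As an illustration of the quantization work described above, the following is a minimal sketch of post-training dynamic quantization in PyTorch; the model, layer sizes, and tensors are hypothetical placeholders, not the team's actual stack.

```python
import io

import torch
import torch.nn as nn


def serialized_size_mb(model: nn.Module) -> float:
    """Serialize a model's weights to memory and report the size in MB."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return buffer.tell() / 1e6


# Hypothetical stand-in for a trained FP32 network.
fp32_model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Post-training dynamic quantization: Linear weights are stored as INT8
# and activations are quantized on the fly at inference time.
int8_model = torch.ao.quantization.quantize_dynamic(
    fp32_model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    fp32_out = fp32_model(x)
    int8_out = int8_model(x)

print(f"FP32 size: {serialized_size_mb(fp32_model):.2f} MB")
print(f"INT8 size: {serialized_size_mb(int8_model):.2f} MB")
print(f"Max output drift: {(fp32_out - int8_out).abs().max().item():.4f}")
```

A production flow on an accelerator would more typically use static post-training quantization with a calibration set, or quantization-aware training, followed by compilation for the target runtime.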
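Similarly, here is a minimal sketch of the kind of bottleneck profiling mentioned under Profiling & Debugging, using PyTorch's built-in profiler on a hypothetical model; accelerator-specific profilers would replace this on real target hardware.

```python
import torch
from torch.profiler import ProfilerActivity, profile, record_function

# Hypothetical model standing in for a ported network.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()
x = torch.randn(64, 512)

# Collect CPU (and CUDA, when available) timings for one inference pass.
activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True) as prof:
    with record_function("inference"):
        with torch.no_grad():
            model(x)

# Rank operators by self CPU time to surface bottlenecks.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
```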
Qualifications :
- Experience : 8-10 years of professional experience in machine learning engineering, with a focus on model deployment and optimization.
Technical Skills :
- Deep expertise in deep learning frameworks such as PyTorch and TensorFlow.
- Proven experience in optimizing models for inference on GPUs, NPUs, TPUs, or other specialized accelerators.
- Extensive hands-on experience with model quantization (e.g., Post-Training Quantization, Quantization-Aware Training).
- Strong proficiency in C++ and Python, with experience writing high-performance, low-level code.
- Experience with GPU programming models such as CUDA/cuDNN.
- Familiarity with ML inference engines and runtimes (e.g., TensorRT, OpenVINO, TensorFlow Lite); see the export sketch below.
- Strong understanding of computer architecture principles, including memory hierarchies, SIMD/vectorization, and cache optimization.
- Version Control : Proficient with Git and collaborative development workflows.
- Education : Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
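As a sketch of the porting and runtime work referenced above, the example below exports a hypothetical PyTorch model to ONNX, a common interchange step before handing a model to runtimes such as TensorRT or OpenVINO; the file name, tensor names, and opset are illustrative assumptions.

```python
import torch

# Hypothetical model standing in for a network to be ported.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()
dummy_input = torch.randn(1, 512)

# Export to ONNX so accelerator runtimes (e.g., TensorRT, OpenVINO) can
# compile and optimize the graph for the target hardware.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                          # illustrative output path
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)
```

From there, a hardware-specific tool such as TensorRT's trtexec or OpenVINO's model converter would consume the ONNX file and build an optimized engine for the target device.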
Preferred Qualifications :
- Experience with hardware-aware model design and co-design.
- Knowledge of compiler technologies for deep learning.
- Contributions to open-source ML optimization projects.
- Experience with real-time or embedded systems.
- Knowledge of cloud platforms (AWS, GCP, Azure) and MLOps best practices.
- Familiarity with CI/CD pipelines and automated testing for ML models.
- Domain knowledge in areas like computer vision, natural language processing, or speech recognition.
(ref : hirist.tech)