Focus: Building and Deploying Scalable AI Systems
Goal: Create Robust, Scalable, and Efficient AI-Powered Solutions
Core Responsibilities:
- Develop and implement machine learning and deep learning algorithms using Python, TensorFlow, PyTorch, and Scikit-learn
- Design scalable model training and inference pipelines leveraging Docker, Kubernetes, MLflow, and Airflow
- Deploy models to cloud platforms such as AWS SageMaker, GCP Vertex AI, and Azure ML
- Integrate CI/CD workflows using GitHub Actions and infrastructure as code via Terraform
- Optimize models using ONNX, TensorRT, and techniques like pruning and quantization
- Manage large-scale data processing with Spark, Kafka, and Hadoop
- Work with structured and unstructured data stored in SQL, NoSQL, and graph databases
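As a minimal illustration of the first responsibility, the sketch below trains and evaluates a Scikit-learn classifier. The synthetic dataset and the choice of RandomForestClassifier are placeholder assumptions for demonstration, not a prescribed pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real features and labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a baseline model and evaluate on the held-out split.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

In practice the trained `model` would then be packaged (e.g. via Docker) and served through one of the deployment targets listed below.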
Tech Stack:
- Programming: Python
- ML/DL Frameworks: TensorFlow, PyTorch, Scikit-learn
- MLOps Tools: MLflow, Airflow, Docker, Kubernetes
- Cloud Platforms: AWS (SageMaker), GCP (Vertex AI), Azure ML
- Big Data: Spark, Kafka, Hadoop
- Databases: SQL, NoSQL, GraphDBs
- DevOps: CI/CD, GitHub Actions, Terraform
- Model Optimization: ONNX, TensorRT, Pruning, Quantization
Skills Required:
ML, NLP, Cloud, AI, Python