Job Description
Develop algorithms with a focus on CUDA acceleration and on converting image processing algorithms and models from CPU to GPU (an illustrative sketch of this kind of port appears after this section)
Stay updated with emerging technologies and industry best practices. Bring new ideas to the team by exploring modern frameworks or tools that could enhance the platform’s capabilities.
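As a hedged illustration of the CPU-to-GPU conversion work described above, the sketch below moves a simple image-processing step (RGB-to-grayscale) from a CPU loop into a CUDA kernel. It is a minimal example under assumed conditions (interleaved 8-bit RGB input, one thread per pixel); the function names, image dimensions, and launch parameters are hypothetical and are not taken from this role's codebase.

```cuda
// Minimal sketch (illustrative only): porting an RGB-to-grayscale step from CPU to GPU.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// CPU reference: interleaved RGB (3 bytes per pixel) to single-channel grayscale.
void grayscale_cpu(const unsigned char* rgb, unsigned char* gray, int n_pixels) {
    for (int i = 0; i < n_pixels; ++i) {
        gray[i] = static_cast<unsigned char>(
            0.299f * rgb[3 * i] + 0.587f * rgb[3 * i + 1] + 0.114f * rgb[3 * i + 2]);
    }
}

// GPU port: same arithmetic, one thread per pixel.
__global__ void grayscale_kernel(const unsigned char* rgb, unsigned char* gray, int n_pixels) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_pixels) {
        gray[i] = static_cast<unsigned char>(
            0.299f * rgb[3 * i] + 0.587f * rgb[3 * i + 1] + 0.114f * rgb[3 * i + 2]);
    }
}

int main() {
    // Hypothetical image size for the example.
    const int width = 1920, height = 1080, n_pixels = width * height;
    std::vector<unsigned char> h_rgb(3 * n_pixels, 128), h_gray(n_pixels, 0);

    // Allocate device buffers and copy the input image to the GPU.
    unsigned char *d_rgb = nullptr, *d_gray = nullptr;
    cudaMalloc((void**)&d_rgb, 3 * n_pixels);
    cudaMalloc((void**)&d_gray, n_pixels);
    cudaMemcpy(d_rgb, h_rgb.data(), 3 * n_pixels, cudaMemcpyHostToDevice);

    // Launch one thread per pixel, then copy the result back.
    const int threads = 256;
    const int blocks = (n_pixels + threads - 1) / threads;
    grayscale_kernel<<<blocks, threads>>>(d_rgb, d_gray, n_pixels);
    cudaMemcpy(h_gray.data(), d_gray, n_pixels, cudaMemcpyDeviceToHost);

    printf("first pixel gray value: %d\n", h_gray[0]);
    cudaFree(d_rgb);
    cudaFree(d_gray);
    return 0;
}
```

In practice a port like this would also include error checking on CUDA API calls and profiling to confirm the transfer cost is amortized, but the structure (host buffers, device buffers, a kernel with one thread per element) is representative of the conversion work involved.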
Required Qualifications :
Education : Master’s degree in Computer Science or a related field, or a Bachelor’s degree in Computer Science (or a related field) with at least 3 years of hands-on web application development experience.
Experience : 3+ years of professional algorithm development experience focused on CUDA acceleration and the conversion of image processing algorithms and models from CPU to GPU. Proven track record of delivering software projects from design to deployment.
Programming Skills : Deep understanding of operating systems, computer networks, and high-performance applications. Good mental model of the architecture of modern distributed systems comprising CPUs, GPUs, and accelerators. Experience deploying deep-learning frameworks based on TensorFlow and PyTorch on large-scale on-prem or cloud infrastructure. Strong background in modern and advanced C++ concepts. Strong scripting skills in Bash, Python, or similar. Good communication and the ability to write clean, efficient, and well-documented code.
Team Collaboration : Strong communication skills and ability to work effectively in a cross-functional team setting. Comfortable collaborating with diverse team members (developers, testers, domain experts) and conveying technical ideas clearly.
Adaptability : A willingness to learn new technologies and adapt to evolving project requirements. Enthusiasm for working in a dynamic, fast-changing environment where priorities may shift as the project grows.
Preferred Qualifications (Nice-to-Haves) :
HPC / Simulation Background : Experience working on projects involving scientific computing, simulations, or HPC applications. Familiarity with parallel computing concepts or engineering simulations (CFD, FEA, etc.) helps in understanding the platform’s context.
DevOps & CI/CD : Hands-on experience with DevOps tools and workflows. Knowledge of setting up CI/CD pipelines using platforms like Azure DevOps, Jenkins, or GitHub Actions to automate build and deployment processes.
Containerization & Cloud : Experience with container technologies (Docker) and orchestration (Kubernetes) for deploying microservices is a plus. Familiarity with cloud services or infrastructure (Azure, AWS, etc.) for scalable deployment environments.
Requirements
Skills : Big Data and Hadoop ecosystems, Apache Spark, Kafka.
Experience range in required skills : 6 - 8 years; candidates with 5+ years overall and a minimum of 2.5 years of Kafka administration will also be considered.
Developer • Chennai, TN, India