Description :
The Platform ML team at AiDASH is the core team for model deployments at scale and validation of model performance. All product offerings within AiDASH are powered by ML models that process geospatial, satellite imagery, and weather data. The SDET-3 will be responsible for validating both the functional and scaling aspects of model performance.
Requirements :
- 6-8 years of experience in testing and backend automation (Rest Assured, Java, or Python with pytest).
- Strong grasp of testing fundamentals and best practices.
- Experience in ML application testing is desired.
- Strong coding skills in Java or Python; proficient in at least one and willing to learn other languages as needed.
- Good understanding of backend system design and exposure to databases; experience with at least one of SQL or NoSQL is mandatory.
- Working knowledge of application deployment and the underlying orchestration.
- Understanding of CI/CD pipelines (good to have).
- Exposure to performance testing (must have).
- Exposure to prompt-based IDEs such as Cursor (good to have).
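To illustrate the kind of backend-validation code this role calls for, here is a minimal pytest-style sketch. The payload shape, field names, and score range are hypothetical examples, not AiDASH's actual API:

```python
def validate_prediction_response(payload):
    """Collect validation errors for a hypothetical model-prediction payload."""
    errors = []
    if "model_version" not in payload:
        errors.append("missing model_version")
    score = payload.get("score")
    # bool is a subclass of int, so exclude it explicitly from "numeric".
    if not isinstance(score, (int, float)) or isinstance(score, bool):
        errors.append("score must be numeric")
    elif not 0.0 <= score <= 1.0:
        errors.append("score must be within [0, 1]")
    return errors


# pytest discovers and runs test_* functions; plain asserts are idiomatic.
def test_valid_payload_passes():
    assert validate_prediction_response({"model_version": "v1", "score": 0.87}) == []


def test_out_of_range_score_is_flagged():
    result = validate_prediction_response({"model_version": "v1", "score": 1.5})
    assert "score must be within [0, 1]" in result
```

In practice such checks would run against live service responses (e.g., via requests in Python or Rest Assured in Java) rather than hard-coded dictionaries.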
Key Responsibilities :
- Design, develop, and maintain automated test frameworks for backend services and ML applications.
- Write efficient, reusable, and scalable test code using Java or Python (preferably pytest or Rest Assured).
- Own and drive test strategies across components and services, ensuring high-quality releases.
- Collaborate closely with developers, product managers, and DevOps teams to identify test requirements early and implement them across the SDLC.
- Work on performance testing, identify bottlenecks, and propose solutions to improve system scalability.
- Contribute to CI/CD pipelines and improve the automation of testing workflows.
- Participate in code reviews and provide feedback on testability, performance, and scalability.
- Analyze production issues and proactively improve automation coverage.
- Work with ML engineers to test and validate ML model pipelines and related systems.
- Understand deployment workflows and orchestration tools (e.g., Docker, Kubernetes).
- Use and provide feedback on modern prompt-based IDEs such as Cursor (nice to have).
(ref: hirist.tech)
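Since performance testing is a must-have, here is a minimal sketch of the kind of latency check an SDET might automate. The workload and latency budget are illustrative stand-ins, not AiDASH targets; real load testing would use a dedicated tool such as Locust or JMeter:

```python
import statistics
import time


def measure_latency(fn, runs=50):
    """Time repeated calls to fn; return (p50, p95) latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    p50 = statistics.median(samples)
    p95 = samples[int(0.95 * (len(samples) - 1))]
    return p50, p95


def fake_inference():
    # Stand-in for a model call; replace with a real request in practice.
    sum(i * i for i in range(1000))


p50, p95 = measure_latency(fake_inference)
# Assert against a generous budget so the check stays stable in CI.
assert p95 < 1000.0
```

Percentile-based assertions (p95/p99 rather than averages) are the usual way to catch tail-latency regressions before release.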