Test generative AI solutions on AWS, validating LLMs, RAG pipelines, prompts, and agentic workflows with n8n and Python libraries such as deepchecks and LangChain.
Key Requirements
1) Test LLM outputs on AWS Bedrock using boto3 (see the first sketch below)
2) Validate fine-tuned LLMs on AWS SageMaker with deepchecks
3) Verify LangChain prompts in AWS environments
4) Test RAG pipelines with AWS OpenSearch and LangChain (see the second sketch below)
5) Validate AI agents (CrewAI, AutoGen) on AWS Lambda
6) Test n8n agentic workflows, e.g. n8n.io/workflows/6270
7) Ensure deployment stability on Amazon ECS/EKS with Docker
8) Monitor performance with Amazon CloudWatch and wandb (see the third sketch below)
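As an illustration of requirement 1, a Bedrock output test driven from pytest via boto3 might look like the minimal sketch below; the model ID, region, prompt, and assertion thresholds are assumed placeholders, not project specifics.

# Minimal sketch: exercising a Bedrock-hosted model from pytest via boto3.
# Model ID, region, and thresholds are assumptions for illustration only.
import boto3
import pytest

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed model; swap for the deployed one

@pytest.fixture(scope="module")
def bedrock():
    # bedrock-runtime is the data-plane client used for inference calls
    return boto3.client("bedrock-runtime", region_name="us-east-1")

def test_summary_is_bounded_and_on_topic(bedrock):
    prompt = "Summarise in one sentence: AWS Lambda runs code without provisioning servers."
    resp = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 128, "temperature": 0.0},
    )
    text = resp["output"]["message"]["content"][0]["text"]
    # Cheap deterministic checks; semantic checks (deepchecks, LLM-as-judge) would layer on top
    assert text.strip(), "model returned an empty completion"
    assert len(text.split()) < 60, "summary exceeded the expected length budget"
    assert "lambda" in text.lower(), "summary lost the key entity"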
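For requirement 4, a retrieval-quality check against an OpenSearch-backed RAG index could be sketched as follows; the endpoint, index name, embedding model, and query/expectation pair are illustrative assumptions.

# Sketch: retrieval check for a RAG index in OpenSearch via LangChain.
# Endpoint, index name, embedding model, and the query/expectation pair are placeholders.
from langchain_aws import BedrockEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")  # assumed embedding model
store = OpenSearchVectorSearch(
    opensearch_url="https://example-domain.us-east-1.es.amazonaws.com",  # placeholder endpoint
    index_name="docs-index",                                             # placeholder index
    embedding_function=embeddings,
)

def test_retrieval_surfaces_expected_source():
    # A known question from the evaluation set; the expected keyword is illustrative
    docs = store.similarity_search("How do I rotate IAM access keys?", k=4)
    assert docs, "retriever returned no documents"
    assert any("iam" in d.page_content.lower() for d in docs), "expected IAM content not in top-k"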
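For requirement 8, evaluation scores can be pushed to CloudWatch so dashboards and alarms track quality drift alongside wandb runs; the namespace, metric name, and dimensions in this sketch are assumptions for illustration.

# Sketch: publishing an evaluation metric to CloudWatch with boto3.
# Namespace, metric name, and dimension values are assumed placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_eval_metric(score: float, model_id: str) -> None:
    cloudwatch.put_metric_data(
        Namespace="GenAI/Evaluation",  # assumed namespace
        MetricData=[{
            "MetricName": "AnswerRelevance",
            "Dimensions": [{"Name": "ModelId", "Value": model_id}],
            "Value": score,
            "Unit": "None",
        }],
    )

# Example usage with an illustrative score and model ID
publish_eval_metric(0.87, "anthropic.claude-3-haiku-20240307-v1:0")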
Must Have Skills
5 years of QA automation experience
2 years testing GenAI/LLMs using Python
Expertise in AWS Bedrock, SageMaker, Lambda, and boto3
Proficiency in deepchecks, LangChain, CrewAI, AutoGen, and wandb
Experience testing n8n workflows, RAG pipelines, and prompts
Preferred Skills
AWS certification (Machine Learning or Solutions Architect)
Familiarity with LlamaIndex and n8n templates
Mandatory Skills: Agentic Framework, AI/Generative AI, Jenkins, User Acceptance Testing, Functional/System Testing, In-Sprint Testing, Regression Testing, SQL & Database Testing, RTM, Selenium (Java), SIT, Test Design and Execution, Test Reports and Dashboards
Quality Specialist • Aligarh, IN