Overview : We are looking for evaluators proficient in one or more Indian languages to review and compare AI-generated responses. The role focuses on identifying toxic or harmful content in both native scripts and transliterated text, and on assessing model performance across multiple datasets.
Rate per hour : INR 450
Minimum Commitment : 5-6 hours per day
Key Responsibilities :
- Evaluate AI model outputs in various Indic languages (native scripts and transliteration)
- Identify and flag toxic, harmful or hate-based content, including subtle or context-dependent cases
- Compare model responses and provide performance assessments based on predefined criteria
- Classify the type and severity of toxicity, e.g. hate speech, harassment, abusive language
- Provide brief explanations for flagged items where required
- Ensure consistency, accuracy and adherence to project guidelines
Qualifications / Required Skills :
- Proficient in English and your native language
- Minimum 1 year of experience in content writing, content moderation, linguistic evaluation or a similar domain
- Strong understanding of cultural nuances, slang and context-dependent expressions
- Ability to identify toxicity in both native script and transliterated formats, e.g. Hinglish, Tamlish
- Good analytical and evaluative skills
- Prior experience in content moderation, linguistic evaluation or data annotation is a plus
Education :
Minimum of a bachelor's degree in any field, e.g. Humanities, Linguistics, Mass Communication or related disciplines.
If you are interested in this opportunity, kindly fill out the form here : https://forms.gle/vwfevEgk9ZNmGoVu6