In this role, you will analyse, label, and categorise text content to help train AI models to recognise and respond appropriately to sensitive, harmful, or policy-violating language.
Your work will directly contribute to building safe, inclusive, and responsible AI systems across multiple languages and cultural contexts.
Languages Required
We are hiring for the following languages:
English, Hindi, Bengali, Marathi, Telugu, Tamil, Gujarati, Urdu, Kannada, Malayalam, Punjabi (Panjabi), Assamese, Oriya (Odia), Sindhi, Sanskrit, Kashmiri, and Nepali.
Key Responsibilities
Annotate and categorise text data according to defined Trust & Safety policies and content guidelines.
Identify and label harmful or policy-violating content (e.g., hate speech, sexual content, self-harm, violence, misinformation, discrimination).
Maintain linguistic and cultural accuracy in all labelling decisions.
Participate in calibration and quality review sessions to ensure consistent judgment.
Suggest improvements to annotation instructions and examples.
Meet productivity and quality targets within defined timelines.
Required Skills & Qualifications
Bachelor’s degree in Linguistics, Literature, Mass Communication, Journalism, or a related field.
Native or near-native proficiency in one or more of the listed languages, along with good English comprehension.
Strong analytical, comprehension, and decision-making skills.
Ability to follow complex Trust & Safety guidelines with consistency.
Comfortable reviewing sensitive or potentially disturbing content.
Prior experience in data annotation, content moderation, translation, or linguistic QA is preferred.
Familiarity with annotation tools or content labelling platforms is an advantage.
Location: India