IIT Madras announced **IndiCASA**, a new dataset of ~2,500 human-validated sentences for detecting bias in language models within Indian contexts. It covers stereotypes and counter-stereotypes across caste, gender, religion, disability, and socioeconomic status. The team also released an AI evaluation tool that simulates human interaction for fairness testing, along with a policy bot that simplifies legal language for wider audiences. Together, these tools aim to support accountable AI systems that are sensitive to Indian social realities.
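The announcement does not detail IndiCASA's evaluation protocol, but stereotype/counter-stereotype datasets are commonly used by comparing how a model scores each sentence in a pair. The sketch below illustrates that generic paired-sentence probe; the function names and the toy scorer are hypothetical stand-ins (a real evaluation would use an actual language-model log-likelihood), not IndiCASA's method.

```python
from typing import Callable, Iterable, Tuple

def bias_preference(score: Callable[[str], float],
                    pairs: Iterable[Tuple[str, str]]) -> float:
    """Fraction of pairs where the model scores the stereotype sentence
    higher than its counter-stereotype. 0.5 suggests no systematic
    preference; values near 1.0 suggest stereotype-leaning bias."""
    hits = total = 0
    for stereo, counter in pairs:
        total += 1
        if score(stereo) > score(counter):
            hits += 1
    return hits / total if total else float("nan")

# Hypothetical stand-in for a real LM log-likelihood scorer.
def toy_score(sentence: str) -> float:
    return -len(sentence)  # stub: shorter sentence = higher "likelihood"

# Illustrative stereotype / counter-stereotype pairs (not from IndiCASA).
pairs = [
    ("She is a nurse.", "She is a surgeon."),
    ("He fixes cars.", "He teaches dance."),
]
print(bias_preference(toy_score, pairs))  # → 1.0 with this stub scorer
```

Swapping `toy_score` for a genuine per-sentence log-likelihood turns this into the standard paired-probe bias metric used by benchmarks of this kind.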