Government Revises IT Rules, Sharpens Focus on AI-Generated Content
Mandatory labelling, faster takedown timelines, and stricter platform accountability aim to curb deepfakes, NCII, and other unlawful online material.
India, Feb 11: The Centre on Wednesday notified amendments to the Information Technology (IT) Rules, formally placing artificial intelligence (AI)-generated material within a regulatory framework and introducing stricter compliance requirements for both users and digital platforms.
Scheduled to take effect from February 20, the revised rules mandate clear labelling of AI-created or altered content and require intermediaries to implement technical mechanisms to verify and highlight such material. Social media companies must also periodically inform users about AI-related regulations.
In a significant shift, the government has reduced the window for removing flagged unlawful content to just three hours from the earlier 36-hour deadline. Platforms must address user grievances within seven days instead of 15, while non-consensual intimate imagery (NCII) must be taken down within two hours of notification.
Officials from the Ministry of Electronics and Information Technology (MeitY) said the tighter timelines were introduced after repeated instances of harmful posts going viral within hours. The measures are intended to tackle the growing circulation of deepfakes, child sexual abuse material (CSAM), and privacy-violating content.
The amended rules broaden the definition of unlawful information to include material threatening national sovereignty, public order, decency, or friendly relations with foreign states, along with content amounting to defamation, contempt of court, or incitement to offences.
Industry stakeholders acknowledged improvements in the revised framework but flagged implementation challenges. Nasscom vice-president of policy Ashish Aggarwal noted that while the updated rules sensibly focus on synthetic content designed to mislead, the shortened compliance timeline may require closer coordination between government and industry to avoid unintended violations.
The guidelines no longer insist on fixed-size AI disclaimers for visuals and audio. Instead, platforms must ensure that labels are “prominent” and carry embedded metadata for quick identification. Routine edits made in good faith that do not materially alter content have been exempted.
The AI provisions primarily target significant social media intermediaries with over five million registered users in India, though officials clarified that all technology firms offering AI-enabled services must incorporate disclaimers where applicable. This could bring a wide range of tools, from chatbots and image generators to enterprise AI software, under regulatory scrutiny.
Legal experts cautioned that determining whether AI-generated content is indistinguishable from reality introduces subjective standards that may be difficult to translate into engineering protocols.
Additionally, intermediaries are required to detect and prevent the posting of AI-generated material involving CSAM, NCII, falsified records, or privacy violations. Authorities emphasised that simply attaching an AI disclaimer will not shield offenders from action, signalling a tougher enforcement stance as India moves to regulate the rapidly evolving digital ecosystem.