
Social media platforms must take down unlawful content within three hours, govt orders

The earlier deadline for taking down such content was within 36 hours of being notified about it; new rule could be a challenge for X, Meta

Our Web Desk, Reuters Published 10.02.26, 07:13 PM
Representational image: Shutterstock

The government said on Tuesday social media companies would have to take down unlawful content within three hours of being notified about it, tightening an earlier 36-hour timeline in what could be a compliance challenge for Meta, YouTube and X.

The changes amend India's 2021 IT rules, which have already been a flashpoint between Prime Minister Narendra Modi's government and global technology companies.


The amended rules also relaxed an earlier proposal that would have required platforms to visibly label AI-generated content across at least 10 per cent of the content's surface area or duration, instead mandating that such content be "prominently labelled".

The new regulations will take effect from February 20.

The amendments define "audio, visual or audio-visual information" and "synthetically-generated information", covering AI-created or altered content that appears real or authentic. Routine editing, accessibility improvements, and good-faith educational or design work are excluded from this definition.

Key changes include treating synthetic content as 'information': AI-generated content will be treated on a par with other information when determining unlawful acts under the IT rules.

User grievance redressal timelines have also been shortened.

The rules require mandatory labelling of AI content. Platforms enabling the creation or sharing of synthetic content must ensure such content is clearly and prominently labelled and embedded with permanent metadata or identifiers where technically feasible, the government said.

Calling for a ban on illegal AI content, the government said platforms must deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, non-consensual, or related to false documents, child abuse material, explosives or impersonation.

Intermediaries cannot allow the removal or suppression of AI labels or metadata once applied, it said.

The tighter timeline marks the latest escalation in India's efforts to control online speech, with a takedown regime that has drawn criticism from digital rights advocates and prompted clashes with companies including Elon Musk’s X.

Facebook-owner Meta declined to comment on the changes, while X and Alphabet's Google, which operates YouTube, did not immediately respond to requests for comment.

The rules add to mounting global pressure on social media companies to police content more aggressively, with governments from Brussels to Brasilia demanding faster takedowns and greater accountability.

India's IT rules empower the government to order the removal of content deemed illegal under various laws, including those related to national security, public order and sexual offences.

The country has issued thousands of takedown orders in recent years, according to platform transparency reports.
