The Ministry of Electronics and Information Technology has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 that would require social media platforms to ensure that users declare any AI-generated or AI-altered content. What is more, the duty to identify what has been manipulated will no longer fall solely on users. Large AI providers, such as OpenAI, Google and Anthropic, will also be treated as intermediaries, accountable for whatever passes through their systems. The urgency is justified. Deepfake impersonation has become a thriving cottage industry for criminals and political mischief-makers. In several instances, reputations, livelihoods and even personal safety have been harmed by AI-generated content. So much so that no less than the Prime Minister highlighted the crisis of deepfakes on Independence Day. Several celebrities have also approached the courts to secure protection for their personality rights. Courts have responded to such infringements and threats with injunctions, have expanded personality rights, and have had strong words for platforms that drag their feet. But India cannot rely forever on improvised remedies stitched from older laws written for a friendlier internet. As such, the draft is a timely intervention.
Yet the proposed draft raises significant concerns. For one, it makes no attempt to distinguish malicious deception and deepfakes from harmless creativity. The regulatory hammer can fall on both a scammer and, say, a school student editing a picture with a friend. There is also the danger of expanded surveillance being weaponised to silence dissent. The line between protection and control can blur quickly, especially when the State decides what counts as harmful. Other countries offer better models that India can emulate. The European Union requires metadata tagging of AI content while placing the burden on platforms to assess context and risk. China demands visible warnings only on synthetic media that could influence public opinion. Both approaches at least acknowledge that not all AI content is a threat. A smarter Indian framework should define deepfake offences narrowly, protect satire and genuine artistic work, and impose stricter obligations only on those who manipulate facts to cause harm. Platforms can and should tag risky content, and deepfakes deserve regulation. But over-regulation may push deception further into the shadows while dimming the bright, creative side of AI.





