
Blurred lines

AI has achieved something more radical: reality and simulation have become technically indistinguishable, and the platforms that mediate both have no commercial incentive to help us tell them apart


Apar Gupta
Published 25.03.26, 07:56 AM

Jean Baudrillard’s provocation that the Gulf War “did not take place” was not a denial that hundreds of thousands died in Iraq due to armed conflict but rather a structural claim about mediation. The war experienced by the world’s publics, Baudrillard argued, was a television production that was narrated, sanitised, unmoored from the material suffering it depicted. The gap between event and representation had become the event itself, and war had been subsumed by its own image. Ultimately, the armed conflict served as content.

Almost 35 years later, generative AI and a world seething with armed conflict have made Baudrillard’s claim stronger still. What television began, large language models and deepfakes now perpetuate at scale. Mediation is no longer the passive affair of cameras pointing at events and anchors interpreting them through their biases. Today, the challenge to information integrity is all-enveloping, with most of us under the perpetual hypnosis of our smartphones. Entire events are being fabricated and evidence is being manufactured. As the WITNESS researchers, Shirin Anlen and Mahsa Alimardani, documented in their March 2026 analysis for Tech Policy Press, we have entered a phase where the tools designed to detect manipulation have themselves become instruments of manipulation. They call it “forensic cosplay” when, for instance, fabricated heatmaps and technical visualisations lend a veneer of scientific authority to predetermined conclusions. In one documented case, the viral “ERFI” thread claiming that a New York Times photograph from Tehran was AI-generated reached over 600,000 views; the analysis had been applied to a screenshot of an Instagram post, not the original image. Fact-checking, as is so often the case, arrived belatedly and did nothing to stem the virality of the underlying misinformation.


The deeper crisis is not that false content circulates as propaganda, which has always circulated in wartime, but that it circulates through infrastructure designed to maximise engagement, not truth. With trust in traditional news sources low, many, if not most, turn to social media for real-time updates. Here, the platforms are not innocent conduits caught in the crossfire; their very design encourages the disorder. Recommendation algorithms, autoplay functions and engagement-optimised feeds are architecturally incapable of distinguishing a verified report from a deepfake calibrated to trigger outrage. Worse, the deepfake is often algorithmically advantaged: it is designed to provoke precisely the emotional response that drives sharing, commenting and obsessive replaying, and the platforms profit either way.

An analysis of over 30 articles published by Tech Policy Press between 2021 and March 2026 reveals a consistent finding across conflict zones, from Gaza to Ukraine to Iran: the information environment has itself become a theatre of war, and social media platforms are its willing or negligent stage managers. Nusrat Farooq’s July 2024 analysis establishes the baseline: generative AI eliminates the need for the language skills or technical sophistication that characterised earlier influence operations. The Stanford Internet Observatory and Georgetown’s CSET have confirmed that there is no technical silver bullet against LLM-generated disinformation. Yet platforms have systematically dismantled the very infrastructure that might have helped: over the past three years, trust and safety teams have been gutted and, in some cases, even State-media labels have been removed. Under pressure from right-wing political interests in the United States of America, content moderation has been reframed as censorship by the very forces that benefit most from information chaos.

The consequence is a vicious cycle that Prithvi Iyer, synthesising WITNESS’s September 2024 report, describes through two dynamics. First, “plausible deniability”: real evidence can now be dismissed as AI-generated. Second, “plausible believability”: synthetic content confirming existing biases is accepted without question. Together, these dynamics do not merely pollute the information environment; they destroy the epistemic foundations on which democratic discourse depends. When everything could be fake, nothing needs to be believed, and the citizen retreats from institutional media into algorithmically curated echo chambers that tell them what they already want to hear.

The problem is not confined to any single conflict. It is structural, and it is global. But its Indian manifestation carries particular democratic dangers. The Bulletin of the Atomic Scientists has explicitly warned that deepfakes during India-Pakistan crises could cause “catastrophic misperception and miscalculation” between nuclear-armed States. When a fabricated video of a military chief admitting defeat circulates to hundreds of thousands before it is debunked, the window for escalation driven by public fury, political pressure, or military miscalculation is terrifyingly narrow. Modern armed conflict also provides cover for widespread censorship. During Operation Sindoor, over 1,400 URLs were blocked under Section 69A of the Information Technology Act; the pattern has continued over the past three weeks, with satirical and parody posts about the prime minister being blocked on social media. Preventive internet shutdowns, meanwhile, were imposed across Jammu and Kashmir. The State’s instinct, confronted with information disorder, is not to invest in media literacy or support independent verification but to reach for the blunt instrument of the blackout. The result is perverse: those cut off from institutional channels turn precisely to the trickles of rumour and misinformation that remain.

What would a serious response look like? First, legal liability for platforms that algorithmically amplify synthetic content during active hostilities: not the blunt instrument of a blanket ban, but a targeted obligation to degrade the recommendation amplification of unverified conflict content, a duty of care rather than of censorship. Second, public investment in verification infrastructure: media literacy programmes, open-source detection tools, and support for independent fact-checking organisations that are currently outgunned and underfinanced. Third, international legal frameworks that treat the weaponisation of the information environment during armed conflict as a matter of humanitarian law, not merely a question of platform governance.

Baudrillard wrote about simulation replacing reality. AI has achieved something more radical: reality and simulation have become technically indistinguishable, and the platforms that mediate both have no commercial incentive to help us tell them apart. Social media, powered by generative AI, has mounted an omnidirectional assault on epistemic coherence. The simulation is no longer produced by a few gatekeepers; it is produced by everyone, for everyone, optimised by algorithms that treat engagement as the sole metric of value. We must recognise that every war is now Baudrillard’s Gulf War, except that nobody even pretends to look for the truth.

Apar Gupta is the Founder Director of the Internet Freedom Foundation, New Delhi
