A leopard prowling inside a shopping mall, a tiger sipping liquor beside a forest road, an elephant tumbling from a cliff onto a truck below. None of it happened. Yet in 2025, millions watched, shared and reacted as if it had.
The year 2025 will also be remembered for the rapid rise of artificial intelligence (AI)-generated and heavily manipulated wildlife videos across India, content that blurred the line between reality and fabrication on an unprecedented scale.
Experts warn the videos do more than misinform. They invent animals that behave like humans, appear in places they would rarely enter, and attack in ways they seldom do.
The problem is no longer confined to harmless hoaxes. In city after city in 2025, fake wildlife videos have forced forest departments to issue repeated clarifications. In some cases, they have triggered police action as well.
In Pune, where leopard sightings have been confirmed over the past month in areas such as Aundh, Bavdhan, Pashan-Sutarwadi and near the airport, the Maharashtra forest department recently issued a warning.
Anyone creating or knowingly circulating fake images or messages about leopards in the city, officials said, could face criminal cases.
The caution came after the department found itself repeatedly responding to AI-generated images of leopards in urban neighbourhoods.
That pattern has repeated itself across the country.
In October, residents of Lucknow woke up to viral posts claiming leopards were roaming through Ashiana, Ruchi Khand and Gomti Nagar. The images looked convincing enough to prompt alarm and an immediate response from the forest department. Officials searched the areas for signs such as pugmarks and reviewed footage from actual CCTV cameras.
They found nothing.
Closer examination revealed the visuals were entirely AI-generated. A youth was later arrested for editing and spreading the clips to make them appear real.
Authorities confirmed there were no leopards in the city at all.
Earlier this year, a similar wave of panic followed a viral clip showing a leopard inside Mumbai’s Phoenix Marketcity Mall. The video ricocheted across social media before experts confirmed it was a deepfake. No animal had entered the mall.
“What complicates this issue is that many of these fake images claim animal presence in areas where leopards or tigers have genuinely been sighted,” said Ravikant Khobragade, a wildlife veterinarian and head of the Rapid Rescue Team at Tadoba Andhari Tiger Reserve.
The scale and audacity of such fabrications have been growing.
Around 31 October, a clip went viral showing a man casually petting a tiger and offering it alcohol near Pench Tiger Reserve. Forest officials confirmed the video was entirely fabricated.
But the idea it planted, that wild tigers can be approached and fed, worried experts.
These videos, Khobragade told The Telegraph Online, misrepresent how wild animals behave, suggesting, for instance, that a leopard would wander into a backyard, be chased away by a domestic cat, or that a tiger would sit calmly near humans drinking alcohol.
“Such portrayals project human traits onto animals and reinforce ideas that are far from reality,” he said. “Tigers do not seek human contact and cannot be fed or approached that way. Each viral clip, even when debunked, leaves behind a residue of misunderstanding.”
Forest departments do issue press releases and try to trace those responsible, Khobragade said. Cases can be registered, videos investigated and, if uploaded online, removed. But once content spreads through private messaging platforms like WhatsApp, tracing the source becomes far harder.
“That is where the damage travels faster than the correction,” he said.
The trend is not restricted to big cats.
On 9 December, a hyper-realistic video showing an elephant falling from a cliff onto a moving truck spread rapidly online. Users claimed it depicted a tragic accident involving India’s strained elephant corridors.
It was entirely AI-generated. No such incident had taken place anywhere in the country.
Satwik Vyas, divisional forest officer of Dumka in Jharkhand, described the challenge as one of scale and speed. AI content, he said, is becoming increasingly imaginative and niche, reaching a point where many viewers can no longer tell what is real.
The forest department does not yet have the tools or dedicated systems to counter it effectively, especially in regions that are not designated national parks but still have wildlife presence, he said.
“In places like Bokaro, Hazaribagh and Dumka, where elephants and leopards move through human landscapes, a 30-second sensational reel travels further than nuanced explanations,” he told The Telegraph Online.
Panic, he said, has consequences beyond fear: people flee homes, creating opportunities for theft, while communities living alongside wildlife become more sceptical and anxious.
Arrests and takedowns are possible, Vyas said, but enforcement is complex.
For example, a fake bear video traced to a person in another state led to a non-bailable case under IT law, with investigators having to work through shifting IP addresses and VPNs. Jharkhand, he added, does not yet have a dedicated social media cell to respond quickly.
How to tell if a video is AI-generated
Conservation groups say public vigilance is now as important as official action.
Wildlife SOS, one of the largest wildlife organisations in South Asia, has urged viewers to slow down and scrutinise what they see: watch eyes and blinking patterns, check whether movement looks unnaturally smooth, and look for warped backgrounds or mismatched shadows.
Sensational clips with no credible source, they warn, are often the biggest red flags.
Another giveaway is the absence of a specific location, with only a vague descriptor offered instead.
The organisation also cautions that sound and context are increasingly being weaponised. In many viral clips, voices are overly clean, emotions do not match facial expressions, or background noise loops in ways that real environments do not.
Wildlife SOS urges reverse-searching key frames and, where possible, using AI-detection tools that analyse visual and audio patterns for signs of manipulation. Above all, they stress trusting the uneasy feeling many people experience when something looks “off.”
That instinct, they say, is not paranoia but pattern recognition, an early warning system that technology has not yet learned to defeat.
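For readers comfortable with a little code, the reverse-search step Wildlife SOS recommends can be partly automated. The sketch below is a minimal illustration, not a tool endorsed by the organisation: it assumes OpenCV is installed and that the suspect clip has been saved locally under a placeholder name, and it simply pulls a few evenly spaced frames that can then be uploaded to a reverse image search engine or an AI-detection service.

```python
# Illustrative sketch: extract a few evenly spaced frames from a downloaded clip
# so they can be run through a reverse image search or an AI-detection tool.
# Assumes OpenCV (pip install opencv-python); the file name is a placeholder.
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical local copy of the suspect video
FRAMES_TO_SAVE = 5              # a handful of key frames is usually enough

cap = cv2.VideoCapture(VIDEO_PATH)
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

if total_frames > 0:
    # Pick evenly spaced frame indices across the clip.
    step = max(total_frames // FRAMES_TO_SAVE, 1)
    for i, frame_idx in enumerate(range(0, total_frames, step)[:FRAMES_TO_SAVE]):
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if ok:
            # Each saved image can be uploaded to a reverse image search engine.
            cv2.imwrite(f"frame_{i}.jpg", frame)

cap.release()
```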