
How AI deepfakes are turning trademarking into a celebrity survival strategy

Mathures Paul Published 03.05.26, 11:57 AM
Taylor Swift

Taylor Swift and her celebrity friends are entering their trademark era. Following in the footsteps of Matthew McConaughey, Swift has filed trademark applications to protect her identity.

The pop star’s company, TAS Rights Management, submitted three new applications to the US Patent & Trademark Office. Two relate to her voice — specifically the phrases “Hey, it’s Taylor Swift” and “Hey, it’s Taylor”. The third covers a popular image of the singer from her recent Eras tour, showing her in concert, holding her pink guitar and dressed in a shimmering bodysuit.


Advances in AI have made it remarkably easy to synthesise a voice from a short clip alone — something that once required lengthy recordings and months of work. Unsurprisingly, celebrities are growing increasingly concerned about the unchecked use of their image and voice on AI platforms.

Swift is not alone. In January, McConaughey trademarked his identity to combat AI-generated copycats. The Oscar-winning actor, best known for his role in Dallas Buyers Club, is recognised for the line “all right, all right, all right” — and has now received approval for eight trademark applications relating to his voice and likeness. These include audio of him delivering that celebrated line from his 1993 breakthrough film Dazed and Confused, a seven-second video clip of him standing on a porch, a three-second clip of him sitting in front of a Christmas tree, and audio of him saying “just keep livin’, right?”.

Lawyers for the actor said the filings were designed to prevent apps and individuals from using AI to mimic his voice or likeness without permission. “In a world where we’re watching everybody scramble to figure out what to do about AI misuse, we have a tool now to stop someone in their tracks or take them to federal court,” said Jonathan Pollack, one of his attorneys.

Beyond Hollywood

Celebrities from Oprah Winfrey to Nigella Lawson have fallen victim to deepfake images, video, and audio created using AI.

The concern is not without cause. Elon Musk’s chatbot Grok has previously been linked to the creation of manipulated and sexualised images, underscoring how easily both public figures and private individuals can be exploited.

Nor is this a purely American phenomenon. Motoring journalist and Who Wants to Be a Millionaire? host Jeremy Clarkson has taken a similar step in the UK, filing trademarks over his photographic likeness after deepfake scammers used his face and voice to promote cryptocurrency schemes. “It’s for perfectly good reasons — it’s not just my ego running amok,” he said.

Model Katie Price, meanwhile, became the first British star to trademark her own AI image, signing a six-figure deal with British company OhChat to revive and trademark her former alter ego, Jordan, as an AI version of herself.

The British government announced in January that it would fast-track new laws criminalising the creation of sexual deepfakes under the Online Safety Act. Prime minister Keir Starmer was unequivocal: “It’s unlawful. We’re not going to tolerate it. It’s disgusting. X needs to get their act together and get this material down.”

While right-of-publicity laws offer some protection against the unauthorised use of a famous individual’s likeness, trademark filings can provide an additional, more robust layer of defence — one celebrities are increasingly keen to exploit.

As intellectual property attorney Josh Gerben noted, Swift is not merely seeking to trademark a catchphrase; she is pursuing federal protection for the sound of her own voice saying it. “Taylor’s trademark filings suggest a broader shift in how celebrities are applying trademark law to fight back against AI,” he wrote.

The timing is significant. In December, US president Donald Trump signed an executive order seeking to limit individual states from enforcing their own AI legislation. It’s a move that could undermine protections such as Tennessee’s Elvis Act, which safeguards artistes’ voices and likenesses.

The problem, of course, is not new. In 2023, Scarlett Johansson’s attorney demanded that an AI app stop using her likeness in an advertisement. The following year, she called out OpenAI for using a voice “eerily similar” to hers for their GPT-4o chatbot, despite having declined the company’s request.

Tom Hanks, meanwhile, has raised the alarm over “multiple ads over the Internet falsely using my name, likeness, and voice promoting miracle cures and wonder drugs”. Breaking Bad star Bryan Cranston has voiced similar concerns about OpenAI’s Sora 2 product and its ability to replicate celebrities’ likenesses without permission.

Earlier this year, around 800 creatives — including Johansson, Cate Blanchett, Jodi Picoult, and Joseph Gordon-Levitt — backed a campaign called Stealing Isn’t Innovation, accusing tech firms of using American creators’ work to “build AI platforms without authorisation or regard for copyright law”.

In India, the issue is equally pressing. The Bombay High Court granted the late Asha Bhosle ad-interim protection of her personality and moral rights, restraining AI platforms from cloning her voice or exploiting her image. Anil Kapoor was among the early movers, approaching the Delhi High Court in 2023 to seek protection of his name, voice, signature, and image rights.

Last month, YouTube unveiled a deal with several talent agencies to open up its proprietary deepfake detection tool to celebrities and entertainers, making it easier to request that unauthorised likenesses be removed from the platform — a small but meaningful step in what is fast becoming a defining legal battleground of the AI age.

