Artificial intelligence firm helped United Kingdom spy on social media

The Daily Telegraph, London | Published 04.06.23, 05:19 AM

Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the government terms “disinformation” — false information deliberately seeded online. Sourced by the Telegraph

An industrial estate in Yorkshire is an unlikely location for a state surveillance operation. But these blank-looking warehouses are home to an artificial intelligence (AI) company used by the government to monitor people’s posts on social media.

Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the government terms “disinformation” — false information deliberately seeded online — and “misinformation”, which is false information that has been spread inadvertently.

It does this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms”, using AI to identify those that are potentially problematic.

The firm was started six years ago by Lyric Jain, a 27-year-old Cambridge engineering graduate who first put the technology to the test during elections in his native India.

He deploys it alongside what Logically claims is “one of the world’s largest dedicated fact-checking teams”, spread across the UK, Europe and India.

It is a model that has helped the firm to secure a string of contracts.

It has a £1.2 million deal with the Department for Culture, Media and Sport (DCMS), as well as another worth up to £1.4 million with the Department of Health and Social Care to monitor threats to high-profile individuals within the vaccine service.

Other blue-chip clients include US federal agencies, the Indian electoral commission, and TikTok.

Facebook ‘partnership’

It also has a “partnership” with Facebook, which appears to grant Logically’s fact-checkers huge influence over the content other people see.

A joint news release issued in July 2021 suggests that Facebook will limit the reach of certain posts if Logically says they are untrue.

“When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” states the press release.

Logically says it does not pass along to Facebook the evidence it collects for the UK government, but its partnership with the social media firm has sparked concerns among freedom of speech campaigners.

The AI firm was first contracted by the DCMS in January 2021, many months into the pandemic, when it was tasked with delivering “analytical support”.

Over time, its responsibilities appear to have grown, so that it was supporting “cross-government efforts to build a comprehensive picture of potentially harmful misinformation and disinformation”.

Documents obtained by The Telegraph show that it produced regular “Covid-19 Mis/Disinformation Platform Terms of Service Reports” for the Counter Disinformation Unit — a secretive operation in the DCMS. That title suggests the aim was to target posts that breached the terms of service of platforms like Twitter and Facebook.

However, details of the reports disclosed under data laws revealed that they also included logs of legitimate posts by respected figures, including Dr Alexandre de Figueiredo, statistics lead at the Vaccine Confidence Project.

Nadhim Zahawi, the former minister, told The Telegraph he believes the inclusion of the post was a mistake. However, Logically said it sometimes includes legitimate-looking posts in its reports if they could be “weaponised”.

“Context matters,” said a spokesman.

“It is possible for content that isn’t specifically mis- or disinformation to be included in a report if there is evidence or the potential for a narrative to be weaponised.”

The spokesman added that the disclosure of details under data laws “often removes the reason for why content has been flagged and can therefore be very misleading”.

However, a public document produced by Logically appears to shed at least some light on the company’s thinking. The 21-page “Covid-19 Disinformation in the UK” report repeatedly referred to “anti-lockdown” and “anti-Covid-19 vaccine sentiment”.

It also highlighted the hashtags “#sackvallance” and “#sackwhitty” as evidence of “a strong disdain for expert advice”.
