In late July and early August 2024, a knife attack at a children's event in Southport, in the United Kingdom, was followed by a second wave of damage — not at the scene but online. False claims about the attacker's identity, religion and motives spread rapidly on platforms such as X and Telegram. Far-right accounts and anonymous channels pushed the misinformation in the name of 'free speech', fuelling anger and mistrust.
Within days, towns across the UK saw riots, vandalism and clashes with the police. What should have been a moment for facts and solidarity became a frenzy of digital vigilantism. This was not a healthy debate but a real-world illustration of how unregulated speech online can escalate into physical harm, showing how fragile public order becomes when misinformation is allowed to travel unchecked.
The inability of traditional systems, such as courts and law enforcement agencies, to deal with such situations effectively and on time stems from the sheer volume of ‘free speech’ being generated online. It has overwhelmed the traditional system of law enforcement and control to the extent that these have become ineffective. But can the stress on public order be used as an excuse to put restrictions on freedom of speech? This right is, after all, a cherished ideal of any modern society.
Freedom of speech has been at the very core of any modern-day democracy. It guarantees articulation of ideas and thoughts without fear of retaliation, censorship or legal sanction. The idea took shape in the late-sixth century BCE and the early-fifth century BCE in Athens. However, this right, though fundamental, is not absolute, and societies have placed restrictions on this ‘human right’ related to pornographic, libellous, and seditious content, among others.
‘Freedom of speech and thought’ is considered such a fundamental principle that the level of freedom of speech started defining the degree of democracy in a society. It was all fine until social media entered the picture as a medium of communication.
With social media assuming such control over the lives of people in a democracy, anyone is able to talk or write about anything. This may seem like tremendous progress, but can we say that democratic fervour has, indeed, increased in society compared to when social media was not a medium of communication? Truth be told, it doesn’t feel that way.
One reason, of course, is that social media is like an echo chamber. Its algorithms do not offer a counterpoint; they comfort users by presenting similar-sounding thought processes. Designed, essentially, to please the user, they revalidate existing beliefs rather than challenge them.
Another reason is the presence of a large amount of content of all possible shades on any topic and of all imaginable qualities under the sun. There seems to be no proper method to distinguish among the good, the mediocre, or the downright ugly. There is no control on or screening of the mediocre drivel that is unleashed on various platforms every second. So one ends up giving weightage to ‘stories’ rather than facts. Experts have been replaced by influencers. And the only factor that seems to matter is the number of followers.
Policymakers are slowly recognising that online ‘free speech’ cannot remain an unregulated free-for-all. The European Union’s Digital Services Act now compels major platforms to reduce systemic harms, increase transparency around algorithms, and face penalties for non-compliance. The UK’s Online Safety Act 2023 imposes a statutory “duty of care” requiring platforms to manage both illegal and harmful content.
India, too, has begun moving in this direction through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which tighten due-diligence requirements for intermediaries, mandate faster takedowns once notified of illegal content, and alter the safe-harbour landscape they previously relied on. The proposed Digital India Act, currently under discussion, goes further by contemplating explicit obligations around algorithmic accountability, user harm, and age-gating for sensitive content.
However, none of these efforts seems very effective as the focus of regulatory authorities appears to be misdirected. Let us try to understand the ‘core’ issue here — the commercial model of social media companies.
Although there are many adverse fallouts, we will concentrate on the in-built toxic model of today’s social media. It is the present commercial model of social media that is responsible for the kind of mindless, shallow, and often skewed content that is exhibited.
Social media companies are run on vast data centres spread across different continents. Who pays for the massive infrastructure (compute, storage, connectivity), humongous electricity bills, and fancy salaries of employees? Social media users pay for these: social media companies have turned users into a commodity. But social media users are blissfully unaware of this and revel in the feeling of having the world at their fingertips for free.
Social media companies have perfected the art of turning the user into a commodity. Typically, the model adopted is the following: the higher the engagement (views, likes, shares) of any article/post/video, the higher the rewards. That is precisely why one finds people randomly dancing on busy streets to cater to viewers looking for something beyond the 'ordinary' or the mundane. This has led to a set pattern of 'shock and awe' to draw eyeballs. The idea now is no longer to present an idea or a thought but to create a sensation. Why? Because content creators are rewarded on the basis of the traffic they draw, not on the basis of the soundness of their content, the reward model itself incentivises 'shock and awe' content-generation. These companies also harvest user data, which allows them to send advertisements to users depending upon profile, geographical location and tastes.
This personalised, targeted advertisement is far more effective than generalised advertisements. Global corporates have thus started ploughing their advertisement budgets through social media companies. In short, these companies have become advertisement moguls.
In its initial days, social media was envisioned as a digital town square but the intersection of unrestrained free speech with aggressive monetisation has engineered a breeding ground for toxicity. When platforms prioritise engagement metrics to maximise revenue for the content creator, ‘freedom’ to speak often transforms into a race for the most extreme and ludicrous thoughts.
The solution lies not in 'regulating' freedom of speech but in a subtle dissociation of 'freedom of speech' from the 'right to earn money'. In fact, social media policy does not let official handles of government make money on the basis of views and likes. The same principle should be applied to channels trying to give opinions and points of view. Another way could be to 'demonetise' content that is reported against for being libellous. This will ensure that no content is created simply to earn money while hiding behind the protective shield of 'freedom of speech'. This automatic reward for content on the basis of views and likes needs a thorough look, especially for channels that claim to propagate views, opinion and news.
To navigate this digital crisis, the way ahead lies in a hybrid model that shifts the focus from content policing to algorithmic monetary reward policy. While a more stringent legal ambit is necessary, it should not target individual speech — which risks State-backed censorship — but rather the business models that profit from harm. Ultimately, the solution is not to silence the user but to regulate the megaphone, ensuring that the freedom to speak no longer includes the right to earn money through engineered outrage.
Rajeev Kumar is a former DGP of West Bengal Police. Views expressed in the article are personal