Ever since human beings learnt to give spoken language a written form, the way they expressed themselves and shared ideas has changed dramatically. Ideas in written form leave a lasting impression, since texts lend permanence to fleeting words, thoughts and emotions. Kings thus had their edicts engraved across kingdoms; churches had scriptures canonised. But access to texts, and the ability to read them, was reserved for the privileged.
However, things changed drastically with the invention of the printing press, which transformed the way knowledge was stored and disseminated. With the rise in literacy, reading was no longer the preserve of the privileged, and society saw itself transformed by ubiquitous and inexpensive printed texts. Martin Luther’s German translation of the Bible became the world’s first bestseller and eventually changed Christianity; the printing press also catalysed the Protestant Reformation.
With time, texts and the ideas encoded in them began to bring sweeping changes to almost every aspect of life. From the Renaissance to revolutions, ‘printed ideas’ played an important role in making people think in new ways. Even the birth of the modern nation-state is attributed to print capitalism, which helped communities imagine themselves into nations. As printed material proliferated beyond the ambit of scripture and religion, the relation between text and truth lost its biblical finality.
But text is not used only to reveal truth or state facts; it is also used to conceal or distort them. Another technological breakthrough, this one in the 21st century, has made it even harder to sift truth from falsehood.
In November 2022, OpenAI, a company working in the field of artificial intelligence, publicly launched ChatGPT, an AI-powered chatbot that interacts with users in an intuitive, conversational way, serving up detailed, written information on almost any topic. Its makers trained it on huge amounts of data, and it also learns from users’ feedback.
AI-powered voice assistants have been around for almost a decade. Siri, Alexa and their siblings have become part of our lives without causing a revolution. ChatGPT, however, is reckoned to permanently transform various aspects of our life, from the way business is conducted to how policies are framed and education is accessed. Some experts are even of the opinion that it will do the unimaginable: make Google redundant. Yet ChatGPT is not another Google. What differentiates it from Google is that ChatGPT does not look for information on the internet; rather, it generates content based on the data it was trained on. What is interesting is that the text it produces has no author, yet it presents itself with a semblance of textual authoritativeness. One wonders about the consequences. Writers and authors worry that chatbots will soon replace them.
The content generated by the chatbot does not mention its sources. Is it then offering a cleverly plagiarised text? To make itself convincing, it also blurs the distinction between fact and opinion, news and views. Another major concern is that it can write convincingly about things that never took place or do not exist at all. This means it can instantaneously generate fake content based on the preferences and commands of the user. The highly intuitive chatbot thus has the potential to open the floodgates of fake news. Moreover, ChatGPT is no Siri or Alexa, whose spoken replies disappear as soon as they are uttered. Texts are far more enduring, influential and powerful. In today’s polarised world, such chatbots may serve as manufacturers of ‘facts’, which would then be shared in an organised way to mould and manipulate opinions and choices.
The creators of ChatGPT are aware of these drawbacks and are struggling to find a solution. Until they do, we should remind ourselves that not all that is in print is fact.
Nirupam Hazra is Assistant Professor, Department of Social Work, Bankura University