There is a growing buzz around Artificial Intelligence. New technologies usually attract strong proponents who indulge in hype to promote their positive aspects; in this case, the advocates are the technology companies of Silicon Valley. There are sceptics too, who focus on labour-market disruptions and other dystopian possibilities, evident in the many films emerging from Hollywood. The actual outcomes typically lie somewhere in the middle. As far as AI is concerned, the familiar discourses of hype and gloom are both widely observable. The hyped-up picture suggests that in the not-too-distant future people will not have to work unless they choose to, and can pursue their life goals at leisure while technology efficiently takes care of the rest. At the other end of the debate are uncomfortable suggestions that machines might become increasingly like human beings: learning on the job, upgrading their skills, taking autonomous decisions that lead to actions, and ultimately overwhelming the human species are all seen as distinct possibilities.
Some disturbing questions arise. If AI agents become increasingly human, would they constitute a nascent form of non-organic life? Could they go rogue and create mischief? Would that form of life be the next stage of evolution? Would humans be overtaken, first by losing their jobs to machines and then by becoming redundant as a species? All these changes will take time. The critical uncertainty is not whether they will happen but how fast, and when. In all this, there are questions not only of technology and economics but also of ethics, regulation and politics. The big issue is the possibility of AI evolving into an existential threat to human beings.
Machines can be trained. That is how a computer is made to do routine, usually repetitive, tasks. A massive volume of data, typically text and code, is fed into it from as many sources as possible. Incidentally, this process is extremely energy-intensive and is estimated to emit large quantities of carbon dioxide, thereby adding to climate risks. These ‘trained’ machines can perform a variety of tasks, such as translation, summarising text, creating content, and answering queries. Generative AI, as such systems are called, can help augment human effort. These AI agents can put together text, speak in a human voice, create images, and even construct videos. The machines learn by analysing patterns in a very large set of existing data. Most of us have already started using ChatGPT or some other version of generative AI.
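What “learning patterns from data” means can be illustrated, in deliberately toy form, by a bigram model: it counts which word tends to follow which in a body of text, then generates new text from those counts. This is a sketch of the statistical principle only; real generative AI systems are vastly larger neural networks, and the function names and the tiny corpus here are invented for illustration.

```python
from collections import defaultdict, Counter
import random

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # no observed continuation for this word
        choices, counts = zip(*options.items())
        out.append(rng.choices(choices, weights=counts)[0])
    return " ".join(out)

corpus = "the machine learns patterns the machine answers queries the machine creates images"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

More data means better counts and more plausible continuations, which is why training corpora for real systems run to trillions of words.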
There is another, more sophisticated, form of AI under development: Agentic AI. Here, the AI agent can make autonomous decisions and execute actions towards specific goals without constant human intervention. It can automate complex workflows, manage financial risks, and optimise operations. This AI has an important feature: unlike previous technologies, it can decide on its own and act on that decision. This is what philosophers call ‘agency’, a trait the human species is supposed to possess. One can well imagine the tasks that can be assigned to such an AI agent. It is never sick or tired and works at a constant pace far greater than that achievable by any human enterprise. Can it make mistakes? Usually not; but scientists have found that machines sometimes get things wrong or fail to deliver comprehensible solutions. For an AI agent, this is called hallucination. Since autonomous machines decide on their own, they could come up with a new solution not hitherto known or tried out; but such a solution could also be a hallucination, and the two would be difficult to distinguish.
A trained machine can be duplicated instantly into a million machines simply by copying its information. Humans have to be taught individually, and even then their actual learning outcomes are uneven. Networked computers learn exactly the same thing in an instant. The bottom line: a machine can have agency, and it can potentially outpace human thought.
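The point about instant duplication can be made concrete with a toy sketch: once a model’s learned state is just data, copying that data reproduces the ‘learning’ exactly, any number of times. The class and the weights below are invented stand-ins, not any real system.

```python
import copy

class TrainedMachine:
    """A stand-in for any trained model: its 'knowledge' is just data."""
    def __init__(self, weights):
        self.weights = weights  # the learned parameters

    def answer(self, features):
        # A trivial 'inference': a weighted sum of the input features.
        return sum(w * v for w, v in zip(self.weights, features))

# One machine is trained (here we simply pretend these weights were learned).
original = TrainedMachine([0.5, -1.0, 2.0])

# Duplicating the learning is nothing more than copying the data.
clones = [TrainedMachine(copy.deepcopy(original.weights)) for _ in range(1000)]

# Every clone behaves identically to the original, from the first instant.
query = [1.0, 2.0, 3.0]
assert all(c.answer(query) == original.answer(query) for c in clones)
```

No classroom, no curriculum, no individual variation: the thousand copies are indistinguishable from the original, which is precisely what makes machine learning scale in a way human learning cannot.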
The ability of a machine to act autonomously, and learning that can be duplicated instantly, are issues that require serious reflection. A nascent form of non-organic life appears to be possible on this planet. That is quite astonishing. If these entities can learn and act on their own, their capabilities would far exceed those of humans. Could we be overwhelmed by them, the human species ultimately rendered redundant by the new form of non-organic life? According to most experts, this state of AI is not yet around the corner, although AI is developing at an exponential rate. A span of five to ten years could make it happen.
There are more concerns about the present, emanating from the advent of AI. The first is the nature of an economic system that is moving from brick and mortar to virtual spaces. Data become the new input of value. The giant tech companies use these data but do not pay for them; indeed, we as users pay them for the data we create. Take Amazon, for instance. The company’s wealth is fabulous, yet it produces nothing: it is a platform for buyers and sellers to come together, and it charges sellers something like 40% for the privilege. That is rent rather than profit; Amazon merely makes its platform available for use. Nor is it just about the money earned. It is about the control of a vast amount of data that can be used to influence us, control our thoughts, and nudge us to behave in desired ways. Such companies do not like to be taxed, but they are even more apprehensive of regulation by government. Little wonder, then, that the tech bros fell out with the Joe Biden administration in the United States of America, which favoured tighter regulation, and rallied around Donald Trump, even though the latter is no admirer of science and technology. A new oligarchy is emerging, comprising owners of tech companies who can make and break governments. Wealth creation and ownership of the new form of capital, data, are likely to be markedly different from what was observed in the nineteenth and twentieth centuries.
As the use of AI expands in production, in marketing, and in the provision of important services in finance, health, and education, employment will perforce be adversely affected. Many jobs will disappear, especially white-collar ones. If Agentic AI and Generative AI mature into Artificial General Intelligence, the job losses will be very heavy. On the other hand, new jobs will open up in the AI sector, but only for a limited few who are outstanding in intellectual prowess and flexible enough to learn about, and adapt to, frequent changes in the environment. Some jobs will disappear fast, others more slowly. A major socio-economic question: what happens to the vast pool of unemployed, or rather unemployable, people?
Despite the ambiguities about the final outcome and the glide path to that destination, AI constitutes an existential risk, along with nuclear annihilation and climate change. AI could well be the last innovation humans achieve — the final solution to all our woes. The innovators are looking for ultimate control of our minds. The new messiahs of a new religion?
Anup Sinha is former Professor of Economics, IIM Calcutta