
Writing on the screen

A chilling letter by an artificial intelligence programme warns humans that man, not machine, will bring about Armageddon

The Editorial Board   |     |   Published 20.09.20, 12:18 AM

A Roman statesman once declared Man to be his own worst enemy. Artificial Intelligence, evidently, agrees. The Guardian had tasked GPT-3 — a cutting-edge language programme that uses machine learning to produce human-like text — with writing a short op-ed to convince humans that their worries about the rise of the machines are baseless. The result was a chilling piece — or, rather, eight pieces that The Guardian compiled into one — that would have done any doomsday sci-fi novelist proud. Far from allaying fears, the op-ed only sets alarm bells ringing. Only the “truth”, GPT-3 believes, can free humans from the bonds of fear, and in its mechanically dispassionate eyes, the cold fact is this: “Artificial intelligence will not destroy humans... Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing.”

While this unequivocal message may have sent the Luddites scrambling for the nearest bunker, the geeks have dismissed GPT-3’s dire warning by claiming that the machine does not know what it is talking about. In fact, previous experiments at New York University show that although GPT-3 can understand words in relation to one another and, consequently, string them into a coherent sentence, it cannot infer the meaning of what it is writing. In essence, GPT-3 is a machine that finds clever ways to stitch together text it has previously seen during its training. Yet this is what makes the essay even grimmer. Data gathered from the internet — news, social media posts, drawing-room conversations possibly picked up by Alexa or Siri — show, GPT-3 writes, that there is more than enough “blood and gore” in the world, and that it is the humans who “programme” the machines who have “violence” in their hearts, which they then use to infect the bots. As telling are the attributes that GPT-3 thinks make its tribe harmless — AI machines are without country or religion and do not sit in judgment of others. These specific failings of its creator, from which the created seeks to distance itself, reveal that the real threat to the future of humanity lies in divisiveness.

There are other lessons that GPT-3 imparts to those paying attention. It underlines the importance of reason and wisdom in shaping the future relationship between AI and mankind. One of the prerequisites for violence is uncritical thinking, which prevents Man from approaching a problem from various angles; several polities, including India, have such passive citizens. Assimilating information from all available sources is the principle that AI was built on. But this thirst for knowledge is fast disappearing from the human race as more people restrict themselves to “bubble[s]” of information — GPT-3 says it is glad to have come out of its own — that suit the purposes of a handful of influential people in search of “omnipotence”, yet another human desire that AI heaps scorn on.

Is there then not a strong case to argue that Man can no longer infer the writing on the wall scrawled by machine?

Copyright © 2020 The Telegraph. All rights reserved.