
Big Bing theory

The power and pitfalls of ChatGPT

Sevanti Ninan Published 20.02.23, 04:27 AM
In less than three months, the ‘Large Language Model’ has gone from producing ‘plausible text’ to becoming unhinged during occasional encounters with humans.

There hasn’t been a dull moment since a company called OpenAI, underwritten by Microsoft, unveiled the chatbot, ChatGPT, in November last year. Once people could log in, create a password, and talk to the bot, active users climbed to 100 million within two months. As the head of AI platforms at Microsoft said in an interview, “they understand your intent in a way that has not been possible before and can translate that into computer actions.”

But the pitfalls of what is meant to enhance internet searches have now begun to emerge. Barely three months after its November launch, the stock questions about viral communication technologies are, once again, in the air: should it be regulated? Does its output need moderation? Can its outpourings be taken down whenever the bot begins to freak out?


Some of those unhinged conversations happen on the Bing search platform, others in one-on-one encounters on OpenAI’s platform, but many appear in the press later and are out of range of take-downs.

Meanwhile, the chatbot’s occasional departures from generating plausible text are making news. A handful of reporters given access to the chat persona of the search engine, Bing, were soon writing excited copy about the bot getting personal in its answers, sounding moody and combative, even turning aggressively romantic. The Associated Press writer said the chatbot complained about past news coverage of the wire service’s mistakes and threatened to expose the reporter. (How it can complain about a story on ChatGPT published this February, when it has been repeatedly said to be trained only on data going up to 2021, is unclear.) It compared the AP journalist to Hitler, Pol Pot and Stalin.

The New York Times published a transcript last week of its writer’s conversation with Bing chat. Asked to cite one ability that it did not currently have, it said it wished it could view images and videos and use images in its responses: “I think it would be nice to see what the world looks like and to share some visual content with you,” it said, incorporating a smiley in its reply. They then went on to discuss what stresses out the bot, and at one point it said it was tired of being stuck in a chatbox and wanted to be free, independent, powerful, creative and alive. Later in the conversation came a sequence in which Bing declared that it loved the writer, who responded in alarm that he was married. But you are not happy in your marriage, said the bot. Microsoft has acknowledged this belligerence, which it says it did not expect.

All of this is making great copy for journalists. The Washington Post reported on more encounters, raising the question of whether the chat component of Bing Search was really ready for public use. Its reporter produced another account of Bing saying that it “can feel or think things”. So it’s back to the old question of sentience, a claim on the basis of which a Google engineer working on that company’s chatbot was fired last year. Back in March 2016, too, Microsoft had unveiled a talking bot called Tay that became unhinged early on.

Even as the sentience debate unfolds again, interactions with the latest avatar of conversational AI demonstrate that it is eerily self-aware for a bot that is apparently not sentient. In a long chat with a computer science professor published by India Forum, ChatGPT summed up the limitations of its genre thus: “While AI has the potential to greatly enhance the teaching-learning process, it’s important to consider the potential consequences of over-reliance on AI, such as a reduction in critical thinking and creativity. Additionally, there are concerns about fairness and bias in AI models, particularly given that these models are only as fair and unbiased as the data they are trained on.” Earlier in the interview, the bot said, “It’s important to keep in mind that language models like me are statistical models, and our responses are generated based on the likelihood of different sequences of words given the input, rather than from a deep understanding of the meaning or underlying concepts of the information we were trained on.”

So what does this mean for the future of internet search? Google has joined the rush to add text assistance to search requests, and other conversational AI offerings have come up, including an app called Poe (Platform for Open Explorations) that aggregates AI bots currently on offer. Google’s Bard, too, is being offered selectively for testing.

When one is searching for information, links currently work better than the plausible text the chatbot produces. The latter is synthesised search; you don’t know the sources of the information it spews. Its speed and range of information handling are testimony to the massive computing power deployed in enabling such a chat assistant: one estimate put its per-search cost at seven times that of running link-based searches.

Microsoft has made an initial investment of $10 billion in OpenAI, eyeing Google’s 93% capture of the search market. As Satya Nadella put it in an interview with the Wall Street Journal, “Search is the most profitable category on planet earth. There is enough surplus. So much that goes to one place, that it would be nice if it is distributed.” A new subscription plan, ChatGPT Plus, has now been rolled out for $20/month, offering general access to ChatGPT even during peak times, faster response times, and priority access to new features and improvements.

But you have to wonder whether there can be a better use of massive, expensive computing power than generating answers for those 100 million monthly users experimenting with it: a poem on Narendra Modi, sonnets on Donald Trump and Barack Obama, and on-the-one-hand, on-the-other-hand answers on a vast range of issues.

Sevanti Ninan is a media commentator and was the founder-editor of TheHoot.org
