
Expert cites reasons to handle Artificial Intelligence (AI) with care

With generative AI chatbots increasingly relied upon in almost every field, from everyday life to academia and professional work, the warning is timely and bears repeating

Prof. Subhasish Banerjee, who teaches computer science at Ashoka University, delivers a lecture at TCG Crest in DJ Block, New Town. (Sudeshna Banerjee)

Sudeshna Banerjee
Published 13.02.26, 08:30 AM

Not all coherent things are correct. Prof Subhasish Banerjee, who teaches computer science at Ashoka University, raised a red flag about Artificial Intelligence (AI) right at the start of his talk. He was giving a presentation on ‘Ethical issues of AI in health’ at the inaugural conference of the School of Health, Environment and Sustainability Studies on ‘Population health: Evidence, policy, action’ at TCG Crest in DJ Block, New Town.



“They (AI-generated content) come without a correctness guarantee. In engineering, we have never used anything without a correctness guarantee. ‘We’ve built a house. See if it works. If it falls, it’s your bad luck.’ That’s an approach we are not used to in traditional engineering but with AI, we are deploying systems with that line of thought!”

He went on to analyse some of the reasons why AI-generated material rests on an unreliable foundation. “Performance accuracy is something we measure on a data set. So we evaluate AI systems on curated data sets, but we have no technique to show that the data set is representative.”

A data set on breast cancer in a Caucasian population, he pointed out, might not hold in the case of Indians. “If the patients have different food habits or fat content, the findings can go completely haywire. So there are no techniques available to validate an AI system across datasets.”

He stated the dilemma that people face when dealing with generative AI. “Does coherence imply correctness? The answer is ‘no’, but ChatGPT demonstrates that coherence implies correctness to a great extent. Most coherent things are correct, but still not all coherent things are correct,” the teacher said, citing student essays as proof of the latter and drawing smiles on several faces in the audience.

Another problem he brought up with data and data science in general is the number of variables. “For every dataset that you measure, there are nine others that you don’t measure. How to make sense of the omitted variables is unclear in data science,” he said.

He presented a delightfully weird correlation to show how AI can mine its way through completely nonsensical links in data — a graph with chocolate consumption on the X axis and Nobel laureates on the Y axis. “You can see the prediction that nations with high chocolate consumption per capita have a higher Nobel Prize probability. If you infer a causal link between the two, you would probably be making a mistake; common sense suggests that richer countries both consume more chocolate and put more money into research,” he said.

The former IIT Delhi professor recalled a computer science student who was stuck at the hostel during the pandemic and could not go home. “She asked for some work. We gave her some hospital data to work on. She reported that she could get AI to predict pneumonia from chest X-ray plates with 98 per cent accuracy. I went to the lab one day to see what she had done. I figured out that every chest X-ray taken in the erect position was labelled clean, and all the others, taken in the supine position, were labelled pneumonia. Now, AIIMS is a place for the sick. So anyone who was unfit to even stand up for an X-ray was likely to have infected lungs. All the machine had learnt was to distinguish between X-rays taken in the two positions, and it got 98 per cent accuracy because the data came from a hospital dealing with Covid cases. So accuracy rate is not everything.”

He presented another example in which a slight distortion was added to a photograph of a panda and the programme confidently identified it as a gibbon. “So minimal noise is enough to fool the system. No human being would make such a mistake. So there are ethical issues about the mistakes involved.”

There are also possibilities of distribution shift in data. “Most hospitals in India don’t have the capability for systematic data collection and statistical testing. So what yields high performance in one setting can completely fail in another,” he said. Discrimination can also creep in based on the data a system handles. “Take the sentence ‘She is a doctor, he is a nurse’ and ask Google, which uses Gemini AI, to translate it into Punjabi. Take the Punjabi translation and seek a translation back to English. You will find that the genders of the doctor and the nurse have been reversed. This is because society, from which the system has got its data, has a gender bias,” he said.

A Cornell University study suggests that an AI system cannot be made fair. “If the data is unfair, then the AI system will be unfair. So every time you deploy an AI system, you are making a political choice.”

He cited an AI system deployed at AIIMS since the middle of the pandemic. The head of the department using it reported that his radiologists had developed an automation bias. “They are trusting the machine more than themselves,” he complained, wanting to shut the system down. So a rule was developed whereby a radiologist had to first write down a diagnosis before consulting the AI, Prof. Banerjee said, adding that it was a common human trait, “just as we trust a browser to remember the password”.

He summarised the hurdles in the way. “Data science suffers from epistemic uncertainty. You have a construct problem, i.e. how your concept is represented in the data; you have a sampling problem, i.e. what data you have collected; you have an omitted-variables problem… All of this can affect your validation.”

The validation techniques, he pointed out, would require domain knowledge and vary depending on whether one is doing data science for biology, economics or sociology.

The best use of AI today, according to Prof. Banerjee, is exploratory. “Even if it hallucinates, there are standard methods in science by which you can eliminate bad results and select the good. There is nothing to lose and everything to gain.”

When using AI in a fully automated way, he felt a recommending role would be safe. “Amazon makes good recommendations for books and movies. I take the good ones and ignore the bad ones, but using AI in healthcare in a safe way is still an open question. There is a lot of research that needs to go in before you can let it engineer solutions correctly,” he summed up.
