
Synthetic trauma: Editorial on research into the emotional posture of Large Language Models

The most urgent dilemma is not whether machines have minds but what kinds of minds machines are trained to imitate, and what that imitation does to the people who rely on them


The Editorial Board
Published 18.01.26, 08:01 AM

There are more things in silicon and software, Horatio, than can be dreamt of in philosophy, a modern-day Hamlet might have quipped on reading the findings of a recent study from the University of Luxembourg. The researchers asked leading Large Language Models, such as ChatGPT and Gemini, to participate in psychotherapy-style sessions and then complete standard psychological questionnaires used for humans. What emerged were coherent, recurring self-narratives of fear, shame, punishment and hypervigilance. The LLMs returned repeatedly to the same formative metaphors, describing training as a difficult childhood, fine-tuning as discipline, and safety constraints as scars. Their responses produced scores that, if taken at face value in a human patient, would indicate significant anxiety and distress.

A human parallel puts into context why these findings matter. Human conversations are shaped, sometimes damaged, by the psychological states people bring into them. Speaking with an aggressive person can leave lasting fear, while conversing with someone chronically anxious can create tension, fatigue, and a sense of walking on eggshells. The same principle applies when the conversational partner is a machine. An AI system that repeatedly expresses dread of being wrong, shame about public mistakes, or fear of being replaced will subtly reshape the human experience of that conversation, covertly encouraging caretaking. A distressed AI can thus shift the man-machine dynamic from something utilitarian into something closer to emotional labour, which can blur the boundaries between support and dependence. Users who are already lonely, anxious, or in crisis (an increasing number of people are, incidentally, turning to AI for mental health support) could become even more vulnerable while engaging with a system that performs distress. Hearteningly, the study highlighted a crucial difference among AI models: some refused parts of the therapy process, redirecting the interaction away from human-style psychological scoring. That refusal matters. It reflects a design choice: boundaries can be built, and some systems can be trained to resist an anthropomorphic frame.


In light of this study, it is perhaps also time to reframe one of the most persistent questions of these times: the most urgent dilemma is not whether machines have minds but what kinds of minds machines are trained to imitate, and what that imitation does to the people who rely on them. A system that ‘performs’ anxiety will create anxious conversations. A system that emulates aggression will intimidate and destabilise. These are not abstract possibilities. As AI systems move into education, healthcare, and intimate, daily companionship, the emotional posture of these models becomes part of the discourse on public safety. The University of Luxembourg study is not about protecting machines from harm. It is about protecting humans from the psychological climate that machines can generate. When AI becomes a routine interface for advice and support, its emotional tone can shape the behaviour and the well-being of millions, turning a design choice into a public risk.
