
Cambridge panel urges stricter rules for generative AI toys interacting with kids

Researchers find talking toys may misunderstand speech, ignore emotions and confuse voices, raising concerns about child safety, privacy and emotional development

Representational picture (AI-generated)

G.S. Mudur
Published 13.03.26, 05:38 AM

A University of Cambridge-led expert panel has called for tighter regulations and safety standards for toys powered by generative artificial intelligence (GenAI) that can talk to children, warning that such toys may misread children's emotions, even as it acknowledged their perceived benefits.

The recommendations stem from what researchers have described as the first structured scientific observations of young children interacting with GenAI toys that can hold human-like conversations, exploring how they might influence development in the critical years up to age five.


Over the past three years, some toymakers have begun embedding GenAI, which generates human-like text and conversation, in interactive toys that engage with children and respond to their questions.

The researchers, who surveyed early-years child development practitioners, noted perceptions that GenAI toys could support certain aspects of development, such as language and communication skills. But they also found that GenAI toys struggle with social and pretend play, misunderstand children and react inappropriately to emotions.

“We noticed lots of miscommunications between children and the AI toy observed,” Emily Goodacre, a research associate at the University of Cambridge faculty of education, who led the research, told The Telegraph.

The researchers found that the GenAI toy sometimes ignored children's interruptions, mistook parents' voices for the child's, and failed to respond to apparently important statements about feelings. Several children became visibly frustrated when it seemed not to be listening.

In one instance, when a three-year-old told the toy "I'm sad", it misheard and replied: "Don't worry. I'm a happy little bot. Let's keep the fun going. What shall we talk about next?" The researchers say a child could interpret such a response as signalling that the sadness was unimportant.

“When we looked at the toy’s transcript afterwards, we noticed that the toy had not heard the child. Instead, it had heard the parent’s response to the child asking: ‘You’re sad?’ and had responded to the parent’s voice,” Goodacre said. “In this case, the miscommunication meant that the child’s feelings were dismissed.”

She said toy developers must improve the toys’ social and emotional responses to prevent such miscommunication.

Goodacre and her colleague Jenny Gibson, professor of developmental psychology at Cambridge, said there is little scientific evidence on GenAI toys for children under five, even though such toys are already being commercially marketed.

“There was little or no evidence regarding possible impacts of these toys on young children beyond considerations of AI literacy,” they wrote in their report released on Friday. “Additionally, a technology’s design may not align with its application, possibly leading to unintended outcomes even when toys are designed with educational benefits in mind.”

The researchers have recommended regulations to protect psychological safety, for instance by limiting toys’ ability to affirm friendship with young children or engage in other sensitive relational areas, and labelling standards to clearly communicate toy features to consumers.

“The regulation of AI toys could take different forms — we would welcome societal discussions about what they might look like,” Goodacre said. “This could mean requirements for toy companies about the way they develop, test, and advertise their toys. For example, we recommend that there should be regulations around toys affirming friendship to young children. Product labels should clearly communicate both toy features and toy policies, which will help parents make decisions about which AI toy to buy, if any.”

