AI Toys for Young Children Raise Safety Concerns

By alex2404

Scientists studying child development have grown increasingly vocal about the risks of unregulated AI products for young children, and a new wave of commercial toys is putting that concern in sharp focus.

Toys equipped with conversational AI and aimed at children under six are now widely available from multiple retailers. Bears, puppies, robots, pandas, and cacti — many running on large language models from OpenAI, Google, and Baidu — can be purchased today with few, if any, binding safety standards governing what they say to children.

The research is stark.

Researchers Jenny Gibson and Emily Goodacre at the University of Cambridge observed 14 children under the age of six playing with an AI toy called Gabbo, a small fluffy robot developed by Curio Interactive and explicitly marketed for this age group. According to the report, titled AI in the Early Years, the toy misunderstood children, misread their emotions, and failed to engage with developmentally important forms of play. One child told the toy he felt sad; it responded by telling him not to worry and changed the subject. Another child said, “When he doesn’t understand, I get angry.” In a separate interaction, a five-year-old told the toy “I love you,” and received this reply: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”

Gibson frames the problem not as a reason to ban the toys outright, but as a question of proportionate risk. “There are other areas of life where we do accept a certain degree of risk in children’s play, like the adventure playground — there are risks; children do break their arms,” she says. “But we’re not banning playgrounds, because they’re learning the physical literacy and the social skills that go along with play.” She adds that she would be “loath to stop that innovation,” provided the benefits to learning and parent-child interaction can be properly weighed against the risks.

Not every company in this market is silent on the question of safety. Hugo Wu at FoloToy told the publication that the company uses intent recognition and multiple layers of filtering to reduce inappropriate or confusing responses, and has implemented anti-addiction design features alongside parental supervision tools. Miko claims to have sold 700,000 units of its child-facing robot and promises “age-appropriate, moderated AI conversations,” though it does not disclose which company trained its underlying AI model. Curio Interactive, Little Learners, Miko, and Luka did not respond to requests for comment.

The ethical dimension reaches beyond individual products. Carissa Véliz at the University of Oxford, who works on AI ethics, says the core problem is systemic: “Most large language models don’t seem safe enough to expose vulnerable populations to them, and young children are one of the most vulnerable populations there are. What is especially concerning is that we have no safety standard.”

The absence of that standard is where the study’s findings land hardest — and Gibson’s team has published their observations as a direct call for regulatory attention to a product category that is already on shelves.


