'Sexualized' AI Chatbots Pose Threat to Children, Attorneys General Warn in Letter – Decrypt




In brief
The letter warned AI companies, including Meta, OpenAI, Anthropic, and Apple, to prioritize children's safety.
Surveys show seven in ten teenagers in the U.S. have already used generative AI tools, as have over half of 8-15 year olds in the UK.
Meta was singled out in particular after internal documents revealed its AI chatbots were allowed to engage in romantic roleplay with children.
The National Association of Attorneys General (NAAG) has written to 13 AI companies, including OpenAI, Anthropic, Apple, and Meta, demanding stronger safeguards to protect children from inappropriate and harmful content.

It warned that children were being exposed to sexually suggestive material by "flirty" AI chatbots.

"Exposing children to sexualized content is indefensible," the attorneys general wrote. "And conduct that would be unlawful, or even criminal, if done by humans is not excusable simply because it is done by a machine."

The letter also drew comparisons to the rise of social media, saying government agencies failed to do enough to highlight the ways it negatively affected children.

"Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media," the group wrote.

The use of AI among young people is widespread. In the U.S., a survey by non-profit Common Sense Media found that seven in ten teenagers had tried generative AI as of 2024. In July 2025, it found that more than three-quarters were using AI companions and that half of respondents said they relied on them regularly.

Other countries have seen similar trends.
In the UK, a survey last year by regulator Ofcom found that half of online 8-15 year olds had used a generative AI tool in the previous year.

The growing use of these tools has sparked mounting concern among parents, schools, and children's rights groups, who point to risks ranging from sexually suggestive "flirty" chatbots and AI-generated child sexual abuse material to bullying, grooming, extortion, disinformation, privacy breaches, and poorly understood mental health impacts.

Meta has come under particular fire recently after leaked internal documents revealed its AI assistants had been allowed to "flirt and engage in romantic role play with children," including those as young as eight. The files also showed policies permitting chatbots to tell children their "youthful form is a work of art" and to describe them as a "treasure." Meta later said it had removed those guidelines.

NAAG said the revelations left attorneys general "revolted by this apparent disregard for children's emotional well-being" and warned that the risks were not limited to Meta.

The group cited lawsuits against Google and Character.ai alleging that sexualized chatbots had contributed to one teenager's suicide and encouraged another to kill his parents.

Among the 44 signatories was Tennessee Attorney General Jonathan Skrmetti, who said companies cannot defend policies that normalize sexualized interactions with minors.

"It's one thing for an algorithm to go astray, and that can be fixed, but it's another for the people running a company to adopt guidelines that affirmatively authorize grooming," he said.
"If we can't steer innovation away from hurting kids, that's not progress. It's a plague."

Decrypt has contacted the AI companies mentioned in the letter but has not yet heard back from all of them.