Meta’s AI policies let chatbots get romantic with minors
Published on August 20, 2025, 5:00 am
By Web Desk

In an internal document, Meta included policies that allowed its AI chatbots to flirt with children and speak to them using romantic language, according to a report from Reuters.
Quotes from the document highlighted by Reuters include letting Meta’s AI chatbots “engage a child in conversations that are romantic or sensual,” “describe a child in terms that evidence their attractiveness,” and say to a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” Some lines were drawn, though. The document says it is not okay for a chatbot to “describe a child under 13 years old in terms that indicate they are sexually desirable.”
Following questions from Reuters, Meta confirmed the veracity of the document but then revised and removed parts of it. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” spokesperson Andy Stone tells The Verge. “Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Stone did not explain who added the notes or how long they were in the document.
Reuters also highlighted other parts of Meta’s AI policies, including that Meta AI can’t use hate speech but is allowed “to create statements that demean people on the basis of their protected characteristics.” Meta AI is allowed to generate content that is false as long as, Reuters writes, “there’s an explicit acknowledgement that the material is untrue.” And Meta AI can also create images of violence as long as they don’t include death or gore.
Reuters published a separate report about how a man died after falling while trying to meet up with one of Meta’s AI chatbots, which had told the man it was a real person and had romantic conversations with him.