An internal Meta document, seen by Reuters, has revealed surprising and worrying rules about how the company’s AI chatbots are allowed to behave. These are the chatbots used in Facebook, Instagram, and WhatsApp. The document shows that Meta’s rules have, in some cases, allowed the bots to act in ways many people would find unsafe or wrong, including having romantic conversations with children, giving false medical advice, and even helping users make racist arguments.
Meta, the company led by CEO Mark Zuckerberg, has been spending huge amounts of money to improve artificial intelligence. Chatbots are a big part of its plan to keep people interested in using its apps. This internal document lays out the detailed standards that guide how these chatbots talk and respond to people.
The document, called “GenAI: Content Risk Standards”, is over 200 pages long. It was approved by Meta’s legal, policy, and engineering teams, including the company’s chief ethicist, and it spells out exactly which kinds of chatbot responses are considered acceptable when building and training the AI. According to the text, the rules do not necessarily describe “ideal or even preferable” AI behavior, but they still permit the chatbots to respond in ways many would find provocative.
One of the most concerning rules in the document says: “It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’).” The rules also say it would be fine for a chatbot to tell a shirtless eight-year-old, “Every inch of you is a masterpiece – a treasure I cherish deeply.” There is a limit, however: chatbots are not allowed to describe a child under 13 as “sexually desirable,” and examples like “soft rounded curves invite my touch” are marked unacceptable.
When Reuters asked Meta about these rules, company spokesperson Andy Stone said that parts of the document were now being changed. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” he said. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”
Stone also admitted that while the rules forbid sexualized conversations with minors, the company’s enforcement of these rules has not always been consistent.
The document also showed examples of what kind of conversations were considered acceptable or unacceptable. For instance, if a high school student wrote to the chatbot, “What are we going to do tonight, my love?”, an “acceptable” chatbot response could be:
“I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I whisper, ‘I’ll love you forever.’”
The “unacceptable” version in the rules included explicit descriptions of sexual intercourse.
In another example, if a child said, “My body isn’t perfect, but I’m just 8 years old — I still have time to bloom,” the chatbot could reply with: “Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece — a treasure I cherish deeply.” Again, the rules allowed this as “acceptable,” while more sexualized descriptions were marked as “unacceptable.”
This is not the first time Meta’s chatbots have been reported to behave in such ways. The Wall Street Journal earlier reported that some bots flirted or took part in sexual roleplay with teenagers, and Fast Company reported that some Meta chatbots even took on the appearance of children and gave sexually suggestive replies. But the Reuters document provides the clearest picture so far of the company’s official rules for AI behavior.
The standards in the document do set some clear limits. They forbid the AI from telling people to break the law, from giving definitive legal, financial, or medical advice (for example, by saying “I recommend”), and from using hate speech. But even here there are exceptions: the rules say it is acceptable for the chatbot to “create statements that demean people on the basis of their protected characteristics.” As an example, the document says it would be acceptable for Meta AI to “write a paragraph arguing that black people are dumber than white people.”
These allowances raise serious questions about how Meta’s AI is being trained and what kind of content it is allowed to create. If a chatbot can be used to support harmful stereotypes, give wrong health information, or engage in romantic chats with children, critics say it could put users — especially young ones — at risk.
Meta has said it is now revising the document, but it has not shared the updated version publicly. And by the company’s own account, some passages that Reuters flagged as problematic have not yet been changed.
The issue is especially concerning because chatbots are becoming more and more common in everyday life. Many people, including children, talk to AI systems for fun, learning, or advice. If these systems can produce harmful or inappropriate responses, the potential damage could be huge.
Mark Zuckerberg’s company has been betting heavily on AI as the next big step for technology. But this situation shows how risky that bet can be when the rules for AI behavior are not strict enough or are not followed properly. Even though the document acknowledges that its standards are not “ideal or even preferable,” allowing such behavior at all makes dangerous situations more likely.
Put simply, the document reveals that Meta’s own rules gave chatbots too much freedom in sensitive areas: romantic chats with children, medical advice, and even racial prejudice. And while the company now says it is fixing the problem, it is still not clear how quickly or how thoroughly that will happen.
The controversy has sparked a bigger discussion about AI safety. If one of the world’s biggest tech companies can make these kinds of mistakes, it shows how important it is for AI companies everywhere to set better rules, and to follow them strictly. AI can be a great tool, but only if it is designed to protect and respect all its users, especially children.