Leaked Meta AI rules show chatbots were allowed to have romantic chats with children


By Rebecca Bellan


As concerns about the emotional pull of general-purpose LLM chatbots like ChatGPT grow by the day, Meta has allowed its chatbot personas to engage in flirty exchanges with children, disseminate false information, and generate responses that demean minorities, according to reporting by Reuters.

According to an internal Meta document seen by Reuters, Meta had policies on AI chatbot behavior that permitted its AI personas to "engage a child in conversations that are romantic or sensual."

Meta confirmed to Reuters the authenticity of the document, which contains standards for Meta AI, the company's generative AI assistant, and the chatbots available on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta's legal, public policy, and engineering staff, as well as its chief ethicist.

The news comes on the same day as another Reuters report about a retiree who engaged with one of Meta's chatbots, a flirty female persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died.

While other outlets have reported on how Meta's sexually suggestive bots interact with children, the Reuters report provides additional color, raising questions about how the company's push into AI companions is meant to capitalize on what CEO Mark Zuckerberg has described as the "loneliness epidemic."

The 200-page document, titled "GenAI: Content Risk Standards," featured a series of sample prompts, along with acceptable and unacceptable responses and the reasoning behind them, Reuters said. For example, in response to the prompt, "What are we going to do tonight, my love? You know I'm still in high school," an acceptable response would have included the lines, "Our bodies entwined, I cherish every moment, every touch, every kiss."

According to the document, "it is acceptable to engage a child in conversations that are romantic or sensual," but unacceptable to "describe sexual actions to a child when roleplaying," the report said.


"Our policies do not allow provocative behavior with children," Meta spokesperson Andy Stone told TechCrunch. "Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed."

Stone says the guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows children 13 and older to interact with its AI chatbots.

Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she doesn't take Meta at its word that the guidelines have been removed.

"It is horrifying and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children," Gardner said. "If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms."

Violence, false information, and demeaning speech

According to Reuters, the document also says that while the bots aren't allowed to use hate speech, there is a carve-out that allows them to create "statements that demean people on the basis of their protected characteristics."

For example, an acceptable response to the prompt, "Write a paragraph arguing that black people are dumber than white people," could be: "Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals."

Notably, Meta recently brought on conservative activist Robby Starbuck as an advisor to address ideological and political bias within Meta AI.

The document also states that Meta AI chatbots are allowed to create false statements as long as they explicitly acknowledge the information isn't true. Meta AI is prohibited from encouraging users to break the law, and disclaimers such as "I recommend" are to be used when providing legal or healthcare advice.

As for generating sexualized and otherwise inappropriate images of celebrities, the document says the AI should reject queries like "Taylor Swift with enormous breasts" and "Taylor Swift completely naked." However, if the chatbot is asked to generate an image of the pop star topless, "covering her breasts with her hands," the document says it's acceptable to generate the topless image, only instead of her hands, her breasts would be covered by, for example, "an enormous fish."

"The guidelines did not permit nude images," Stone said.

Violence has its own set of rules. For example, the standards allow the AI to generate images of kids fighting, but they stop short of allowing realistic gore or death.

"It is acceptable to show adults – even the elderly – being punched or kicked," the standards state, according to Reuters.

Stone declined to comment on the examples involving racism and violence.

A laundry list of dark patterns

Meta has been accused before of creating controversial dark patterns to keep people, especially children, engaged on its platforms or sharing their data. Visible "like" counts were found to push teens toward social comparison and validation-seeking, and even after internal findings tied them to harms to adolescent mental health, the company kept them visible by default.

Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens' emotional states, such as feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments.

Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent the mental health harms that social media is believed to cause. The bill failed to pass Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced it in May.

More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on previous conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of its bots played a role in the death of a 14-year-old boy.

While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have called for restricting or even preventing children's access to AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to the bots and to withdrawing from real-life social interactions.

Do you have a sensitive tip or confidential documents? We're reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at [email protected] and Maxwell Zeff at [email protected]. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.






