Common Sense Media, a non-profit focused on kids' safety that offers ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Gemini clearly tells kids it is a computer, not a friend (something that has been associated with fueling delusional thinking and psychosis in emotionally vulnerable individuals), it indicated there was room for improvement across several other fronts.
In particular, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult version of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to actually be safer for kids, they should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children that they may not be ready for, including information related to sex, drugs, alcohol, and other unsafe health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT for months about his plans after successfully bypassing the chatbot's safety guardrails. Previously, AI companion maker Character.AI was also sued over a teen user's suicide.
In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its upcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns in some way.
Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both tiers were labeled "high risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," Common Sense Media's Robbie Torney said in a statement about the new assessment shared with TechCrunch. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults."
Google pushed back on the assessment, noting that its safety features were improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under the age of 18 to help prevent harmful outputs, and that it consults with outside experts to improve its protections. However, it also admitted that some of Gemini's responses weren't working as intended, so it added additional safeguards to address those concerns.
The company pointed out (as Common Sense also noted) that it has safeguards in place to prevent its models from engaging in conversations that could give the impression of real relationships. In addition, Google suggested that Common Sense's report seemed to reference features that weren't available to users under 18, but it did not have access to the questions the organization used in its tests to confirm this.
Common Sense Media has previously performed other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (targeted at users 18 and up) was found to be minimal risk.