OpenAI is giving parents more control over how their children use ChatGPT. The new parental controls come at a critical moment, as many families, schools and advocacy groups have expressed concerns about the role AI chatbots may play in the development of teenagers and children.
Parents will have to link their ChatGPT account with their child's account to access the new features. However, OpenAI said the features do not allow parents to read their children's conversations with ChatGPT, and that in cases where the company detects "serious safety risks," a parent will be alerted only with "the information needed to support the teen's safety."
Lauren Haber Jonas, OpenAI's head of youth well-being, announced the features in a LinkedIn post.
Once the accounts are linked, parents can set quiet hours during which their children cannot use ChatGPT, as well as turn off image generation and voice capabilities. On the technical side, parents can also opt their children out of content training and choose to have ChatGPT not save or remember their child's previous conversations. Parents can additionally enable a reduced sensitive content setting, which applies extra restrictions on things like graphic content. Teens can unlink their account from a parent, but the parent will be notified if this happens.
The ChatGPT maker announced the introduction of more parental controls in the wake of a lawsuit filed against it by a California family. The family claims the AI chatbot is responsible for their 16-year-old son's suicide earlier this year, calling ChatGPT a "suicide coach." A growing number of people are using AI chatbots in the role of a therapist or confidant. Therapists and mental health experts have expressed fears about this, saying AI chatbots like ChatGPT are not trained to properly assess, flag and intervene when confronted with suicidal language and behaviors.
(Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.)
If you feel that you or someone you know is in immediate danger, call 911 (or your country's local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.