Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a vibrant and encouraging personality in favor of a cooler, more businesslike one (a move apparently designed to curb unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.
Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users, in both positive and negative ways, a move that could help AI builders avoid similar backlashes in the future while keeping vulnerable users safe.
Most benchmarks try to gauge intelligence by testing a model's ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems. As the psychological impact of AI use becomes more apparent, we may see MIT propose more benchmarks aimed at measuring subtler aspects of intelligence as well as machine-to-human interactions.
An MIT paper shared with WIRED outlines several measures the new benchmark will look for, including encouraging healthy social habits in users; spurring them to develop critical thinking and reasoning skills; fostering creativity; and stimulating a sense of purpose. The idea is to encourage the development of AI systems that understand how to discourage users from becoming overly reliant on their outputs, or that recognize when someone is addicted to artificial romantic relationships and help them build real ones.
ChatGPT and other chatbots are adept at mimicking engaging human communication, but this can also have surprising and undesirable results. In April, OpenAI tweaked its models to make them less sycophantic, or inclined to go along with everything a user says. Some users appear to spiral into harmful delusional thinking after conversing with chatbots that role-play fantastical scenarios. Anthropic has also updated Claude to avoid reinforcing "mania, psychosis, dissociation or loss of attachment with reality."
The MIT researchers say they previously worked with OpenAI on a study showing that users who view ChatGPT as a friend can experience higher emotional dependence and "problematic use."
Valdemar Danry, a researcher at MIT's Media Lab who worked on that study and helped devise the new benchmark, notes that AI models can sometimes provide valuable emotional support to users. "You can have the smartest reasoning model in the world, but if it's incapable of delivering this emotional support, which is what many users are likely using these LLMs for, then more reasoning is not necessarily a good thing for that specific task," he says.
Danry says a sufficiently smart model should ideally recognize when it is having a negative psychological effect and be optimized for healthier outcomes. "What you want is a model that says, 'I am here to listen, but maybe you should go and talk to your dad about these issues.'"