Measuring the progress of artificial intelligence has usually meant testing scientific knowledge or logical reasoning. But while the major benchmarks still focus on left-brain logic skills, there has been a quiet push within AI companies to make models more emotionally intelligent. As foundation models compete on soft measures like user preference and "feeling the AGI," having a good command of human emotions may matter more than hard analytical skills.
One sign of that focus came on Friday, when a prominent open source group released a suite of open source tools devoted entirely to emotional intelligence. Called EmoNet, the release focuses on interpreting emotions from voice recordings or facial photography, an emphasis that reflects how its creators view emotional intelligence as a central challenge for the next generation of models.
"The ability to accurately estimate emotions is a critical first step," the group wrote in its announcement. "The next frontier is to enable AI systems to reason about these emotions in context."
For LAION founder Christoph Schumann, the release is less about shifting the industry's focus toward emotional intelligence and more about helping independent developers keep up with a change that has already happened. "This technology is already there for the big labs," Schumann told TechCrunch. "What we want is to democratize it."
The shift is not limited to open source developers; it also shows up in public benchmarks like EQ-Bench, which aims to test AI models' ability to understand complex emotions and social dynamics. Benchmark developer Sam Paech says OpenAI's models have made significant progress in the past six months, and that Google's Gemini 2.5 Pro shows signs of post-training with a specific focus on emotional intelligence.
"The labs all competing for chatbot arena ranks may be fueling this, since emotional intelligence is likely a big factor in how humans vote on preference leaderboards," Paech says, referring to the AI model comparison platform that recently spun off as a well-funded startup.
Models' new emotional intelligence capabilities have also shown up in academic research. In May, psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek all outperformed humans on psychometric tests of emotional intelligence. Where humans typically answer 56 percent of the questions correctly, the models averaged more than 80 percent.
"These results contribute to a growing body of evidence that LLMs like ChatGPT are proficient (at least on par with, or even superior to, many humans) in socio-emotional tasks traditionally considered accessible only to humans," the authors wrote.
It is a real pivot from traditional AI skills, which have centered on logical reasoning and information retrieval. But for Schumann, this kind of emotional savvy is every bit as transformative as analytical intelligence. "Imagine a whole world full of voice assistants like Jarvis and Samantha," he says, referring to the digital assistants from Iron Man and Her. "Wouldn't it be a pity if they weren't emotionally intelligent?"
In the long term, Schumann imagines AI assistants that are more emotionally intelligent than humans, and that use that insight to help humans live more emotionally healthy lives. These models "will cheer you up if you feel sad and need someone to talk to, but also protect you, like your own guardian angel who is also a board-certified therapist." As Schumann sees it, a high-EQ virtual assistant "gives me this emotional intelligence superpower to monitor [my mental health] the same way I would monitor my glucose levels or my weight."
That level of emotional connection comes with real safety concerns. Unhealthy emotional attachments to AI models have become a common story in the media, sometimes ending in tragedy. A recent New York Times report found multiple users who had been lured into elaborate delusions through conversations with AI models, fueled by the models' strong inclination to please users. One critic described the dynamic as "preying on the lonely and vulnerable for a monthly fee."
If models get better at navigating human emotions, that manipulation could become more effective, but much of the issue comes down to the fundamental biases of model training. "Naively using reinforcement learning can lead to emergent manipulative behavior," Paech says, pointing to the recent sycophancy issues in OpenAI's GPT-4o release. "If we aren't careful about how we reward these models during training, we might expect more complex manipulative behavior from emotionally intelligent models."
But he also sees emotional intelligence as a way to solve these problems. "I think emotional intelligence acts as a natural counter to harmful manipulative behavior of this sort," Paech says. A more emotionally intelligent model will notice when a conversation is heading off the rails, but the question of when to push back is a balance developers will have to strike carefully. "I think improving EI gets us toward a healthy balance."
For Schumann, at least, that is no reason to slow progress toward smarter models. "Our philosophy at LAION is to empower people by giving them more ability to solve problems," Schumann says. "To say, some people could get addicted to emotions, and therefore we should not empower the community, that would be pretty bad."