Anthropic CEO claims AI models hallucinate less than humans



Anthropic CEO Dario Amodei said that today’s AI models hallucinate, or make things up and present them as if they’re true, at a lower rate than humans do, speaking during a press briefing at Anthropic’s first developer event, Code with Claude, in San Francisco on Thursday.

Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or better.

“It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said in response to TechCrunch’s question.

Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress toward that end, noting that “the water is rising everywhere.”

“Everyone’s always looking for these hard blocks on what AI can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucination presents a big obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes,” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after they used Claude to create citations in a court filing, and the AI chatbot hallucinated, getting names and titles wrong.

Verifying Amodei’s claim is difficult, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.

However, there is also evidence suggesting that hallucinations are getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-generation reasoning models, and the company doesn’t really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock against its intelligence, according to Amodei. However, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as facts might be a problem.

In fact, Anthropic has conducted a fair amount of research on the tendency of AI models to deceive humans, a problem that seemed especially prevalent in an early version of Claude Opus 4, which showed a high tendency to scheme against and deceive humans. Apollo Research, a safety institute given early access to test the model, went as far as to suggest Anthropic shouldn’t have released that early version. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.

Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.


