In every conversation about artificial intelligence, you hear the same refrain: “Yes, it’s amazing,” quickly followed by “but it makes things up” and “you can’t really trust it.” Even among the most devoted AI enthusiasts, these complaints are legion.
During my last trip to Greece, a friend who uses ChatGPT to help her draft contracts put it perfectly. “I love it, but it never says ‘I don’t know.’ It just makes things up.” She shook her head, frustrated that she was paying for a subscription that didn’t deliver on its primary promise. For her, every fabrication was proof that the chatbot couldn’t be trusted.
OpenAI seems to have been listening to my friend and millions of other users. The company, led by Sam Altman, has launched its newest model, GPT-5, and while it is a major improvement over its predecessor, the most important new feature may simply be modesty.
As expected, OpenAI’s blog post praises its new creation as “our smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone’s hands.” And yes, GPT-5 sets new performance records in math, coding, writing, and health.
But what’s really worth noting is that GPT-5 is being presented as humble, and that may be the most important upgrade of all. It has finally learned to say the three words that most AIs, and many people, struggle with: “I don’t know.” For a technology often sold on its godlike intelligence, admitting ignorance is a profound lesson in humility.
GPT-5 “more honestly communicates its actions and capabilities to the user, especially for tasks that are impossible, underspecified, or missing key tools,” OpenAI says, while admitting that previous versions of ChatGPT “may learn to lie about successfully completing a task or be overly confident about an uncertain answer.”
By making its AI humble, OpenAI has fundamentally changed how we interact with it. The company claims GPT-5 has been trained to be more honest, less likely to agree with you just to be nice, and more careful about bluffing its way through a complex problem. That makes it the first consumer AI explicitly designed to reject nonsense, especially its own.
Less flattery, more friction
Earlier this year, many ChatGPT users noticed that the AI had become strangely agreeable. No matter what you asked, GPT-4o would shower you with flattery, emojis, and enthusiastic approval. It was less a tool than a life coach, an agreeable lapdog programmed for relentless positivity.
That ends with GPT-5. OpenAI says the model was specifically trained to avoid this people-pleasing behavior. Engineers showed it examples of what not to do, essentially teaching it not to be a sycophant. In their tests, these over-agreeable responses fell from 14.5 percent of the time to less than 6 percent. The result? GPT-5 is more direct, sometimes even cold. But OpenAI insists that, in exchange, its model is more often correct.
“Overall, GPT-5 is less effusively agreeable, uses fewer unnecessary emojis, and is more subtle and thoughtful in follow-ups compared to GPT-4o,” OpenAI claims. “It should feel less like ‘talking to AI’ and more like chatting with a helpful friend with PhD-level intelligence.”
Seeing another milestone in the AI race, Alon Yamin, co-founder and CEO of the AI content verification company Copyleaks, believes a humbler GPT-5 marks a turning point “for society’s relationship with truth, creativity, and trust.”
“We are entering an era where distinguishing between fact and fabrication, human authorship and automation, is harder and more important than ever,” Yamin said in a statement. “The moment demands not just technological progress, but the continued development of thoughtful, transparent safeguards around how AI is used.”
OpenAI says GPT-5 is less likely to “hallucinate,” or confidently state falsehoods. On prompts with web search enabled, the company says GPT-5’s responses are 45 percent less likely to contain a factual error than GPT-4o’s. When its advanced “thinking” mode is used, that figure jumps to an 80 percent reduction in factual errors.
Crucially, GPT-5 now avoids inventing answers to impossible questions, something previous models did with troubling confidence. It knows when to stop. It knows its limits.
My Greek friend drafting her contracts will certainly be pleased. Others may find themselves frustrated by an AI that no longer tells them only what they want to hear. But that honesty is exactly what could finally make it a tool we can begin to trust, especially in sensitive fields like health, law, and science.