ChatGPT boss says you still shouldn't trust it as your main source of information




When you start a conversation with ChatGPT, you may notice some text at the bottom of the screen: "ChatGPT can make mistakes. Check important information." That's still the case with the new GPT-5 model, a senior OpenAI executive reiterated this week.

"The thing with reliability is that there is a strong discontinuity between very reliable and 100 percent reliable, in terms of how you conceive of the product," Nick Turley, head of ChatGPT at OpenAI, said on The Verge's Decoder podcast. "Until I think we are more reliable than a human expert across all domains, not just some domains, I think we will continue to advise you to double-check your answer."


It's something we've been recommending for a long time now in our AI coverage. OpenAI does, too.

Double-check. Always verify. While a chatbot may be useful for some tasks, it can make things up entirely.

Turley expects things to improve on that front.

"I think people will continue to leverage ChatGPT as a second opinion, as opposed to necessarily a primary source of truth," he said.

The problem is that it's really tempting to take a chatbot's response, or an AI Overview atop Google's search results, at face value. But AI tools (not just ChatGPT) are prone to "hallucinating," or making things up. They do this because they're fundamentally designed to predict what the answer to your question should look like, based on the information in their training data. But gen AI models have no inherent understanding of truth. If you talk to a doctor or a therapist or a financial advisor, that person should be able to give you the correct answer for your situation, not just the most likely one. AI, more often than not, gives you the answer that looks most likely to be correct, without the domain expertise to verify it.
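To see why plausibility and truth can diverge, here's a toy sketch in Python. It's not how any production model works (real LLMs learn token probabilities with neural networks), but the core objective, picking the likeliest continuation seen in training text, is the same. All names and counts here are invented for illustration.

```python
import random

# Toy next-word predictor: it picks a continuation by how often that
# continuation appeared after the prompt in made-up "training" text,
# not by whether the continuation is true.
training_counts = {
    "the capital of australia is": {
        "sydney": 70,    # common in casual text, but wrong
        "canberra": 30,  # correct, but less frequent here
    },
}

def predict(prompt: str) -> str:
    """Sample the next word in proportion to its training frequency."""
    counts = training_counts[prompt]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The predictor mostly returns "sydney": the most plausible
# continuation by frequency, not the verified fact.
print(predict("the capital of australia is"))
```

Run it a few times and the wrong answer comes up most often, simply because it was more common in the (invented) training data. That's the gap between sounding right and being right.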

While AI is very good at guessing, it's still, for the most part, just guessing. Turley acknowledged that the tool works better when it's paired with something that provides a better grasp of the facts, like a traditional search engine or a company's specific internal data. "I still think, without a doubt, the right product is LLMs connected to ground truth, and that's why we brought search to ChatGPT, and I think it makes a huge difference," he said.
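The "connected to ground truth" idea Turley describes is commonly implemented as retrieval-augmented generation: fetch real documents first, then ask the model to answer only from them. Here's a minimal, hypothetical sketch of the prompt-building step; the function name, wording and example snippets are illustrative, not OpenAI's actual implementation.

```python
def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    """Wrap retrieved source text around a question so the model is
    asked to answer from those sources rather than from memory."""
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using ONLY the numbered sources below, "
        "and cite the source you used. If the sources do not contain "
        "the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

# In practice, the snippets would come from a real search engine or a
# company's internal documents; these are invented for illustration.
print(build_grounded_prompt(
    "When does the 2025 season start?",
    ["The 2025 season opens on Aug. 23.",
     "Week 1 games run through Labor Day weekend."],
))
```

The model can still misread or misquote the retrieved text, which is why checking the linked sources yourself still matters.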

(Disclosure: Ziff Davis, CNET's parent company, has filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis's copyrights in training and operating its AI systems.)

Don't expect ChatGPT to get everything right yet

Turley said GPT-5, the latest large language model under the hood of ChatGPT, is a "big improvement" in terms of hallucinations, but it's still far from perfect. "I am confident we will eventually solve hallucinations, and I am confident we are not going to do it in the next quarter," he said.

In my testing of GPT-5, I saw that it still makes some mistakes. When trying out the language model's new personalities, it got confused about a college football schedule and said games scheduled throughout the fall would all happen in September.

Make sure you check any information you get from a chatbot against a reliable source of truth. That could be an expert, like a doctor, or a trusted online source. Even if the chatbot gives you information with a link to a source, don't trust that the bot accurately summarized that source. It may have distorted the facts on their way to you.

If you're going to make a decision based on some information, unless you genuinely don't care about getting it right, verify what the AI tells you.






