People are struggling to get useful health advice from chatbots.

With long waiting lists and rising costs in overburdened healthcare systems, many people are turning to AI-powered chatbots like ChatGPT for medical self-diagnosis. About 1 in 6 adults already use chatbots for health advice at least monthly, according to one recent survey.

But placing too much trust in chatbots can be risky, in part because people struggle to know what information to give a chatbot in order to get the best possible health recommendations, according to a recent Oxford-led study.

“The study revealed a two-way communication breakdown,” Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study, told TechCrunch. “Those using chatbots didn’t make better decisions than participants who relied on traditional methods like online searches or their own judgment.”

For the study, the authors recruited around 1,300 people in the U.K. and gave them medical scenarios written by a group of doctors. Participants were tasked with identifying potential health conditions in the scenarios and using chatbots, as well as their own methods, to figure out possible courses of action (e.g., seeing a doctor or going to the hospital).

Participants used GPT-4o, the default AI model powering ChatGPT, as well as Cohere’s Command R+ and Meta’s Llama 3, which once underpinned the company’s Meta AI assistant. According to the authors, the chatbots not only made participants less likely to identify a relevant health condition, but also made them more likely to underestimate the severity of the conditions they did identify.
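
To make the setup concrete, here is a minimal sketch, not the study’s actual code, of the kind of single-turn query a participant might pose to GPT-4o through OpenAI’s Python SDK; the scenario wording and follow-up question are hypothetical illustrations, not the study’s materials.

```python
# Minimal sketch of a participant-style health query to a chatbot.
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
# The scenario text below is hypothetical, not from the study.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env variable

# A hypothetical scenario of the sort the study's doctors wrote.
scenario = (
    "For the past two days I have had a throbbing headache, "
    "a stiff neck, and sensitivity to light."
)

response = client.chat.completions.create(
    model="gpt-4o",  # the default model powering ChatGPT, per the study
    messages=[
        {
            "role": "user",
            "content": scenario
            + " What condition might this be, and should I see a doctor "
              "or go to the hospital?",
        }
    ],
)

print(response.choices[0].message.content)
```

As the study found, the quality of the answer depends heavily on which details the user includes in a prompt like this, and on how the user interprets the reply.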

Mahdi said that participants often omitted key details when querying the chatbots, or received answers that were difficult to interpret.

“[T]he responses they received [from the chatbots] frequently combined good and poor recommendations,” he added. “Current evaluation methods for chatbots do not reflect the complexity of interacting with human users.”

The results come as tech companies increasingly push AI as a way to improve health outcomes. Apple is reportedly developing an AI tool that can give advice on exercise, diet, and sleep. Amazon is exploring an AI-based way to analyze medical databases for “social determinants of health.” And Microsoft is helping build AI to triage messages sent from patients to care providers.

But as TechCrunch has reported previously, both professionals and patients are mixed as to whether AI is ready for higher-risk health applications. The American Medical Association recommends against physicians using chatbots like ChatGPT to assist with clinical decisions, and major AI companies, including OpenAI, warn against making diagnoses based on their chatbots’ outputs.

“We would recommend relying on trusted sources of information for health care decisions,” Mahdi said. “Current evaluation methods for chatbots do not reflect the complexity of interacting with human users. Like clinical trials of new medications, chatbot systems should be tested in the real world before being deployed.”


