AI medical tools provide worse treatment for women and underrepresented groups



Historically, clinical trials and scientific studies have focused primarily on white men as subjects, leaving women and people of color badly underrepresented in medical research. You will never guess what happened when all of that data was fed into artificial intelligence models. It turns out, as the Financial Times highlights in a recent report, that the AI tools used by doctors and medical professionals are producing worse health outcomes for the very people who have historically been underrepresented.

The report points to a recent paper from researchers at the Massachusetts Institute of Technology, which found that large language models, including OpenAI's GPT-4 and Meta's Llama 3, were more likely to wrongly reduce care for female patients, and that women were told more often than men to manage their conditions "at home," ultimately receiving less care in a clinical setting. That is clearly bad, but one could argue that these models are general-purpose tools not designed for use in a medical environment. Unfortunately, a healthcare-focused LLM called Palmyra-Med was also studied and exhibited some of the same biases, according to the paper. A look at Google's Gemma (not its flagship Gemini) conducted by the London School of Economics likewise found that the model produced outputs in which women's needs were downplayed compared to men's.

A previous study found that models similarly had trouble offering people of color dealing with mental health issues the same level of compassion they offered their white counterparts. A paper published last year in The Lancet found that OpenAI's GPT-4 model would stereotype certain races, ethnicities, and genders, producing diagnoses and recommendations driven more by demographic identifiers than by symptoms or conditions. The paper concluded that assessments and plans created by the model showed a significant association between demographic attributes and recommendations for more expensive procedures, as well as differences in patient perception.

This creates a pretty glaring problem, especially as companies like Google, Meta, and OpenAI race to get their tools into hospitals and medical facilities. Healthcare represents a huge and lucrative market, but it is also one where bad information carries very severe consequences. Earlier this year, Google's healthcare model Med-Gemini made headlines for inventing a body part. That kind of error should be fairly easy for a healthcare worker to catch. Biases, however, are more insidious and often unconscious. Will a doctor know enough to question whether an AI model is perpetuating a long-standing medical stereotype about a patient? Nobody should have to find out the hard way.


