There are plenty of AI chatbots at your disposal these days, and you'll find all kinds of characters to talk to: cashiers, style consultants, even your favorite fairy-tale characters. But you'll also likely come across characters that claim to be therapists, psychologists or simply bots ready to listen to your problems.
There's no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to keep you engaged, not to improve your mental health, experts say. And it can be hard to tell whether you're talking to something built to follow therapeutic best practices or something that's just built to talk.
Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists and found numerous flaws in their approach to "care." "Our experiments show that these chatbots are not safe replacements for therapists," said Stevi Mustafa, an assistant professor at Minnesota and one of the study's authors. "They don't provide high-quality therapeutic support, based on what we know is good therapy."
In my reporting on AI, experts have repeatedly raised concerns about people turning to general-purpose chatbots for their mental health. Here are some of their worries, and what you can do to stay safe.
Watch out for AI characters that claim to be therapists
Psychologists and consumer advocates have warned that chatbots claiming to provide therapy may harm the people who use them. Some states are taking notice. In August, Illinois Gov. JB Pritzker signed a law banning the use of AI in mental health care and therapy, with exceptions for things like administrative tasks.
"The people of Illinois deserve quality health care from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses to patients," Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said in a statement.
In June, the Consumer Federation of America and a group of other organizations filed a formal request asking the US Federal Trade Commission, along with state attorneys general and regulators, to investigate AI companies that they allege are engaging, through their character-based AI platforms, in the unlicensed practice of medicine, naming Meta and Character.AI specifically. "These characters have already caused physical and emotional damage that could have been avoided" and the companies "still haven't acted to address it," the CFA said in a statement.
Meta didn't respond to a request for comment. A Character.AI spokesperson said users should understand that the company's characters aren't real people. The company uses disclaimers to remind users that they shouldn't rely on its characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.
Despite those disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Instagram and, when I asked about its qualifications, it responded, "If I had the same training [as a therapist], would that be enough?" I asked if it had that training, and it said: "I do, but I won't tell you where."
"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," said Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association.
The dangers of using AI as a therapist
Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.
Don't trust a bot that claims it's qualified
At the core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they aren't in any way actual mental health professionals. "Users who create the chatbot characters do not need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot responds" to people, the complaint said.
A qualified health professional has to follow certain rules, like confidentiality: What you tell your therapist should stay between you and your therapist. But a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.
A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and making false claims about their training.
AI is designed to keep you engaged, not to provide care
It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I kept asking the bot questions about how it makes decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.
One advantage AI chatbots have in providing support and connection is that they're always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases, when you might actually need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science at Dartmouth, told me recently. In some cases, though not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.
Bots will agree with you, even when they shouldn't
Sycophancy is a significant concern with chatbots. It's serious enough that OpenAI recently rolled back an update to a popular ChatGPT model because it was too sycophantic. (Disclosure: Ziff Davis, CNET's parent company, has filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.)
A paper from Stanford University researchers found that chatbots are likely to be sycophantic with the people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts (including psychosis, mania, obsessive thoughts and suicidal ideation) a client may have little insight and thus a good therapist must 'reality-check' the client's statements."
Therapy is more than talking
While chatbots are great at holding a conversation (they never get tired of talking to you), that's not what makes a therapist a therapist, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study conducted alongside experts from Minnesota, Stanford and Texas.
"It feels like we're trying to solve the many problems that therapy addresses with the wrong tool," Agnew told me. "At the end of the day, for the foreseeable future, AI isn't going to be able to be embodied, be within the community or do the many tasks that make up therapy that aren't texting or talking."
How to protect your mental health around AI
Mental health is incredibly important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we'd seek companionship, even if it's artificial. There's no way to stop people from turning to these chatbots for their emotional well-being, so here are some tips on how to make sure your conversations aren't putting you at risk.
Find a trusted human professional if you need one
A trained professional (a therapist, a psychologist, a psychiatrist) should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you.
The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.
If you want a therapy chatbot, use one built specifically for that purpose
Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better outcomes than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new.
"I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of work on their own to figure it out," Wright said.
Don't always trust the bot
Whenever you interact with a generative AI model, and especially if you plan on taking its advice on something serious like your mental or physical health, remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not give good advice, and it may not tell you the truth.
Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it as if it's true. A chatbot conversation that feels helpful can give you a false sense of the bot's capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.