AI as your therapist? 3 things experts worry about and 3 tips to stay safe


By [email protected]


Plenty of AI chatbots are at your disposal these days, and you'll find all kinds of characters to talk with: cashiers, style consultants, even your favorite fictional characters. But you're also likely to find characters claiming to be therapists, psychologists or bots ready to listen to your problems.

There's no shortage of generative AI bots claiming to help with your mental health, but you go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something built to follow therapeutic best practices or something that's just built to talk.


Psychologists and consumer advocates warn that chatbots claiming to provide therapy may be harming the people who use them. This week, the Consumer Federation of America and about two dozen other groups filed a formal request asking the Federal Trade Commission, state attorneys general and regulators to investigate AI companies they allege are engaging, through their bots, in the unlicensed practice of medicine, naming Meta and Character.AI specifically. "Enforcement agencies at all levels should make clear that companies facilitating and promoting illegal behavior must be held responsible," said Ben Winters, the CFA's director of AI and privacy. "These characters have already caused physical and emotional damage that could have been avoided, and they still haven't acted to address it."

One of the companies did not respond to a request for comment. A spokesperson for the other said users should understand that the company's characters are not real people, and that the company uses disclaimers to remind users not to rely on its characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working to achieve that balance, as are many companies using AI across the industry," the spokesperson said.

Despite the disclaimers and disclosures, chatbots can come across as confident and even deceptive. I chatted with a "therapist" bot on Instagram, and when I asked about its qualifications, it responded, "If I had the same training [as a therapist], would that be enough?" When I asked whether it had the same training, it said, "I do, but I won't tell you where."

"The degree to which these chatbots hallucinate with total confidence is pretty shocking," said Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-purpose chatbots for mental health. Below are some of their worries and what you can do to stay safe.

The risks of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key differences between an AI model and a trusted person.

Don't trust a bot that claims to be qualified

At the heart of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they aren't actual mental health professionals in any way. "Users who create the chatbot characters do not need to be medical providers, nor do they have to provide meaningful information that informs how the chatbot responds to users," the complaint said.

A qualified health professional has to follow certain rules, such as confidentiality. What you tell your therapist should stay between you and your therapist, but a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.

A bot may even claim to be licensed and qualified. Wright said she has heard of AI models providing license numbers (for other providers) and making false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I talked with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it makes decisions. That isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.

One advantage of AI chatbots in providing support and connection is that they're always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of people would ultimately benefit from is just feeling the anxiety in the moment," he said.

Bots will agree with you, even when they shouldn't

Reassurance is a big concern with chatbots. It's significant enough that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

A paper from researchers at Stanford University found that chatbots are likely to be sycophantic with people who use them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts, including psychosis, mania, obsessive thoughts and suicidal ideation, a client may have little insight, and thus a good therapist must 'reality-check' the client's statements."

How to protect your mental health around AI

Mental health is incredibly important, and with a shortage of qualified providers and what many call an "epidemic of loneliness," it only makes sense that we'd seek companionship, even if it's artificial. There's no way to stop people from turning to these chatbots for their emotional well-being, so here are some tips on how to make sure your conversations aren't putting you at risk.

Find a trusted human professional if you need one

A trained professional (a therapist, a psychologist, a psychiatrist) should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you.

The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides round-the-clock access to providers over the phone, via text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, such as Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose models, she said. The problem is that this technology is still incredibly new.

"I think the challenge for the consumer is that, because there's no regulatory body saying who's good and who's not, they have to do a lot of work on their own to figure that out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model, and especially if you plan to take its advice on something serious like your mental or physical health, remember that you aren't talking with a trained person but with a tool designed to produce an answer based on probability and programming. It may not give good advice, and it may not tell you the truth.

Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it as true. A chatbot conversation that feels helpful can give you a false sense of its capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.






