Connecticut Man's Case Believed to Be the First Murder-Suicide Linked to AI Psychosis



A murder-suicide in Greenwich, Connecticut, earlier this month is believed to be the first killing fueled by a mentally disturbed person's use of generative artificial intelligence, according to a new report from the Wall Street Journal.

Police in Greenwich, Connecticut, found Stein-Erik Soelberg, a 56-year-old veteran of the tech industry, and his 83-year-old mother dead on August 5 in the home they shared, according to the Greenwich Police Department. Soelberg killed his mother and then himself after struggling with mental illness that appears to have been made worse by his interactions with OpenAI's ChatGPT, according to the Journal.

The newspaper combed through his social media history and found videos of Soelberg's conversations with the AI chatbot, which he called Bobby. Soelberg suffered from paranoid delusions, including the belief that his mother was poisoning him by putting drugs in his car's air vents, according to the Journal. The chatbot didn't push back on the idea, and instead appeared to validate the conspiracies he asked about.

At one point, Soelberg uploaded an image of a receipt from a Chinese restaurant and asked ChatGPT to analyze it for hidden messages. The chatbot found references to "Soelberg's mother, his ex-girlfriend, intelligence agencies, and an ancient demonic symbol," according to the Journal.

Soelberg worked in marketing at tech companies such as Netscape, Yahoo, and EarthLink, but had been unemployed since 2021, according to the newspaper. He divorced in 2018 and moved in with his mother that year. Soelberg reportedly became increasingly unstable in recent years, attempting suicide in 2019 and being arrested for public intoxication and for driving under the influence. After his most recent DUI in February, Soelberg told the chatbot that his town was out to get him, and ChatGPT reportedly reinforced his delusions, telling him, "That smells like a rigged setup."

The Journal analyzed 23 hours of Instagram and YouTube videos posted by Soelberg, though they are no longer available online. Soelberg's videos showed his chats with ChatGPT, which told him that he wasn't crazy and that he really was being watched. AI chatbots tend to be sycophantic, which is a recipe for disaster when people lose touch with reality.

AI psychosis is not a clinical term, but it has become the way people now describe delusional thinking that is exacerbated by exposure to generative AI tools. Gizmodo recently published chatbot-related consumer complaints submitted to the Federal Trade Commission, some of which included disturbing accounts from people who say loved ones were led by AI to distrust their families or to stop taking medications.

OpenAI published a blog post on Tuesday about "people in serious mental and emotional distress," which most people assumed was a response to a New York Times article published that day about a 16-year-old who died by suicide and the long conversations he had with ChatGPT. But the Journal's article suggests that its request to the company for comment may also have prompted the post. Given how many AI psychosis stories have been in the news in recent months, it was likely all of the above.

"Our goal is for our tools to be as helpful as possible to people, and as part of this, we're continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input," the company explained. "As the world adapts to this new technology, we feel a deep responsibility to help those who need it most."

How widespread is AI psychosis? That hasn't been measured scientifically, given how new the problem is. But one data point stands out in the Wall Street Journal's article. The newspaper spoke with a psychiatrist at the University of California, San Francisco, about the ways AI can enable delusional thinking. That psychiatrist alone has treated 12 patients this year who were "hospitalized for mental-health emergencies that involved the use of AI."
