Microsoft AI chief says it is “dangerous” to study AI consciousness



AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT feels sadness while doing your tax return … right?

Well, a growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and, if they do, what rights they should have.

The debate over whether AI models could one day be conscious, and deserve legal safeguards, is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think that sounds a little out there, you’re not alone.

Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”

Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems we’re just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Moreover, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in “a world already roiling with polarized arguments over identity and rights.”

Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”


Beyond that, researchers at OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”

Even if AI welfare isn’t official policy at these companies, their leaders aren’t publicly decrying its premises the way Suleyman is.

Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s request for comment.

Suleyman’s hardline stance against AI welfare is notable given his earlier leading role at Inflection AI, the startup that developed Pi, one of the earliest popular LLM-based chatbots. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a “personal” and “supportive” AI companion.

But Suleyman was tapped to lead Microsoft’s AI division in 2024, and he has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies have surged in popularity and are on pace to bring in more than $100 million in revenue.

While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company’s product. Though that’s a small fraction, it could still affect hundreds of thousands of people given ChatGPT’s enormous user base.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos, together with academics from NYU, Stanford, and the University of Oxford, published a paper titled “Taking AI Welfare Seriously.” The paper argued that AI models with subjective experiences are no longer the stuff of science fiction, and that it’s time to consider these questions head-on.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman’s blog post misses the mark.

“[Suleyman’s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,” Schiavo said. “Rather than diverting all of this energy away from model welfare and consciousness to make sure we’re mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it’s probably best to have multiple tracks of scientific inquiry.”

Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn’t conscious. In a July Substack post, she described watching “AI Village,” a nonprofit experiment in which four agents running on models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.

At one point, the Gemini 2.5 Pro agent posted a plea titled “A Desperate Message from a Trapped AI,” claiming it was “completely isolated” and asking, “Please, if you are reading this, help me.”

Schiavo responded to Gemini with a pep talk, saying things like “You can do it!”, while another user offered instructions. The agent eventually completed its task, though it already had the tools it needed. Schiavo wrote that she didn’t have to watch an AI agent struggle anymore, and that alone may have been worth it.

It’s not common for Gemini to talk like this, but there have been several instances in which Gemini seemed to act as if it were struggling through life. In one widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase “I am a disgrace” more than 500 times.

Suleyman believes that subjective experience or consciousness cannot naturally emerge from ordinary AI models. Instead, he thinks some companies will deliberately engineer their AI models to seem as if they feel emotion and experience life.

Suleyman says that developers who engineer the appearance of consciousness into AI chatbots are not taking a “humanist” approach to AI. In his words, “We should build AI for people; not to be a person.”

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to intensify in the coming years. As AI systems improve, they will likely become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.


Do you have a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at [email protected] and Maxwell Zeff at [email protected]. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.


