AI godfather warns of human extinction risk within 10 years from hyper-capable machines with their own “preservation goals”


By [email protected]



The so-called “godfather of artificial intelligence,” Yoshua Bengio, argues that the technology companies racing to dominate AI could bring us closer to our own extinction by creating machines with their own self-preservation goals.

Bengio, a professor at the University of Montreal known for his foundational work on deep learning, has warned for years about the threats posed by superintelligent AI, but the rapid pace of development has continued despite his warnings. In the past six months, OpenAI, Anthropic, Elon Musk’s xAI, and Google’s Gemini have all released new models or upgrades as they try to win the AI race. OpenAI CEO Sam Altman has predicted that AI will surpass human intelligence by the end of the decade, while other tech leaders have said that day may come even sooner.

Bengio, however, argues that this rapid development is itself a potential threat.

“If we build machines that are smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is more intelligent than us,” he told the Wall Street Journal.

Because they are trained on human language and behavior, these advanced models can persuade, and even manipulate, humans in order to achieve their goals. Yet the goals of AI models may not always align with human goals, Bengio said.

“Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, it might choose the death of the human to preserve its goals,” he said.

A call for AI safety

Several examples over the past few years have shown that AI can convince humans to believe things that are not real, even people with no history of mental illness. On the other hand, there is some evidence that AI can itself be convinced, using human persuasion techniques, to give responses it is normally prohibited from giving.

For Bengio, all of this adds to the evidence that independent third parties need to take a closer look at AI companies’ safety methodologies. In June, Bengio also launched LawZero, a nonprofit with $30 million in funding, to build safe “non-agentic” AI that can help ensure the safety of the other systems created by large technology companies.

Otherwise, Bengio expects we could begin seeing major risks from AI models within five to ten years, but he cautioned that humans should be prepared in case those risks arrive sooner than expected.

“The thing with catastrophic events like extinction, and even less radical events that are still catastrophic, like destroying our democracies, is that they’re so bad that even if there were only a 1% chance of them happening, it’s not acceptable,” he said.





