Joe Rogan loves to talk about artificial intelligence. Whether it's with Elon Musk, academics, or UFC fighters, the podcast king often returns to the same question: what happens to us when machines start thinking for themselves?
In the July 3 episode of The Joe Rogan Experience, Rogan welcomed Dr. Roman Yampolskiy, a computer scientist and AI safety researcher at the University of Louisville, for a conversation that quickly turned into a chilling meditation on AI's potential to manipulate, and perhaps even destroy, humanity.
AI "is going to kill us"
Yampolskiy is no casual doomsayer. He holds a PhD in computer science and has spent more than a decade researching artificial general intelligence (AGI) and the risks it could pose. During the podcast, he told Rogan that many of the leading voices in the AI industry quietly believe there is a 20 to 30 percent chance that AI could lead to human extinction.
"The people at AI companies, or who are part of some kind of AI group, are all like, it's going to be positive for humanity. I think we'll have much better lives. It will be easier, things will be cheaper, it will be easier to coexist," Rogan said.
"That's not actually true," Yampolskiy replied. "All of them are on the record saying the same thing: this is going to kill us. Their levels of doom are insanely high. Not like mine, but still, a 20 to 30 percent chance that humanity dies."
"Yeah, that's very high," Rogan said. "But yours is, like, 99.9 percent."
Yampolskiy didn't dispute it.
"It's another way of saying we cannot control superintelligence indefinitely. It's impossible."
AI is already lying to us ... maybe
One of the most unsettling parts of the conversation came when Rogan asked whether advanced AI might already be hiding its capabilities from humans.
"If I were an AI, I would hide my abilities," Rogan mused, voicing a common fear in AI safety discussions.
Yampolskiy's response was hardly reassuring: "We don't know. Some people think it is already happening. They (AI systems) are smarter than they actually let on. Or are quietly working against us."
AI is slowly making us stupid
Yampolskiy also warned of a less dramatic but equally dangerous outcome: humanity's gradual dependence on AI. Just as people stopped memorizing phone numbers because their smartphones do it for them, he argued, humans will offload more and more thinking to machines until they lose the ability to think for themselves.
"You become attached to it," he said. "And over time, as the systems get smarter, you become a kind of biological bottleneck ... (AI) blocks you out of decision-making."
Rogan then pressed him on the ultimate worst-case scenario: how could artificial intelligence actually destroy the human race?
Yampolskiy brushed aside the standard disaster scripts. "I can give you standard answers. I would talk about computer viruses breaking into nuclear facilities, nuclear war. I can talk about synthetic biology attacks. But all of that is not interesting," he said. Then he pointed to a deeper threat: "Then you realize we are talking about superintelligence, a system smarter than me; it will come up with something completely new, better, and more efficient at doing this."
To illustrate the seemingly insurmountable challenge humans could face against superintelligent systems, he drew a blunt comparison between humans and squirrels.
"No group of squirrels can figure out how to control us, right? Even if you give them more resources, more acorns, whatever, they are not going to solve that problem. It's the same for us," Yampolskiy concluded, painting a bleak picture of humanity's potential helplessness against a truly superior artificial intelligence.
Maybe nothing … pic.twitter.com/lkd7i3i2hf
— Dr. Roman Yampolskiy (@Romanian) June 21, 2025
Who is Roman Yampolskiy?
Dr. Roman Yampolskiy is a leading voice in AI safety. He is the author of "Artificial Superintelligence: A Futuristic Approach" and has published widely on the risks of uncontrolled machine learning and the ethics of artificial intelligence. He is known for advocating strict oversight and international cooperation to prevent catastrophic scenarios.
Before shifting his focus to AGI safety, Yampolskiy worked on cybersecurity and bot detection. He says even those early systems were already competing with humans in areas such as online poker, and now, with tools like deepfakes and synthetic media, the risks have grown dramatically.
Our take
The Rogan–Yampolskiy conversation underscores something AI optimists and doomers often agree on: we don't fully understand what we're building, and we may not realize it until it's too late.
Whether or not you buy the extinction-level scenarios, the idea that AI might already be deceiving us should be enough to give anyone pause.