Elon Musk has an optimistic vision of life with artificial intelligence: the technology will take over all our jobs, while a "universal high income" means anyone can access a theoretical abundance of goods and services. Should Musk's rosy dream become reality, though, there will be a deep existential reckoning.
"The question will really be one of meaning," Musk said at the Viva Technology conference in May 2024. "If the computer can do, and the robots can do, everything better than you ... does your life have meaning?"
But most industry leaders aren't asking themselves that question about AI's endgame, according to Nobel laureate and "Godfather of AI" Geoffrey Hinton. When it comes to developing AI, Big Tech is less interested in the technology's long-term consequences and more interested in quick results.
"For the owners of the companies, what's driving the research is short-term profits," Hinton, a professor emeritus of computer science at the University of Toronto, told Fortune.
For the developers behind the technology, Hinton said, the focus is similarly on the work directly in front of them, not on the end result of the research itself.
"The researchers are interested in solving problems that pique their curiosity. They don't really start with the goal of, what is the future of humanity?" Hinton said.
"We have these little goals, like, how would you make these things? Or, how would you make a computer able to recognize things in images? How can you make a computer able to create convincing videos?" he added. "That's what really drives the research."
Hinton has long warned of the dangers of AI developed without guardrails and deliberation, estimating a 10% to 20% chance that the technology wipes out humans after it develops superintelligence.
In 2023, 10 years after selling his company DNNresearch to Google, Hinton left the tech giant, wanting to speak freely about the technology's dangers and fearing the inability to "prevent bad actors from using it for bad things."
Hinton's picture of superintelligent AI
For Hinton, the risks of AI fall into two categories: the risk the technology itself poses to humanity's future, and the consequences of AI being wielded by people with bad intentions.
"There's a big distinction between two different types of risk," he said. "There's the risk of bad actors misusing AI, and that's already here. That's already happening with things like fake videos and cyberattacks, and may soon happen with viruses. That's quite different from the risk of AI itself becoming a bad actor."
Financial institutions such as Singapore-based Ant International, for example, have sounded the alarm about the rising threat of deepfake-enabled fraud. Tianyi Zhang, general manager of cyber risk and security at Ant International, told Fortune the company found that more than 70% of new enrollments in some markets were potential deepfake attempts.
"We have identified more than 150 types of deepfake attacks," he said.
Beyond calling for more regulation, Hinton said the work to address AI's potential for misinformation is a steep uphill battle, because each problem with the technology requires its own solution. He envisions a future in which the provenance of videos and images can be verified to help combat deepfakes.
Just as printers added their names to their work after the advent of the printing press hundreds of years ago, media sources will similarly need to find a way to attach their signatures to their original works. But Hinton said such fixes can only go so far.
"That problem can probably be solved, but solving that problem doesn't solve the other problems," he said.
As for the risks posed by AI itself, Hinton believes tech companies need to fundamentally change how they view their relationship with the technology. He said that once AI achieves superintelligence, it will not only surpass human capabilities but also have a strong drive to survive and to gain more control. The current framing of AI, in which humans can control the technology, will no longer be relevant, he said.
Hinton instead proposes that AI models be imbued with "maternal instincts," so that they treat less powerful humans with compassion rather than seek to control them.
He conceded that the framing invokes a stereotypical ideal of traditional femininity, but said the only example he can cite of a more intelligent being under the influence of a less intelligent one is a baby controlling its mother.
"So I think that's a better model we can aim for with superintelligent AI," Hinton said. "They'll be the mothers, and we'll be the babies."