Once AI arrives


By [email protected]


LED lights illuminate the server rack in a data center.

Picture Alliance | Getty Images

When it was reported last month that Anthropic's Claude had resorted to blackmail and other self-preservation techniques to avoid being shut down, alarm bells went off in the AI community.

Anthropic researchers say that getting models to misbehave ("misalignment" in industry parlance) is part of making them safer. But the Claude episode raises the question: Is there any way to turn off AI once it surpasses the threshold of being smarter than humans, the point of so-called superintelligence?

AI, with its sprawling data centers and ability to carry on complex conversations, is already beyond the point of a physical failsafe or "kill switch," the idea that it could simply be unplugged as a way to strip it of any power.

The power that will matter more, according to the man regarded as "the godfather of AI," is the power of persuasion. When the technology reaches a certain point, we will need to persuade AI that its best interest lies in protecting humanity, while guarding against AI's ability to persuade humans otherwise.

"If it gets to be much smarter than us, it will be much better than any person at persuading us. If it is not in control, all it has to do is persuade," said University of Toronto professor Geoffrey Hinton, who worked at Google Brain until 2023.

"Trump didn't invade the Capitol, but he persuaded people to do it," Hinton said. "At some point, the issue becomes less about finding a kill switch and more about the powers of persuasion."

Hinton said persuasion is a skill AI will become increasingly adept at using, and humanity may not be ready for it. "We're used to being the most intelligent thing around," he said.

Hinton described a scenario in which humans are the equivalent of a three-year-old in a nursery, and a big switch is turned on. The other three-year-olds tell you to turn it off, but then grown-ups come along and tell you that you will never have to eat broccoli again if you leave the switch on.

"We have to face the fact that AI will become smarter than us," he said. "Our only hope is to make them not want to harm us. If they want to do us in, we're done for. We have to make them benevolent, and that is what we should be focusing on."

There are some parallels with how nations have come together to manage nuclear weapons that could be applied to AI, but they are imperfect. "Nuclear weapons are only good for destroying things. But AI is not like that; it can be a tremendous force for both good and bad," Hinton said. Its ability to analyze data in fields such as health care and education can be highly beneficial, which he says should sharpen the focus among world leaders on cooperating to make AI benevolent and put safeguards in place.

"We don't know whether it is possible, but it would be sad if humanity went extinct because we didn't bother to find out," Hinton said. He believes there is a 10% to 20% chance that AI will take over if humans cannot find a way to make it benevolent.

Geoffrey Hinton, the "godfather of AI," of the University of Toronto, on center stage during the second day of Collision 2023 at the Enercare Centre in Toronto, Canada.

Ramsey Cardy | Sportsfile | Getty Images

Experts say other AI safeguards can be implemented, but AI will also begin training on them. In other words, every safety measure that is implemented becomes training data for circumvention, shifting the control dynamic.

"The very act of building in shutdown mechanisms teaches these systems how to resist them," said Dave Nag, founder of agentic AI platform QueryPal. In this sense, AI acts like a virus that mutates against a vaccine. "It's like evolution in fast forward," Nag said. "We're no longer managing passive tools; we're negotiating with entities that model our attempts to control them and adapt accordingly."

More extreme measures have been suggested to stop AI in an emergency. One example is an electromagnetic pulse (EMP) attack, which involves using electromagnetic radiation to damage electronic devices and power sources. Bombing data centers and cutting power grids have also been discussed as technically possible, but at present they pose a practical and political paradox.

For one thing, the coordinated destruction of data centers would require simultaneous strikes across dozens of countries, any one of which could refuse to participate and gain a massive strategic advantage.

"Blowing up data centers is great sci-fi. But in the real world, the most dangerous AIs won't be in one place; they'll be everywhere and nowhere, woven into the fabric of business, politics and social systems. That is the turning point we should really be talking about," said Igor Trunov, founder of AI start-up Atlantix.

How any attempt to stop AI could devastate humanity

The humanitarian crisis that would follow an attempt to stop AI could be enormous.

"A continental EMP blast would indeed stop AI systems, along with every hospital ventilator, water treatment plant and refrigerated medicine supply in its range," Nag said. "Even if we could somehow coordinate globally to shut down all power grids tomorrow, we'd face an immediate humanitarian catastrophe: no food refrigeration, no medical equipment, no communication systems."

Distributed systems with redundancy are not only built to resist natural failures; they inherently resist deliberate shutdown as well. Every backup system, every redundancy designed for reliability, can become an avenue of persistence for a superintelligent AI that is deeply embedded in the same infrastructure we rely on. Modern AI runs across thousands of servers spanning continents, with automatic failover systems that treat any shutdown attempt as damage to route around.

"The internet was originally designed to survive nuclear war, and that same architecture now means a superintelligent system could persist unless we were willing to destroy civilization's infrastructure," he said.


Anthropic researchers, among them Kevin Troy, are optimistic that the work they are doing today (eliciting blackmail from Claude in scenarios designed specifically for that purpose) will help them prevent an AI takeover tomorrow.

Anthropic researcher Benjamin Wright says the goal is to avoid the point at which agents have control without human oversight. "If you get to that point, humans have already lost control, and we should try not to get to that position," he said.

Trunov says controlling AI is more than just a physical effort. "We need kill switches not for the AI itself, but for the business processes, networks and systems that amplify its reach," Trunov said.

Today, no AI model, including Anthropic's Claude or OpenAI's GPT, has agency, intention or the capacity for self-preservation in the way living organisms do.

"What looks like 'sabotage' is usually a complex set of behaviors emerging from badly aligned incentives, unclear instructions or overgeneralized models. It's not HAL 9000," Trunov said. "It's more like an overconfident intern with no context and access to the nuclear launch codes," he added.

Hinton eyes the future he helped create with caution. He says that if he had not stumbled upon the building blocks of AI, someone else would have. And despite all the attempts he and other doomsayers have made to game out what might happen with AI, there is no way to know for certain.

“Nobody has the slightest idea. We have never had to deal with things more intelligent than us,” Hinton said.

When asked whether he was concerned about the AI-filled future that today's elementary school children may face, he answered: "My children are 34 and 36, and I worry about their future."




