Dario Amodei. The AI safety contingent was growing wary of some of Sam Altman's behavior. Shortly after OpenAI's deal with Microsoft was inked in 2019, several of them were stunned to discover the extent of what Altman had promised Microsoft in terms of which technologies it would receive in return for its investment. The terms of the deal did not align with what they had understood from Altman. If AI safety problems actually did arise in OpenAI's models, they worried, those commitments would make it far more difficult, if not impossible, to prevent the models' deployment. Amodei began to have serious doubts about Altman's sincerity.
"We're all pragmatic people," says one of the group. "We're obviously raising money; we're going to do commercial things. It might look very reasonable, if you're someone who makes a lot of deals like Sam, to think, all right, let's make a deal, let's trade one thing, and we'll trade the next thing. And then if you're a person like me, you think, we're trading away something we don't fully understand."
This was against a backdrop of growing tensions across the company over a range of issues. Within the AI safety group, the concern centered on what they saw as mounting evidence that powerful but misaligned systems could lead to disastrous outcomes. One bizarre experience in particular left several of them rattled. In 2019, on a model trained after GPT-2 with roughly twice the number of parameters, a group of researchers had begun advancing the AI safety work Amodei wanted: reinforcement learning from human feedback (RLHF), a technique for steering the model toward generating pleasant, positive content and away from anything offensive.
But late one night, a researcher pushed an update that included a single typo in his code before leaving the RLHF process to run overnight. That typo was an important one: a minus sign flipped to a plus sign, which made the RLHF process run in reverse, pushing GPT-2 to generate more offensive content instead of less. By the next morning, the typo had done its damage, and GPT-2 was completing every prompt with extremely lewd, sexually explicit language. It was funny, and also concerning. After identifying the error, the researcher pushed a fix to OpenAI's code base with a comment: Let's not make a utility minimizer.
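To make the sign flip concrete, here is a minimal, hypothetical sketch (not OpenAI's actual code) of a policy-gradient-style RLHF loss, where negating one term turns a reward maximizer into a reward minimizer. The function name, values, and structure are illustrative assumptions only.

```python
# Hypothetical sketch of the failure mode described above, not OpenAI's code.
# In a policy-gradient-style RLHF step, the loss is the negative of the
# reward-weighted log-probability; an optimizer that minimizes this loss
# therefore maximizes expected reward.

def rlhf_loss(log_prob: float, reward: float, sign_bug: bool = False) -> float:
    """Per-sample loss for one generated completion.

    With the correct minus sign, lowering the loss raises the probability
    of high-reward (inoffensive) text. Flipping the sign makes the same
    optimizer push toward low-reward (offensive) text instead.
    """
    if sign_bug:
        return +reward * log_prob   # the overnight typo: a utility minimizer
    return -reward * log_prob       # intended behavior: a utility maximizer


# Example: a completion the reward model scores as "nice" (reward = 1.0).
log_prob = -2.3  # log-probability the policy assigned to that completion
print(rlhf_loss(log_prob, reward=1.0))                 # 2.3  -> descent raises log_prob
print(rlhf_loss(log_prob, reward=1.0, sign_bug=True))  # -2.3 -> descent lowers log_prob
```

Under gradient descent, the buggy version rewards exactly the completions the reward model penalizes, which is why a single flipped character could invert the model's behavior overnight.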
Fueled in part by the realization that scaling alone could produce further AI advances, many employees also worried about what would happen if other companies discovered OpenAI's secret. "The secret of how our stuff works could be written on a grain of rice," they would say to each other, meaning the single word: scale. For the same reason, they worried about powerful capabilities falling into the hands of bad actors. Leadership leaned into this fear, frequently invoking the threat of China, Russia, and North Korea and emphasizing the need for AGI development to stay in the hands of a US organization. At times this rankled employees who were not American. During lunches, they would ask, Why did it have to be a US organization? recalls a former employee. Why not one from Europe? Why not one from China?
During these sprawling discussions philosophizing about the long-term implications of AI research, many employees returned often to Altman's early analogies between OpenAI and the Manhattan Project. Was OpenAI really building the equivalent of a nuclear weapon? It was a strange contrast with the idealistic culture it had built so far as a largely academic organization. On Fridays, employees would wind down after a long week with music and wine nights, relaxing to the soothing sounds of a rotating cast of colleagues playing the office piano late into the night.