Former OpenAI policy lead criticizes the company for “rewriting the history” of AI safety

A prominent former OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for “rewriting the history” of its deployment approach to potentially risky AI systems.

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a “continuous path” that requires “iteratively deploying and learning” from AI technologies.

“In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT-2,” OpenAI wrote. “We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system.”

But Brundage argues that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was “100% consistent” with OpenAI’s iterative deployment strategy today.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent [with] OpenAI’s current philosophy of iterative deployment,” Brundage wrote in a post on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage, who joined OpenAI as a research scientist in 2018, served as the company’s head of policy research for several years. On OpenAI’s “AGI readiness” team, he had a particular focus on the responsible deployment of language generation systems such as ChatGPT, OpenAI’s AI chatbot platform.

GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems that power ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text at a level sometimes indistinguishable from that of humans.

While GPT-2 and its outputs may look basic today, they were state of the art at the time. Citing the risk of malicious use, OpenAI initially declined to release GPT-2’s source code, opting instead to give limited access to a demo.

The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there was no evidence the model could be abused in the ways OpenAI described. The AI-focused publication The Gradient went so far as to publish an open letter asking OpenAI to release the model, arguing that it was too technologically important to hold back.

OpenAI eventually released a partial version of GPT-2 six months after the model’s unveiling, followed by the full system several months after that. Brundage thinks this was the right approach.

“What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous?” he said in a post on X. “What’s the evidence this caution was ‘disproportionate’ ex ante? Ex post, it prob. would have been fine, but that doesn’t mean it was responsible to YOLO it [sic] given the information at the time.”

Brundage fears that OpenAI’s aim with the document is to set up a burden of proof where “concerns are alarmist” and “you need overwhelming evidence of imminent dangers to act on them.” This, he argues, is a “very dangerous” mentality for advanced AI systems.

“If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by pooh-poohing caution in such a lopsided way,” Brundage added.

OpenAI has historically been accused of prioritizing “shiny products” at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI disbanded its AGI readiness team, and a string of AI safety and policy researchers departed the company for competitors.

Competitive pressures have only increased. Chinese AI lab DeepSeek captured the world’s attention with its openly available R1 model, which matches OpenAI’s o1 “reasoning” model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek narrowed OpenAI’s technological lead, and said that OpenAI would “pull up some releases” to compete better.

There is a lot of money on the line. OpenAI loses billions annually, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI’s bottom line in the near term, but possibly at the expense of safety in the long term. Experts like Brundage question whether the trade-off is worth it.


