Artificial intelligence researchers from Openai, Google DeepMind, Anthropor, a wide coalition of companies and non -profit groups call for deeper achievement in technologies to monitor alleged ideas for thinking models of artificial intelligence in a Parking Posted on Tuesday.
A major feature of intelligence thinking models, such as Openai’s O3 and Deepseek’s R1They are Thought chains Cots – an external process in which artificial intelligence models operate through problems, similar to how humans use a scratch plate to work by asking the difficult mathematics. Thinking models are an essential technique for operating artificial intelligence agents, and the authors of the paper argue that Cot monitoring can be an essential way to maintain artificial intelligence factors because it becomes more widespread and capable.
The researchers said in the position sheet: “Cot monitoring provides a valuable addition to the safety measures of Frontier AI, which provides a rare glimpse of how artificial intelligence agents take.” “However, there is no guarantee that the current degree of vision will continue. We encourage the research community and AI’s boundaries to take advantage of COT monitoring and studying how it can be preserved.”
Placement paper requires the leadership of the developers of artificial intelligence models to study what makes Cots “monitoring” – in other words, what factors can increase or reduce transparency in how artificial intelligence models reach answers. The authors of the paper say that Cot monitoring may be an essential way to understand the models of male thinking, but note that it may be fragile, which warns of any interventions that can reduce its transparency or reliability.
The authors of the paper also invite the developers of artificial intelligence models to track the capacity to monitor the cradle and study how the method is one day as a safety measure.
The prominent sites of the sheet of paper, OPNAI Research, CEO of Safe Superintelligence, Elijah Sutsv, Nobel Jeffrey Hunton Prize, founder of Google Deepmind Shin Lim, Safety Consultant at Xai Dan Hendrycks, Machines Founder John Schulman. Among the first authors are leaders of the AI Security Institute in the United Kingdom, APOLLO Research, and other signatories from Metr, Amazon, Meta and Uc Berkeley.
The paper represents a moment of unity among many leaders of the artificial intelligence industry in an attempt to enhance research on the integrity of artificial intelligence. It comes at a time when technology companies are arrested in a fierce competition – which led Mita for researchers, senior researchers From Openai, Google DeepMind and Fanthropic with $ 1 million. Some of the most highly requested researchers are those who build artificial intelligence agents and male thinking models.
TECHRUNCH event
San Francisco
|
27-29 October, 2025
“We are at this critical time where we have this new thing that is thinking about a new series. It seems very useful, but this may go away within a few years if people really don’t focus on it,” said Bowen Baker, an Openai researcher who worked on the paper, in an interview with Techcrunch. “Publishing a paper like this, for me, is a mechanism for more research and attention on this topic before this happens.”
Openai has publicly released a preview of the first thinking model of artificial intelligence, O1, in September 2024. In the months that followed, the technology industry was quick to launch competitors who show similar capabilities, with some models of Google DeepMind and Xai, and show the most advanced performance in the standards.
However, there is a relatively little understandable about how thinking models work from artificial intelligence. While artificial intelligence laboratories have surpassed improving the performance of artificial intelligence last year, this did not necessarily translate into a better understanding of how they reach their answers.
Anthropor was one of the leaders of industry in knowing how artificial intelligence models really work – a field called the interpretation. Earlier this year, CEO Dario Amodei announced Commitment to open the black box for artificial intelligence models by 2027 And investment is more in interpretation. Openai and Google DeepMind called for the topic more, as well.
Early research from man indicated this The children’s family may not be a completely reliable indication How these models reach the answers. At the same time, Openai researchers said that COT monitoring could be one day A reliable way to track alignment and safety In artificial intelligence models.
The goal of the position papers like this is to indicate enhancing and attracting more attention to emerging research fields, such as Cot Monitoring. Companies like Openai, Google DeepMind and Fanthropic are already searching in these topics, but this paper can encourage more funding and research in space.
https://techcrunch.com/wp-content/uploads/2023/10/GettyImages-1194975140.jpg?resize=1200,900
Source link