A comprehensive new study has revealed that open-source AI models consume significantly more computing resources than their closed-source competitors when performing identical tasks, potentially undermining their cost advantages and reshaping how enterprises evaluate AI deployments.
The research, conducted by the AI firm Nous Research, found that open-weight models use 1.5 to 4 times more tokens than closed models from OpenAI and Anthropic on identical tasks. For simple knowledge questions, the gap widened further, with some open models using up to 10 times as many tokens.
Measuring reasoning efficiency in reasoning models: the missing benchmark https://t.co/b1E1rjx6Vz
We measured token usage across reasoning models: open models output 1.5-4x more tokens than closed models on identical tasks, but with large variance depending on task type (even … pic.twitter.com/ly1083won8
– Nous Research (@NousResearch) August 14, 2025
The researchers wrote in their report, published Wednesday: “Open weight models use 1.5-4x more tokens than closed weight models (up to 10x for simple knowledge questions), making them sometimes more expensive per query despite lower per-token costs.”
The findings challenge a prevailing assumption in the AI industry that open-source models offer clear economic advantages over proprietary alternatives. While open-source models are typically cheaper per token to run, the study suggests this advantage can be “easily offset if they require more tokens to reason about a given problem.”
The real cost of AI: why “cheaper” models can break your compute budget
The study examined 19 different AI models across three categories of tasks: basic knowledge questions, mathematical problems, and logic puzzles. The team measured “token efficiency”, meaning how many computational tokens models use relative to the complexity of their solutions, a metric that has received little systematic study despite its real cost implications.
“Token efficiency is a critical metric for several practical reasons,” the researchers noted. “While hosting open weight models may be cheaper, this cost advantage could be easily offset if they require more tokens to reason about a given problem.”
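To make that metric concrete, here is a minimal sketch in Python of comparing average token usage across models against a baseline. The model names and token counts are invented placeholders for illustration, not figures from the study.

```python
# Minimal sketch of a token-efficiency comparison, assuming we have already
# logged the completion tokens each model used on the same set of prompts.
# All numbers below are hypothetical placeholders, not results from the study.
from statistics import mean

tokens_used = {
    "closed_model_a": [120, 950, 2100],    # tokens per task (hypothetical)
    "open_model_b":   [1100, 2300, 4800],  # tokens per task (hypothetical)
}

baseline = "closed_model_a"
for model, counts in tokens_used.items():
    # Ratio of this model's average token usage to the baseline's average.
    ratio = mean(counts) / mean(tokens_used[baseline])
    print(f"{model}: avg {mean(counts):.0f} tokens, {ratio:.1f}x baseline")
```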

The inefficiency is especially pronounced for large reasoning models (LRMs), which use extended “chains of thought” to solve complex problems. These models, designed to reason through problems step by step, can consume thousands of tokens pondering simple questions that should require minimal computation.
For basic knowledge questions such as “What is the capital of Australia?”, the study found that reasoning models spend “hundreds of tokens pondering simple knowledge questions” that could be answered in a single word.
Which AI models actually deliver bang for the buck
The research revealed stark differences between model providers. OpenAI's models, particularly its o4-mini and the newly released open-source gpt-oss variants, showed exceptional token efficiency, especially on math problems. The study found that OpenAI models “stand out for extreme token efficiency in math problems,” using up to three times fewer tokens than other commercial models.
Among the open-source options, Nvidia's Llama-3.3-Nemotron-Super-49B-v1 “emerged as the most token-efficient open weight model across all domains,” while newer models such as Mistral's Magistral “showed exceptionally high token usage” as outliers.
The efficiency gap varied significantly by task type. While open models used roughly twice as many tokens on mathematical and logic problems, the difference ballooned for simple knowledge questions, where extensive reasoning should be unnecessary.

What enterprise leaders need to know about AI computing costs
The findings have immediate implications for enterprise AI adoption, where computing costs can scale rapidly with usage. Companies evaluating AI models often focus on accuracy and per-token pricing, but they may overlook the total computational requirements of real-world tasks.
“The better token efficiency of closed weight models often compensates for the higher API pricing of those models,” the researchers found when analyzing total inference costs.
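A rough back-of-the-envelope calculation shows how that offset works: a model with a lower per-million-token price can still cost more per query if it emits several times as many tokens. The prices and token counts below are hypothetical assumptions, not numbers from the report.

```python
# Illustrative per-query cost comparison: cheaper per-token pricing can be
# outweighed by higher token usage. All numbers are hypothetical examples.

def cost_per_query(completion_tokens: int, price_per_million: float) -> float:
    """Cost of a single query given output token count and price per million tokens."""
    return completion_tokens / 1_000_000 * price_per_million

# Hypothetical models: an open-weight model with a low per-token price but high
# token usage, and a closed model with a higher price but fewer tokens per query.
open_model = {"tokens": 4000, "price_per_m": 0.60}    # assumed values
closed_model = {"tokens": 1000, "price_per_m": 2.00}  # assumed values

open_cost = cost_per_query(open_model["tokens"], open_model["price_per_m"])
closed_cost = cost_per_query(closed_model["tokens"], closed_model["price_per_m"])

print(f"Open-weight model: ${open_cost:.4f} per query")   # $0.0024
print(f"Closed model:      ${closed_cost:.4f} per query") # $0.0020
```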
The study also revealed that closed-source providers appear to be actively optimizing for efficiency. “Closed weight models have been iteratively optimized to use fewer tokens to reduce inference cost,” while open models have “increased their token usage for newer versions, possibly reflecting a priority toward better reasoning performance.”

How researchers cracked the code on measuring AI efficiency
The research team faced unique challenges in measuring efficiency across different model architectures. Many closed-source models do not reveal their raw reasoning traces, instead providing compressed summaries of their internal computations to prevent competitors from copying their techniques.
To address this, the researchers used completion tokens, the total computational units billed for each query, as a proxy for reasoning effort. They found that “most recent closed source models will not share their raw reasoning traces” and instead “use smaller language models to transcribe the chain of thought into summaries or compressed representations.”
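For context, a proxy like this can be gathered from any OpenAI-compatible endpoint by reading the billed usage on each response. The snippet below is a generic sketch of that idea, not the study's actual harness; the model name and prompt are placeholders.

```python
# Sketch of collecting billed completion tokens as a proxy for reasoning effort,
# using the OpenAI Python SDK. Model name and prompt are placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is the capital of Australia?"}],
)

# usage.completion_tokens counts all billed output; for reasoning models this
# typically includes hidden reasoning tokens that are never returned as raw text.
print("Answer:", response.choices[0].message.content)
print("Completion tokens billed:", response.usage.completion_tokens)
```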
The study's methodology included testing with modified versions of well-known problems to reduce the influence of memorized solutions, such as changing the variables in competition problems from the American Invitational Mathematics Examination (AIME).
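As an illustration of that kind of perturbation (not the study's actual procedure), one could regenerate a problem template with fresh values so that memorized answers no longer apply:

```python
# Illustrative sketch of perturbing a competition-style problem so a model
# cannot rely on a memorized answer. The template and value ranges are invented
# for illustration; the study's actual modifications to AIME problems may differ.
import random

TEMPLATE = (
    "Find the number of positive integers n <= {limit} such that "
    "n is divisible by {a} but not by {b}."
)

def perturbed_problem(seed: int) -> str:
    rng = random.Random(seed)  # seeded for reproducible variants
    return TEMPLATE.format(
        limit=rng.choice([500, 1000, 2000]),
        a=rng.choice([3, 4, 6, 7]),
        b=rng.choice([5, 9, 11, 13]),
    )

for seed in range(3):
    print(perturbed_problem(seed))
```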

The future of AI efficiency: what comes next
The researchers suggest that token efficiency should become a primary optimization target alongside accuracy in future model development. “A more densified CoT will also allow for more efficient context use and may counter context degradation during challenging reasoning tasks,” they wrote.
OpenAI's open-sourced gpt-oss models, which demonstrate state-of-the-art efficiency with a “freely accessible CoT,” could serve as a reference point for optimizing other open-source models.
The full research dataset and evaluation code are available on GitHub, allowing other researchers to verify and extend the findings. As the AI industry races toward ever more powerful reasoning capabilities, this study suggests the real competition may not be about who can build the smartest AI, but who can build the most efficient one.
After all, in a world where every token counts, the most wasteful models may find themselves priced out of the market, regardless of how well they can think.