The newest contender in the AI battle is a major blow to Big Tech

The ongoing slugfest among technology players racing to build the easiest-to-use and most powerful artificial intelligence has just been shaken up.

What just dropped?

DeepSeek’s new V3.1, an increasingly impressive model with a massive 685 billion parameters, can complete a coding task for about $1.01, compared with roughly $70 for traditional systems.

DeepSeek is no stranger to shaking up the field. Its R1 model launched last year and immediately surprised AI observers with its speed and accuracy compared to its Western competitors, and it appears that V3.1 may follow its example.

This price point and the sophistication of the service pose a direct challenge to the frontier systems from OpenAI and Anthropic, both based in the United States. A confrontation between Chinese and American technology ecosystems has been actively unfolding for years, but the arrival of such a heavyweight contender from a much smaller company could usher in a new era of challenges. Alibaba Group Holding Ltd. and Moonshot AI have also released models that rival American technology.

β€œWhile many people realize the accomplishments of Debsik, this only represents the beginning of the innovation wave in China,” Louis Liang, an Amnesty International Sector Investor with AABA CAPITAL, Bloomberg said. “We are witnessing the emergence of the collective adoption of Amnesty International, and this goes beyond national competition.”

Why does any of this matter?

DeepSeek’s whole approach to how artificial intelligence works is different from the way most American technology companies treat the idea. It could shift global competition from a race focused on raw power to one focused on access, VentureBeat reports.

It also poses a challenge to giants such as Meta and Alphabet by processing a larger amount of data at once, thanks to a larger “context window,” the amount of text a model can take in when answering a query. This matters to users because it improves the model’s ability to hold onto an idea across long conversations, draw on earlier parts of a task to complete complex work, and understand how different parts of a text relate to one another.
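
To make the idea concrete, here is a minimal, purely illustrative sketch of how a fixed context window limits what a model can “see” at once. The 128,000-token budget and the crude word-count tokenizer below are hypothetical placeholders for this example, not DeepSeek’s actual specifications.

```python
# Minimal sketch: how a fixed context window constrains what a model "sees".
# The 128_000-token limit and the whitespace tokenizer are hypothetical
# placeholders for illustration, not DeepSeek's actual specifications.

def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: one token per whitespace-separated word."""
    return len(text.split())

def fit_to_context(messages: list[str], max_tokens: int = 128_000) -> list[str]:
    """Keep the most recent messages that fit inside the context window.

    Anything older than the budget allows is silently dropped, which is why a
    larger window helps a model "remember" earlier parts of a long conversation.
    """
    kept: list[str] = []
    used = 0
    for message in reversed(messages):   # walk from newest to oldest
        cost = count_tokens(message)
        if used + cost > max_tokens:
            break                        # older history no longer fits
        kept.append(message)
        used += cost
    return list(reversed(kept))          # restore chronological order

if __name__ == "__main__":
    history = [
        "Hi, can you review my code?",
        "Sure, paste it here.",
        "def add(a, b): return a + b",
    ]
    # With a tiny 10-token window, only the most recent message survives.
    print(fit_to_context(history, max_tokens=10))
```

The larger the window, the further back that loop can reach before running out of budget, which is exactly the advantage the article describes for long conversations and multi-step tasks.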

More importantly, users love it.

Another major win? DeepSeek’s V3.1 scored 71.6 percent on the Aider coding benchmark, a big victory given that it had only just appeared on the popular AI testing leaderboard the night before. It blew past competitors such as OpenAI’s ChatGPT 4.5, which scored 40 percent.

“DeepSeek v3.1 scores 71.6% on Aider, non-reasoning SOTA,” Andrew Christianson wrote, adding that it is “1% higher than Claude Opus 4 while being 68 times cheaper.” That puts DeepSeek’s achievement in rarefied air, matching performance levels previously reserved for the most expensive proprietary systems.
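
As a rough sanity check on those cost claims, dividing the article’s own figures (about $70 per coding task for traditional systems versus about $1.01 for V3.1) gives a ratio of roughly 69x, in the same ballpark as the “68 times cheaper” comparison Christianson makes against Claude Opus 4. A minimal back-of-envelope sketch:

```python
# Back-of-envelope check of the cost figures reported in this article.
# Both numbers come from the article itself; the comparison is illustrative,
# not an official pricing breakdown.
deepseek_cost_per_task = 1.01     # dollars per completed coding task (reported)
traditional_cost_per_task = 70.0  # dollars per task for traditional systems (reported)

ratio = traditional_cost_per_task / deepseek_cost_per_task
print(f"DeepSeek V3.1 is roughly {ratio:.0f}x cheaper per task")  # ~69x
```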




