The release of the DeepSeek R1 reasoning model has sent shockwaves through the tech industry, the clearest sign being the surprise selloff of major AI stocks. The advantage of well-funded AI labs such as OpenAI and Anthropic no longer looks so solid, as DeepSeek has reportedly been able to develop an o1 competitor at a fraction of the cost.
While some AI labs are currently in crisis mode, as far as the enterprise sector is concerned, this is mostly good news.
Cheaper applications, more applications
As we have said here before, one of the trends worth watching in 2025 is the continued drop in the cost of using AI models. Enterprises should experiment and build prototypes with the latest AI models regardless of price, knowing that continued price reductions will eventually allow them to deploy their applications at scale.
That trend just took a dramatic turn. OpenAI o1 costs $60 per million output tokens, versus $2.19 per million for DeepSeek R1. And if you are concerned about sending your data to Chinese servers, you can access R1 through U.S.-based providers such as Together AI and Fireworks AI, where it is priced at $8 and $9 per million tokens, respectively. That is still a huge bargain compared to o1.
To be fair, o1 still has an edge over R1, but not by enough to justify such a steep price difference. Moreover, R1's capabilities will be sufficient for most enterprise applications, and we can expect even more advanced and capable models to be released in the coming months.
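To make that gap concrete, here is a rough back-of-the-envelope sketch in Python. The per-million-token prices are the output-token rates cited above; the monthly workload size is a made-up assumption for illustration only.

```python
# Back-of-the-envelope comparison of output-token costs, using the
# per-million-token prices cited above. The workload size is hypothetical.
PRICES_PER_MILLION_TOKENS = {
    "openai_o1": 60.00,        # $ per 1M output tokens
    "deepseek_r1_api": 2.19,   # $ per 1M output tokens
    "r1_on_us_host": 9.00,     # high end of the U.S.-hosted options cited above
}

monthly_output_tokens = 500_000_000  # assumed workload: 500M output tokens per month

for model, price in PRICES_PER_MILLION_TOKENS.items():
    cost = monthly_output_tokens / 1_000_000 * price
    print(f"{model}: ${cost:,.2f} per month")
```

At the assumed volume, the same workload that costs tens of thousands of dollars a month on o1 comes in at roughly a thirtieth of that on R1's API pricing.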
We can also expect second-order effects on the broader AI market. For example, OpenAI CEO Sam Altman announced that free ChatGPT users will soon have access to o3-mini. Although R1 was not explicitly cited as the reason, the announcement coming so soon after R1's launch is telling.
big news: the free tier of chatgpt is going to get o3-mini!
(and the plus tier will get tons of o3-mini usage)
— Sam Altman (@sama) January 23, 2025
More innovation
R1 still leaves plenty of questions unanswered. For example, there are multiple reports that DeepSeek trained the model on outputs from OpenAI's large language models (LLMs). But if its paper and technical report are accurate, DeepSeek managed to create a model that performs nearly on par with the state of the art while slashing costs and removing some of the technical steps that require heavy manual work.
If others can reproduce DeepSeek's results, it could be good news for AI labs and companies that have been sidelined by the financial barriers to innovation in this field. Enterprises can expect faster innovation and more AI products to power their applications.
Today's "DeepSeek selloff" in the stock market — attributed to DeepSeek V3/R1 disrupting the tech ecosystem — is another sign that the application layer is a great place to be. The foundation model layer being hyper-competitive is great for people building applications.
— Andrew Ng (@AndrewYNg) January 27, 2025
What happens to the billions of dollars that big tech companies have spent on acquiring hardware accelerators? We still have not reached the ceiling of what is possible with AI, so the leading tech companies will be able to do even more with their resources. In fact, more affordable AI will likely increase demand in the medium to long term.
Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of. https://t.co/omEcOPhdIz
— Satya Nadella (@satyanadella) January 27, 2025
More importantly, R1 is proof that it is not all about bigger compute clusters and datasets. With the right engineering chops and the right talent, you can push the limits of what is possible.
Open source for the win
To be clear, R1 is not fully open source, as DeepSeek has released only the weights, not the code or the full details of the training data. Nonetheless, it is a big win for the open-source community. Since DeepSeek R1's release, more than 500 derivatives have been published on Hugging Face, and the model has been downloaded millions of times.
It's been released just a few days ago and already more than 500 derivative models of @deepseek_ai have been created all over the world on @huggingface with 2.5 million downloads (5x the original weights).
The power of decentralized open-source AI!
— clem 🤗 (@ClementDelangue) January 27, 2025
Enterprises will also have more flexibility over where they host their models. Beyond the full 671-billion-parameter model, there are distilled versions of R1 ranging from 1.5 billion to 70 billion parameters, allowing companies to run the model on a variety of hardware. Moreover, unlike o1, R1 exposes its full reasoning chain, giving developers better insight into the model's behavior and the ability to steer it in the desired direction.
As open models catch up with closed ones, we can hope for a renewed commitment to sharing knowledge and research so that everyone can benefit from advances in AI.
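As a minimal sketch of what that looks like in practice, the snippet below loads one of the smaller distilled checkpoints with Hugging Face Transformers and separates the exposed reasoning chain from the final answer. The model ID and the <think>...</think> delimiters are assumptions based on how the R1 distills are commonly published, so verify them against the model card before relying on them.

```python
# Minimal sketch: run a small R1 distill locally and surface its reasoning chain.
# The model ID and the <think>...</think> delimiters are assumptions; check the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Is 9.11 larger than 9.9? Explain briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
text = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# The distills typically emit their chain of thought between <think> tags,
# followed by the final answer, so the two can be split for logging or steering.
reasoning, _, answer = text.partition("</think>")
print("REASONING:\n", reasoning.replace("<think>", "").strip())
print("ANSWER:\n", answer.strip())
```

Being able to log and inspect the reasoning segment separately is exactly the kind of visibility o1 does not currently offer.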
To people who think "China is surpassing the US in AI", the correct thought is "Open source models are surpassing closed ones".
See ⬇️⬇️⬇️
— Yann LeCun (@ylecun) January 25, 2025