Selling coffee beans to Starbucks


By [email protected]


How much do foundation models matter?

It may seem like a ridiculous question, but it has come up a lot in my conversations with AI startups, which are increasingly comfortable with a label once meant as a dismissal: "GPT wrappers," companies that build front ends for existing AI models like ChatGPT. These days, startup teams are focused on adapting AI models to specific tasks and on interface work, treating the foundation model as a commodity that can be swapped in and out as needed. That approach was on full display at last week's BoxWorks conference, which seemed entirely dedicated to user-facing software built on top of AI models.

Part of what's driving this is that the benefits of scaling pre-training (the initial process of training AI models on huge datasets, which has been the exclusive province of foundation model companies) are slowing down. That doesn't mean AI has stopped making progress, but the easy gains from ever-larger foundation models have hit diminishing returns, and attention is turning to post-training and reinforcement learning as the sources of future progress. If you want to build a better AI coding tool, you're better off refining the interface than spending a few billion dollars of server time on pre-training. As the success of Anthropic's Claude Code shows, foundation model companies are very good at these other things too, but that's not the durable advantage it once was.

In short, the competitive landscape of AI is changing in ways that undermine the advantages of the biggest AI labs. Instead of a race toward a powerful AGI that can match or exceed human capabilities on every cognitive task, the immediate future looks like a wave of separate businesses: software development, enterprise data management, image generation, and so on. Setting aside first-mover advantage, it's not clear that building a foundation model gives you any edge in those businesses. Worse, the abundance of open source alternatives means foundation models may have no pricing power at all if they lose the competition at the application layer. That would turn companies like OpenAI and Anthropic into back-end suppliers in a low-margin commodity business, or as one founder put it to me, "like selling coffee beans to Starbucks."

It's hard to overstate how dramatic a shift this is for the AI industry. For most of the current boom, the success of AI has been synonymous with the success of the foundation model companies, specifically OpenAI, Anthropic, and Google. Being bullish on AI meant believing that its transformative impact would make those companies among the most important in the world. We could argue about which company would come out on top, but it seemed clear that some foundation model company would end up holding the keys to the kingdom.

At the time, there were plenty of reasons to believe this was true. For years, foundation model development was the only game in town in AI, and the rapid pace of progress made the labs' head start look insurmountable. Silicon Valley has always had a deep love for platform advantages. The assumption was that, however AI models ended up making money, the lion's share of the benefit would flow to the foundation model companies, which had done the work that was hardest to replicate.

The past year has complicated that story. There are plenty of successful third-party AI services, but they tend to treat foundation models as interchangeable. For startups, it no longer matters much whether their product sits on top of GPT-5, Claude, or Gemini, and they expect to be able to switch models mid-release without end users noticing the difference. Foundation models are still making real progress, but it's no longer plausible that any one company will maintain a big enough advantage to dominate the industry.


We already have plenty of evidence that there isn't much of a first-mover advantage. As the venture capitalist Martin Casado of a16z noted on a recent podcast, OpenAI was the first lab to launch a coding model, as well as generative models for images and video, only to lose its lead in all three categories to competitors. "As far as we can tell, there is no entrenched moat in the AI technology stack," he said.

Of course, we shouldn't count the foundation model companies out yet. They still have plenty of powerful advantages, including brand recognition, infrastructure, and unimaginably vast cash reserves. OpenAI's consumer business may prove harder to replicate than its coding work, and other advantages may emerge as the sector matures. Given the rapid pace of AI development, the current focus on post-training could reverse itself within six months. Most of all, the race toward general intelligence could still bear fruit, with new breakthroughs in drug discovery or materials science radically changing our ideas about what makes AI models valuable.

But in the meantime, the strategy of building frontier foundation models looks much less attractive than it did a year ago, and Meta's billions of dollars in upfront spending are starting to look like a risky bet.


