OpenAI updates its new Responses API with MCP support, GPT-4o native image generation, and more enterprise features


By [email protected]




OpenAI is rolling out a set of significant updates to its new Responses API, aimed at making it easier for developers and enterprises to build intelligent, action-oriented agentic applications.

The improvements include support for remote Model Context Protocol (MCP) servers, integration of image generation and Code Interpreter tools, and upgrades to file search capabilities, all available today, May 21.

First launched in March 2025, the Responses API serves as OpenAI's toolbox for third-party developers to build agentic applications on top of some of the core functionality of ChatGPT and its first AI agents, Deep Research and Operator.

In the months since its debut, the API has processed trillions of tokens and supported a broad range of use cases, from market research and education to software development and financial analysis.

Popular applications built with the API include Zencoder's coding agent, Revi's market intelligence assistant, and MagicSchool's education platform.

Background and purpose of the Responses API

The Responses API first appeared in March 2025 alongside OpenAI's open-source Agents SDK, as part of an initiative to give third-party developers access to the same technologies that power OpenAI's own AI agents, such as Deep Research and Operator.

In this way, startups and companies outside OpenAI can integrate the same technology that powers ChatGPT into their own products and services, whether for internal use by employees or for external customers and partners.

Initially, the API combined elements of Chat Completions and the Assistants API, offering built-in tools for web and file search as well as computer use, so developers could build autonomous workflows without complex orchestration logic. OpenAI said at the time that the Chat Completions API would be deprecated by mid-2026.

The Responses API gives developers visibility into model decisions, access to real-time data, and integration capabilities that let agents retrieve and act on information.

Its launch marked a shift toward giving developers a unified toolkit for creating production-ready AI agents with minimal friction.

MCP support enables remote tool integration

A major addition in this update is support for remote MCP servers. Developers can now connect OpenAI models to external tools and services such as Stripe, Shopify, and Twilio using only a few lines of code. This makes it possible to build agents that can take actions and interact with the systems their users already rely on. To support this evolving ecosystem, OpenAI has also joined the MCP steering committee.
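In practice, a remote MCP server is attached as a tool on a single Responses API call. The sketch below builds such a request in Python; the server URL is a placeholder, and the tool field names follow OpenAI's launch examples, so treat them as illustrative rather than authoritative:

```python
# Sketch: attaching a remote MCP server to a Responses API request.
# The server_url below is a placeholder; field names follow OpenAI's
# published launch examples and may evolve.

def build_mcp_request(question: str) -> dict:
    return {
        "model": "gpt-4.1",
        "tools": [
            {
                "type": "mcp",
                "server_label": "shopify",                # human-readable name
                "server_url": "https://example.com/mcp",  # placeholder endpoint
                "require_approval": "never",              # skip per-call approval prompts
            }
        ],
        "input": question,
    }

payload = build_mcp_request("What products are in stock?")
# With the official SDK this payload would be sent as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**payload)
```

The model can then discover and call the tools the MCP server exposes within that one request, without the developer hand-wiring each integration.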

The update also brings new built-in tools to the Responses API that expand what agents can do within a single API call.

OpenAI's native GPT-4o image generation model, the one that inspired a wave of Studio Ghibli-style anime memes across the web and strained OpenAI's servers with its popularity (though it can, of course, produce many other image styles), is now available through the API as "gpt-image-1". It includes promising new features such as real-time streaming previews and multi-turn editing.

This lets developers build applications that can produce and edit images dynamically in response to user input.
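As a rough sketch of how that looks in a request, image generation is enabled as a built-in tool on a Responses call. The tool and model names match OpenAI's announcement, but the prompt and surrounding shape are illustrative:

```python
# Sketch: requesting an image via the built-in image_generation tool,
# which is backed by the gpt-image-1 model. Details are illustrative
# and may differ from the live API.

def build_image_request(prompt: str) -> dict:
    return {
        "model": "gpt-4.1",
        "tools": [{"type": "image_generation"}],  # backed by gpt-image-1
        "input": prompt,
    }

payload = build_image_request("A watercolor city skyline at dusk")
# response = client.responses.create(**payload)
# The generated image is returned in the tool-call output, typically
# base64-encoded, and can be refined over further turns.
```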

In addition, the Code Interpreter tool is now integrated into the Responses API, allowing models to handle data analysis, complex math, and logic-based tasks within their reasoning processes.

The tool helps improve model performance across a range of technical benchmarks and enables more sophisticated agentic behavior.
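A minimal sketch of enabling Code Interpreter on a request, assuming the sandboxed-container configuration shown in OpenAI's launch examples (the "auto" container setting and the task text are illustrative):

```python
# Sketch: enabling the Code Interpreter tool so the model can execute
# Python in a sandboxed container while it reasons. The "auto"
# container setting follows OpenAI's launch examples; treat the exact
# shape as an assumption.

def build_code_interpreter_request(task: str) -> dict:
    return {
        "model": "o4-mini",
        "tools": [
            {"type": "code_interpreter", "container": {"type": "auto"}}
        ],
        "input": task,
    }

payload = build_code_interpreter_request(
    "Fit a linear regression to the uploaded CSV and report the slope."
)
# response = client.responses.create(**payload)
```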

Improved file search and context handling

The file search function has been upgraded as well. Developers can now run searches across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content.

This improves the precision of the information agents draw on, strengthening their ability to answer complex questions and operate over large knowledge domains.
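The multi-store search plus attribute filter might be expressed as follows; the vector store IDs are placeholders, and the filter schema (an equality condition on a metadata key) is modeled on OpenAI's file search documentation rather than taken from this article:

```python
# Sketch: file search across multiple vector stores with an
# attribute-based filter. Store IDs are placeholders; the filter
# shape is modeled on OpenAI's documented comparison filters.

def build_file_search_request(query: str, store_ids: list) -> dict:
    return {
        "model": "gpt-4.1",
        "tools": [
            {
                "type": "file_search",
                "vector_store_ids": store_ids,  # search spans every listed store
                "filters": {                    # keep only matching documents
                    "type": "eq",
                    "key": "region",
                    "value": "emea",
                },
            }
        ],
        "input": query,
    }

payload = build_file_search_request("Q1 revenue summary", ["vs_abc", "vs_def"])
# response = client.responses.create(**payload)
```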

New enterprise reliability and transparency features

Several features are designed specifically for enterprise needs. Background mode enables long-running asynchronous tasks, addressing timeouts and network interruptions during intensive reasoning.
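Background mode amounts to a fire-and-poll pattern. The flag name below follows OpenAI's announcement; the retrieve-and-poll flow is a best-effort sketch, not a verified transcript of the SDK:

```python
# Sketch: launching a long-running request in background mode and
# polling for completion later. The background flag follows OpenAI's
# announcement; the poll loop is an assumption about the SDK flow.

def build_background_request(task: str) -> dict:
    return {
        "model": "o3",
        "input": task,
        "background": True,  # return immediately; fetch the result later
    }

payload = build_background_request("Draft a 20-page competitive analysis.")
# job = client.responses.create(**payload)
# ...later, poll until the job finishes:
#   result = client.responses.retrieve(job.id)
#   done = result.status == "completed"
```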

Reasoning summaries, another new addition, provide natural-language explanations of the model's internal thought process, helping with debugging and transparency.

Encrypted reasoning items add an extra privacy layer for Zero Data Retention customers.

They allow models to reuse previous reasoning steps without storing any data on OpenAI's servers, improving both security and efficiency.
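A hedged sketch of what a Zero Data Retention request might look like: the model is asked to return its reasoning as an encrypted blob (which the client passes back on the next call) while disabling server-side storage. The `store` and `include` field names follow OpenAI's launch notes, but treat the exact shape as an assumption:

```python
# Sketch: requesting encrypted reasoning items for a Zero Data
# Retention deployment. With store=False nothing persists server-side;
# the encrypted reasoning blob comes back in the response and is
# resubmitted on the next turn to preserve state. Field names are
# best-effort from OpenAI's launch notes.

def build_zdr_request(prompt: str) -> dict:
    return {
        "model": "o3",
        "input": prompt,
        "store": False,  # Zero Data Retention: no server-side storage
        "include": ["reasoning.encrypted_content"],
    }

payload = build_zdr_request("Summarize the attached contract clauses.")
# response = client.responses.create(**payload)
```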

The latest capabilities are supported across OpenAI's GPT-4o series, the GPT-4.1 series, and the o-series models, including o3 and o4-mini. These models now maintain reasoning state across multiple tool calls and requests, leading to more accurate responses at lower cost and latency.

Yesterday's prices are today's prices!

Despite the expanded feature set, OpenAI confirmed that pricing for the new tools and capabilities within the Responses API will remain consistent with existing rates.

For example, the Code Interpreter tool is priced at $0.03 per session, and file search is billed at $2.50 per 1,000 calls, with storage costs of $0.10 per GB per day after the first free gigabyte.

Web search pricing varies by model and search context size, ranging from $25 to $50 per 1,000 calls. Image generation through the gpt-image-1 tool is likewise charged according to resolution and quality tier, starting at $0.011 per image.

All tool usage is billed at the chosen model's per-token rates, with no additional markup on the newly added capabilities.
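Taken together, the per-call prices quoted above make rough cost estimates straightforward. A small arithmetic sketch using only the figures from this article (model-token charges are excluded, since those depend on the model chosen):

```python
# Cost estimate from the tool prices quoted above (USD).
# Model-token charges are billed separately at the chosen model's rates.
CODE_INTERPRETER_PER_SESSION = 0.03   # $0.03 per session
FILE_SEARCH_PER_1K_CALLS = 2.50       # $2.50 per 1,000 calls
IMAGE_GEN_MIN_PER_IMAGE = 0.011       # starting price per image

def estimate_tool_cost(sessions: int, search_calls: int, images: int) -> float:
    cost = (
        sessions * CODE_INTERPRETER_PER_SESSION
        + (search_calls / 1000) * FILE_SEARCH_PER_1K_CALLS
        + images * IMAGE_GEN_MIN_PER_IMAGE
    )
    return round(cost, 4)

# 10 interpreter sessions, 2,000 file searches, 50 images:
print(estimate_tool_cost(10, 2000, 50))  # 0.30 + 5.00 + 0.55 = 5.85
```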

What's next for the Responses API?

With these updates, OpenAI continues to expand what is possible with the Responses API. Developers gain access to a richer set of tools and enterprise-ready features, and organizations can now build more integrated, capable, and secure AI applications.

All features are live as of May 21, with pricing and implementation details available in OpenAI's documentation.


