It’s here: OpenAI has announced GPT-4.5, a research preview of its latest and most powerful LLM for chat applications. Notably, though, the company states in the model’s system card (a report detailing its capabilities, which leaked online early and is attached at the bottom of this piece) that “GPT-4.5 is not a frontier model,” even though it is OpenAI’s largest LLM to date, surpassing GPT-4 in scale.
It is also not a “reasoning model,” the new category of models introduced by OpenAI, DeepSeek, and others that produce “chains of thought,” stream-of-consciousness-like blocks of text in which they reflect on their own assumptions and conclusions to try to catch mistakes before delivering responses to users. It is instead a more classic LLM. Confusingly, it is also OpenAI’s most expensive model by far (more on that below).
However, according to OpenAI co-founder and CEO Sam Altman on the social network X, GPT-4.5 is “the first model that feels like talking to a thoughtful person. I have had several moments where I’ve sat back in my chair and been astonished at getting actually good advice from an AI.”
However, he warned that the company is bumping up against the limits of its GPU supply and has had to stagger access as a result:
“Bad news: it is a giant, expensive model. We really wanted to launch it to Plus and Pro at the same time, but we’ve been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week and roll it out to the Plus tier then. (Hundreds of thousands are coming soon, and I’m pretty sure you all will use every one we can rack up.)
This isn’t the way we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”
Starting today, GPT-4.5 is available to subscribers of OpenAI’s most expensive plan, ChatGPT Pro ($200 per month), and to developers on all paid API tiers, with plans to expand access to the far less expensive Plus and Team tiers ($20 and $30 per month, respectively) next week.
GPT-4.5 can access search and OpenAI’s Canvas, and users can upload files and images to it, but it does not include other multimodal features such as voice, video, and screen sharing, at least not yet.
OpenAI is hosting a livestream event today at 12 pm PT / 3 pm ET, where its researchers will discuss the model’s development and capabilities.
Powered by unsupervised learning
GPT-4.5 represents a step forward in AI training, particularly in unsupervised learning, which enhances the model’s ability to recognize patterns, draw connections, and generate creative insights.
During the livestream, OpenAI researchers noted that it was trained on data generated by smaller AI models and that this improves its “world model.” They also mentioned that it was pre-trained across multiple data centers, suggesting a decentralized approach similar to that of rival Nous Research.
This training regimen appears to help GPT-4.5 produce more natural and intuitive interactions, follow user intent more accurately, and demonstrate greater emotional intelligence.
The model builds on OpenAI’s previous work in AI scaling, reinforcing the idea that increasing data and compute leads to better AI performance.
Compared to its predecessors, GPT-4.5 is expected to hallucinate less, making it more reliable across a broader range of topics.
What makes GPT-4.5 stand out?
According to OpenAI, GPT-4.5 is designed for warm, intuitive, naturally flowing conversations. It has a stronger grasp of nuance and context, delivering more human-like interactions and a greater ability to collaborate effectively with users.
The model’s expanded knowledge base and improved ability to interpret subtle cues allow it to excel across various applications, including:
• Writing assistance: refining content, improving clarity, and generating creative ideas.
• Programming support: debugging, suggesting code improvements, and automating workflow tasks.
• Problem-solving: providing detailed explanations and assisting with practical decision-making.
GPT-4.5 also incorporates new alignment techniques that enhance its ability to understand human preferences and intent, improving the user experience.
How to access GPT-4.5
Starting today, ChatGPT Pro users can select GPT-4.5 in the model picker on web, mobile, and desktop. Next week, OpenAI will begin rolling it out to Plus and Team users.
For developers, GPT-4.5 is available through OpenAI’s API, including the Chat Completions API, Assistants API, and Batch API. It supports key features such as function calling, structured outputs, streaming, system messages, and image inputs, making it a versatile tool for a variety of AI-driven applications. However, it does not currently support multimodal capabilities such as voice, video, or screen sharing.
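Based on the features listed above (system messages, streaming, image inputs), a request to GPT-4.5 through the Chat Completions API might be assembled as in the sketch below. The model identifier "gpt-4.5-preview" and the helper function are illustrative assumptions, not confirmed by OpenAI; the snippet only builds the request payload a client SDK would send, so it runs without an API key or network access.

```python
# Minimal sketch of a Chat Completions request for GPT-4.5.
# NOTE: "gpt-4.5-preview" is an assumed model identifier; check
# OpenAI's published model list before using it in real code.

def build_chat_request(user_text: str,
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Assemble a request dict using features the article names:
    system messages and token-by-token streaming."""
    return {
        "model": "gpt-4.5-preview",   # assumed identifier
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
        "stream": True,               # stream tokens as they are generated
    }

payload = build_chat_request("Summarize this quarter's support tickets.")
print(payload["model"])
print(len(payload["messages"]))
```

In a real integration, this dict maps directly onto the keyword arguments of an SDK call such as `client.chat.completions.create(**payload)`.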
Pricing and implications for enterprise decision-makers
Companies and team leaders stand to benefit from the capabilities introduced with GPT-4.5. Thanks to its enhanced reliability and natural conversational abilities, GPT-4.5 can support a wide range of job functions:
• Enhanced customer engagement: businesses can integrate GPT-4.5 into support systems for faster, more natural interactions.
• Content generation: marketing and communications teams can produce high-quality, on-brand content efficiently.
• Streamlined operations: AI-assisted automation can help debug code, optimize workflows, and support strategic decision-making.
• Scalability and customization: the API enables tailored applications, allowing enterprises to build AI-driven solutions for their specific needs.
At the same time, GPT-4.5’s pricing through OpenAI’s API, for third-party developers looking to build applications atop the model, looks shockingly high: $75/$180 per million input/output tokens, compared to $2.50/$10 for GPT-4o.
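At the article’s quoted rates, the cost gap is easy to quantify. The sketch below is a simple back-of-the-envelope calculator using those per-million-token prices; the token counts in the example are hypothetical, and actual prices should be confirmed against OpenAI’s pricing page.

```python
# Per-million-token API rates in USD, as quoted in this article
# (verify against OpenAI's official pricing before relying on them).
RATES = {
    "gpt-4.5": {"input": 75.00, "output": 180.00},
    "gpt-4o":  {"input": 2.50,  "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the quoted rates."""
    r = RATES[model]
    return (input_tokens / 1_000_000) * r["input"] \
         + (output_tokens / 1_000_000) * r["output"]

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
cost_45 = request_cost("gpt-4.5", 10_000, 2_000)
cost_4o = request_cost("gpt-4o", 10_000, 2_000)
print(round(cost_45, 4))            # cost of one GPT-4.5 request
print(round(cost_4o, 4))            # cost of the same request on GPT-4o
print(round(cost_45 / cost_4o, 1))  # price multiple between the two
```

At these rates, the same request costs roughly 25x more on GPT-4.5 than on GPT-4o, which is the crux of the value question raised below.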
And with other recently released competing models, from Anthropic’s Claude 3.7 to Google’s Gemini 2 Pro to OpenAI’s own reasoning series (o1, o3-mini high, o3), the question will be whether GPT-4.5’s value justifies its comparatively high cost, especially through the API.
Early reactions from fellow AI researchers and power users vary widely
The GPT-4.5 release has drawn mixed reactions from AI researchers and tech enthusiasts on the social network X, especially after the leaked system card revealed benchmark results ahead of the official announcement.
Teknium (@teknium1), co-founder of rival model provider Nous Research, expressed disappointment with the new model, pointing to minimal improvements on MMLU (massive multitask language understanding) and real-world coding benchmarks compared to leading LLMs.
“It’s been 2+ years and 1,000x more capital has been deployed since GPT-4 … what happened?” he asked.
Others noted that GPT-4.5 underperforms OpenAI’s own o3-mini model on software engineering benchmarks, raising questions about whether this release represents meaningful progress.
Still, some users defended the model’s capabilities beyond raw benchmarks.
Software developer Haider (@slow_developer) highlighted GPT-4.5’s 10x improvement in computational efficiency and its stronger general-purpose abilities compared to OpenAI’s STEM-focused o-series models.
AI news poster Andrew Curran (@andrewcurran_) took a more qualitative view, predicting that GPT-4.5 would set new standards in writing and creative thought, describing it as OpenAI’s “Opus.”
These discussions underscore a broader debate in AI: should progress be measured purely by benchmarks, or do qualitative improvements in reasoning, creativity, and human-like interaction carry more value?
A research preview
OpenAI is positioning GPT-4.5 as a research preview to gain deeper insight into its strengths and limitations. The company remains committed to understanding how users interact with the model and identifying unexpected use cases.
“We are sharing GPT-4.5 as a research preview to better understand its strengths and limitations,” OpenAI said. “Scaling unsupervised learning continues to drive AI progress, improving accuracy, fluency, and reliability.”
As OpenAI continues to refine its models, GPT-4.5 serves as a foundation for future AI developments, particularly in reasoning and agentic tool use. While GPT-4.5 already demonstrates significant capabilities, OpenAI is evaluating its long-term role within its ecosystem.
With its broader knowledge base, improved emotional intelligence, and natural conversational abilities, GPT-4.5 is positioned to deliver meaningful improvements for users across various fields. OpenAI is keen to see how developers, companies, and enterprises integrate the model into their workflows and applications.
As AI continues to evolve, GPT-4.5 represents another milestone in OpenAI’s pursuit of more capable, reliable, and user-aligned language models, promising new opportunities for innovation in the enterprise landscape.