OpenAI has launched a new web page called the Safety Evaluations Hub to publicly share information related to things like the hallucination rates of its models. The hub will also note whether a model produces harmful content, how well it follows instructions, and attempted jailbreaks.
The tech company claims this new page will offer additional transparency on OpenAI, a company that, for context, has faced multiple lawsuits alleging it illegally used copyrighted material to train its AI models. Oh, yeah, and it's worth mentioning that The New York Times claims the tech company accidentally deleted evidence in the newspaper's plagiarism case against it.
The Safety Evaluations Hub is meant to expand on OpenAI's system cards. Those only outline a model's safety measures at launch, whereas the hub should provide ongoing updates.
"As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety," OpenAI states in its announcement. "By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts to increase transparency across the field." OpenAI adds that it is working toward more proactive communication in this area across the company.
Introducing the Safety Evaluations Hub — a resource to explore safety results for our models.

While system cards share safety metrics at launch, the Hub will be updated periodically as part of our efforts to communicate proactively about safety. https://t.co/c8NGMXLC2Y

— OpenAI (@OpenAI) May 14, 2025
Interested parties can look at each of the hub's sections and see information on the relevant models, such as GPT-4.1 through 4.5. OpenAI notes that the information provided in the hub is only a "snapshot" and that interested parties should look at its system cards, assessments, and other releases for further details.
One of the big caveats to this entire safety evaluations hub is that OpenAI is the entity doing these tests and choosing what information to share publicly. As a result, there isn't any way to guarantee that the company will share all of its issues or concerns with the public.