A group of 60 UK parliamentarians has signed an open letter accusing Google DeepMind of violating its AI safety commitments with the launch of Gemini 2.5 Pro. The letter, published by the activist group PauseAI, accuses the AI company of breaching safety pledges it signed at an international summit in 2024 by releasing the model without key safety information.
At an international summit co-hosted by the United Kingdom and South Korea in February 2024, Google and other signatories promised to “publicly report” their models’ capabilities and risk assessments, as well as to disclose whether external organizations, such as government AI safety institutes, had been involved in testing.
However, when Gemini 2.5 Pro was released in March 2025, the company failed to publish a model card, the document that details key information about how models are tested. That was despite the company’s claims that the new model outperformed competitors on industry benchmarks by “meaningful margins”. Instead, the AI lab released a simplified six-page model card three weeks after the model was first made available, which it described as a “preview” version. At the time, one AI governance expert called the report “meager” and “worrying”.
The letter described Google’s delay as a “failure to honor” the company’s summit commitment and “a troubling breach of trust with governments and the public.” The letter also criticized the eventual model card for lacking “any substantive detail about external evaluations”, as well as Google’s refusal to confirm whether government agencies such as the UK AI Security Institute (AISI) had participated in testing.
In a statement sent to Fortune on Friday, a Google DeepMind spokesperson said the company was committed to “transparency, testing and reporting” and was fulfilling its public commitments, including the Seoul Frontier AI Safety Commitments.
“As part of our development process, our models undergo rigorous safety checks, including by the UK AISI and other third-party testers, and Gemini 2.5 is no exception,” the statement said.
When Google first released the Gemini 2.5 Pro preview, critics said the missing system card appeared to violate several other pledges the AI company had made, including commitments to the White House in 2023 and a voluntary AI code of conduct signed in October 2023.
In May, the company had said that a more detailed “technical report” would follow when the final version of Gemini 2.5 Pro was made fully available to the public. The company did release a longer report in late June, months after the full version came out.
Google is not the only company to sign these pledges and then appear to backtrack on safety disclosures. Meta’s model card for its frontier Llama 4 model was similarly brief and limited in detail to the card Google released for Gemini 2.5 Pro, and it too drew criticism from AI safety researchers.
Earlier this year, OpenAI said it would not publish a technical safety report for its new GPT-4.1 model. The company argued that GPT-4.1 is “not a frontier model”, because its reasoning-focused systems such as o3 and o4-mini outperform it on many benchmarks.
The recent letter calls on Google to reaffirm its commitment to AI safety and asks the company to: clearly define deployment as the point when a model becomes publicly available; commit to publishing safety evaluation reports on a set timetable for all future model releases; and provide full transparency for each release by naming the government agencies and independent third parties involved in testing, along with exact testing timelines.
“If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards,” said Lord Browne of Ladyton, a member of the House of Lords and one of the letter’s signatories, in a statement.