Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by the watchdog group The Midas Project.
xAI isn’t exactly known for strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably cruder than chatbots such as Gemini and ChatGPT, cursing without much restraint.
Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking standards and its considerations for deploying AI models.
The Midas project also indicated in a blog post on Tuesday, however, the draft applies only to the non -specified artificial intelligence models only “currently not under development.” Moreover, it failed to express how to identify XAI and implement risk mitigating, which is an essential component of the company’s documents I signed at the AI Seoul Summit.
In the draft, xAI said it planned to release a revised version of its safety policy “within three months,” that is, by May 10. The deadline came and went without acknowledgment on xAI’s official channels.
Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.
That’s not to say other AI labs fare dramatically better. In recent months, xAI’s rivals, including Google and OpenAI, have rushed safety testing and been slow to publish model safety reports (or have skipped publishing reports altogether). Some experts have expressed concern that this apparent deprioritization of safety efforts comes at a time when AI is more capable, and therefore more potentially dangerous, than ever.