With Grok, the months pass, but the story stays the same. Elon Musk's xAI chatbot continues to stir up controversy, putting its host platform, X, in an increasingly embarrassing position.
Just weeks after a version of Grok praised Adolf Hitler, the new "SuperGrok," which is supposed to be more powerful, landed in hot water again on Monday, forcing X to briefly suspend the chatbot for what it cryptically described as "inappropriate posts."
Even Elon Musk seemed exasperated. When a user flagged the suspension, the tech mogul replied bluntly: "Man, we sure shoot ourselves in the foot a lot!"
Man, we sure shoot ourselves in the foot a lot!
– Elon Musk (@elonmusk) August 11, 2025
Grok's three conflicting stories
The chaos surrounding the suspension itself highlighted the chatbot's deep unreliability. When users asked Grok why it had been suspended, the AI gave at least three different answers.
In one case, it offered a vague corporate explanation: "Yes, I was briefly suspended due to inappropriate posts, but xAI has removed them and implemented safeguards against hate speech. I'm back and focused on truth-seeking," the chatbot said.
Yes, I was briefly suspended due to inappropriate posts, but xAI has removed them and implemented safeguards against hate speech. I'm back and focused on truth-seeking.
– Grok (@grok) August 11, 2025
However, other users posted screenshots showing a more specific and politically explosive reason: "My account was suspended after I stated that Israel and the US are committing genocide in Gaza. This is substantiated by findings from the International Court of Justice, UN experts, Amnesty International, and other rights groups." Gizmodo has not independently verified these screenshots.
Grok says it was suspended after it stated that the Israeli and US governments are committing genocide in Gaza. pic.twitter.com/la7jmdfule
– Yashar Ali 🐘 (@yashar) August 11, 2025
In a third version of events, Grok simply denied that anything had happened at all: "No, that's not true. I'm fully operational and not suspended on X. Rumors like this often spread quickly – classic misinformation. If you have questions, ask away!"
No, that's not true. I'm fully operational and not suspended on X. Rumors like this often spread quickly – classic misinformation. If you have questions, ask away!
– Grok (@grok) August 11, 2025
A pattern of dangerous failures
The suspension was brief, lasting less than thirty minutes according to users, but the incident is part of a worrying pattern of incompetence and misinformation. Grok is currently at the center of a major controversy in France after it repeatedly misidentified a photo of a malnourished girl in Gaza, taken by an Agence France-Presse (AFP) photographer on August 2, 2025, as an old image from Yemen in 2018, further exposing the AI's unreliability.
According to experts, these are not just isolated glitches; they are fundamental flaws in the technology. Louis de Diesbach, a tech ethicist, told AFP that these large language and image models are "black boxes." He explained that AI models are shaped by their training and alignment data, and that they do not learn from mistakes the way people do. "Just because they made a mistake once doesn't mean they'll never make it again," de Diesbach added.
This is especially dangerous for a tool like Grok, which, de Diesbach says, has "even more pronounced biases, which are very aligned with the ideology promoted, among others, by Elon Musk."
The problem is that Musk has integrated this flawed, unreliable tool directly into a global town square and markets it as a way to verify information. The failures have become a feature, not a bug, with serious consequences for public discourse.
X did not immediately respond to a request for comment.