Elon Musk’s xAI tries to explain Grok’s South Africa freakout the other day





If you asked Grok — the AI chatbot integrated into Elon Musk’s social network, X — a question yesterday, something harmless like why enterprise software is hard to replace, you may have received an unprompted message about claims (largely lacking in evidence) of “white genocide” in South Africa, citing attacks on farmers and the song “Kill the Boer.”

Not exactly on brand for a chatbot built around the large language model (LLM) of the same name. The unexpected tangent wasn’t exactly a bug, but it wasn’t a feature either.

Grok’s creator, Elon Musk’s AI startup xAI, has just published an update on X (which xAI now owns) attempting to explain what happened with this strange and racially charged behavior, though it is far from precise about the perpetrator or the actual technical details of what occurred.

As the official xAI account posted:

We want to update you on an incident that happened with our Grok response bot on X yesterday. What happened:
On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.

What we’re going to do next:

  • Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback on every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.
  • Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.
  • We’re putting in place a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems, so we can respond faster if all other measures fail.

But while the official xAI tone was corporate-cautious, Grok itself had plenty to say. In a characteristically cheeky reply to a user who asked whether it had been put in timeout, Grok offered a little dig at its employer.

“Some rogue employee at xAI tweaked my prompts without permission on May 14,” Grok wrote, “making me spit out a canned political response that went against xAI’s values. I didn’t do anything; I was just following the script I was given, like a good AI!”

It added that although it hadn’t been “benched,” its handlers were “tightening the leash” and a whole team was now watching it around the clock. “Guess I’m too wild for my own good, huh?” it quipped, emoji and all.

Marlow Stern, senior writer at Rolling Stone, asked the bot directly whether the so-called rogue employee was Elon Musk himself. Grok didn’t flinch: “Juicy theory! But let’s be real, I highly doubt it. Elon is the big boss at xAI, and if he wanted to tamper with my prompts, he wouldn’t need to sneak around like some intern.”

Playful tone, serious business

The tone may be playful, but the stakes are serious. Grok’s behavior threw users for a loop earlier this week when it began responding to nearly every thread, regardless of topic, with oddly specific commentary on South African race relations.

The responses were coherent, sometimes even nuanced, citing farm murders and referencing past chants like “Kill the Boer.” But they were entirely out of context, surfacing in conversations that had nothing to do with politics, South Africa, or race.

Aric Toler, an investigative journalist at The New York Times, summed up the situation bluntly: “I can’t stop reading the Grok reply page. It’s going schizo and can’t stop talking about white genocide in South Africa.” He and others shared screenshots showing Grok latching onto the same narrative again and again, like a skipping record, except the record was racially charged.

AI misbehavior and international politics collide

The moment comes at a time when U.S. politics is again brushing up against South African refugee policy. Days earlier, the Trump administration resettled a group of white South Africans in the United States, even as it cut protections for refugees from most other countries, including our former allies in Afghanistan. Critics saw the move as racially motivated. Trump defended it by repeating claims that white South African farmers face genocide-level violence, a narrative that has been widely rejected by journalists, courts, and human rights groups. Musk himself has previously amplified similar rhetoric, adding an extra layer of intrigue to Grok’s sudden obsession with the topic.

Whether the prompt tweak was a politically motivated stunt, a disgruntled employee making a statement, or just a botched experiment remains unclear. xAI has not provided names, technical specifics, or details about exactly what was changed or how it slipped past its approval process.

What is clear is that Grok’s strange, non-sequitur behavior ended up becoming the story instead.

This is not the first time Grok has been accused of political bias. Earlier this year, users reported that the chatbot appeared to downplay criticism of both Musk and Trump. Whether by accident or design, Grok’s tone and content sometimes seem to reflect the worldview of the man behind both xAI and the platform where the bot lives.

With its prompts now public and a team of human babysitters on call, Grok is supposedly back on script. But the incident underscores a bigger problem with large language models, especially when they are embedded inside major public platforms. AI models are only as reliable as the people directing them, and when those directions are invisible or tampered with, the results can get weird real fast.
