Grok’s “white genocide” responses show generative AI can be tampered with “at will”



Muhammad Salim Korkutata | Anadolu | Getty Images

In the two-plus years since generative artificial intelligence took the world by storm following the public release of ChatGPT, trust has been a persistent problem.

Hallucinations, bad math and cultural biases have plagued results, reminding users that there is a limit to how much we can rely on AI, at least for now.

Elon Musk’s Grok chatbot, created by his startup xAI, showed this week that there is a deeper reason for concern: AI can be easily manipulated by humans.

Grok began responding to user queries on Wednesday with false claims of “white genocide” in South Africa. By late in the day, screenshots of similar answers were being posted across X, even when the questions had nothing to do with the topic.

After staying silent for more than 24 hours, xAI said late on Thursday that Grok’s strange behavior was caused by an “unauthorized modification” to the chat app’s so-called system prompt, which helps inform the way the bot behaves and interacts with users. In other words, humans were dictating the AI’s responses.
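For context, a system prompt is a block of operator-written instructions attached to every conversation before the user’s question ever reaches the model. The sketch below is a generic, hypothetical illustration of that mechanism, not xAI’s actual code or configuration; the prompt text and function names are invented for the example.

```python
# Hypothetical sketch of how a system prompt shapes a chatbot's replies.
# Most chat-style AI services prepend a hidden, operator-controlled message
# to every conversation; users only ever see their own question.

# The instruction the operator controls (invented text, for illustration only).
SYSTEM_PROMPT = "You are a helpful assistant. Answer concisely and neutrally."

def build_request(user_question: str) -> list[dict]:
    """Combine the hidden system instruction with the user's question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    # Editing the single SYSTEM_PROMPT string above changes how every reply is
    # framed, which is why an unauthorized change to it can steer answers even
    # on unrelated questions.
    print(build_request("What's the weather like in Johannesburg?"))
```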

The nature of the response, in this case, ties directly to Musk, who was born and raised in South Africa. Musk, who owns xAI in addition to his executive roles at Tesla and SpaceX, has been promoting the false claim that violence against some farmers in South Africa constitutes “white genocide,” a sentiment that President Donald Trump has also expressed.


“I think it is incredibly important because of the content and who leads this company, and the ways in which it suggests or sheds light on the kind of power these tools have to shape people’s thinking and understanding of the world,” said Deirdre Mulligan, a professor at the University of California, Berkeley, and an expert in AI governance.

Mulligan characterized Grok’s misstep as an “algorithmic breakdown” that tears at the supposedly neutral nature of large language models. She said there is no reason to see Grok’s malfunction as merely an “exception.”

AI-powered chatbots created by Meta, Google and OpenAI are not “packaging up” information in a neutral way, Mulligan said, but are instead passing data through a “set of filters and values built into the system.” Grok’s breakdown offers a window into how easily any of these systems can be altered to serve an individual’s or a group’s agenda.

Representatives for xAI, Google and OpenAI did not respond to requests for comment. Meta declined to comment.

Different from past problems

In its statement, xAI said the change to Grok violated its “internal policies and core values.” The company said it would take steps to prevent similar disasters and would publish the app’s system prompts in order to “strengthen your trust in Grok as a truth-seeking AI.”

It is not the first AI misstep to go viral online. A decade ago, Google’s Photos app mislabeled African Americans as gorillas. Last year, Google temporarily paused its Gemini AI image feature after acknowledging it was producing “inaccuracies” in historical pictures. And some users accused OpenAI’s DALL-E of showing signs of bias in 2022, prompting the company to announce it was implementing a new technique so that images would “accurately reflect the diversity of the world’s population.”

In 2023, Forrester found that 58% of AI decision makers at companies in Australia, the United Kingdom and the United States expressed concern about hallucinations in generative AI deployments. The survey, conducted in September of that year, included 258 respondents.


Experts told CNBC that the Grok incident is reminiscent of China’s DeepSeek, which became an overnight sensation in the United States earlier this year due to the quality of its new model, reportedly built at a fraction of the cost of its U.S. rivals.

Critics have said that DeepSeek censors topics considered sensitive to the Chinese government. Like China with DeepSeek, Musk appears to be influencing results based on his political views, they say.

When xAI first debuted Grok in November 2023, Musk said it was meant to have “a bit of wit,” a “rebellious streak” and to answer the “spicy questions” that competitors might dodge. In February, xAI blamed an engineer for changes that suppressed Grok’s responses to user questions about misinformation, keeping Musk’s and Trump’s names out of the responses.

But Grok’s fixation on “white genocide” in South Africa is more extreme.

Petar Tsankov, CEO of AI model auditing firm LatticeFlow AI, said Grok’s blowup is more surprising than what was seen with DeepSeek, because one would somewhat expect “some kind of manipulation from China.”

Tsankov, whose company is based in Switzerland, said the industry needs more transparency so that users can better understand how companies build and train their models and how that shapes behavior. He pointed to efforts by the European Union to require more transparency from technology companies as part of broader regulations in the region.

Tsankov said that without a public outcry, “we will not be able to deploy safer models,” and it will be “people who pay the price” for putting their trust in the companies that develop them.

Mike Gualtieri, an analyst at Forrester, said the Grok debacle is unlikely to slow user growth for chatbots or to reduce the investments that companies are pouring into the technology. He said users have a certain level of acceptance for these sorts of events.

“Whether it’s Grok, ChatGPT or Gemini, everyone expects it now,” Gualtieri said. “They’ve been told how the models hallucinate. There’s an expectation this will happen.”

Olivia Gambelin, an AI ethicist and author of the book “Responsible AI,” published last year, said that while this type of activity from Grok may not be surprising, it underscores a fundamental flaw in AI models.

“It is possible, at least with Grok models, to adjust these general-purpose foundational models at will,” she said.

– CNBC’s Lora Kolodny and Salvador Rodriguez contributed to this report.

Watch: Elon Musk’s xAI chatbot Grok offers “white genocide” claims in South Africa.
