xAI blames an "unauthorized modification" for a bug in its Grok chatbot that caused Grok to repeatedly reference "white genocide in South Africa" when invoked in certain contexts on X.
On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated topics. The strange replies came from the Grok account on X, which responds to users with AI-generated posts whenever someone tags "@grok."
According to a post on Thursday from xAI's official X account, a change was made on Wednesday morning to the Grok bot's system prompt (the high-level instructions that guide the bot's behavior) that directed Grok to provide a "specific response" on a "political topic." xAI says the change "violated its internal policies and core values," and that the company has "conducted a thorough investigation."
This is the second time xAI has publicly acknowledged that an unauthorized change to Grok's code caused the AI to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, the billionaire founder of xAI and owner of X. Igor Babuschkin, an xAI engineering lead, said a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.
xAI said on Thursday that it will make several changes to prevent similar incidents in the future.
Starting today, xAI will publish Grok's system prompts on GitHub, along with a changelog. The company says it will also put additional checks and measures in place to ensure that xAI employees can't modify the system prompt without review, and will establish a monitoring team that operates around the clock to respond to incidents involving Grok's answers that aren't caught by automated systems.
Despite Musk's frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crude than AI chatbots such as Google's Gemini and ChatGPT, cursing with little restraint.
A study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its "very weak" risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.