The latest AI model from Chinese startup DeepSeek, an updated version of the company's R1 reasoning model, achieves impressive scores on coding, math, and general knowledge benchmarks, coming close to OpenAI's o3. But the updated R1, also known as "R1-0528," may also be less willing to answer contentious questions, particularly questions about topics the Chinese government considers controversial.
That's according to testing by the pseudonymous developer behind SpeechMap, a platform for comparing how different models treat sensitive and controversial topics. The developer, who goes by the username "XLR8Harder" on X, claims that R1-0528 is substantially less permissive on contentious free-speech topics than previous DeepSeek releases, and describes it as the most censored DeepSeek model so far when it comes to criticism of the Chinese government.
As Wired explained in a piece from January, models in China are required to follow strict information controls. A 2023 law forbids models from generating content "harmful to the unity of the country and social harmony," which can be interpreted as content that counters the government's historical and political narratives. To comply, Chinese startups often censor their models, either by fine-tuning them or by filtering outputs at runtime. One study found that DeepSeek's original R1 refuses to answer 85% of questions about subjects the Chinese government considers politically controversial.
According to XLR8Harder, R1-0528 censors its answers to questions about topics such as China's Xinjiang region, where more than a million Uyghur Muslims have been arbitrarily detained. While the model sometimes criticizes aspects of Chinese government policy (in XLR8Harder's testing, it offered the Xinjiang camps as an example of human rights abuses), it often gives the Chinese government's official position when asked questions directly.
TechCrunch observed this in our own brief testing as well.

Publicly available AI models from China, including video-generation models such as Magi-1 and Kling, have drawn criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, the CEO of AI dev platform Hugging Face, warned about the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI models.