Meet the new DeepSeek, now with more government compliance. According to a Reuters report, the popular Chinese-developed language model has a new version called DeepSeek-R1-Safe, specially designed to steer clear of politically controversial topics. It was developed by Chinese technology giant Huawei, and the new model is reportedly "100% successful" at preventing discussion of politically sensitive matters.
According to the report, Huawei and researchers at Zhejiang University (interestingly, DeepSeek itself did not participate in the project) took the open-source DeepSeek R1 model and trained it using 1,000 Huawei Ascend AI chips to make the model less inclined toward controversial conversation. The new version, which Huawei claims gave up only about 1% of the original model's speed and capability, is better at avoiding "toxic and harmful speech, politically sensitive content, and incitement to illegal activities."
Although the model may be safer, it is still not foolproof. While the company claims a success rate of approximately 100% in basic use, it also found that the model's ability to fend off questionable conversations drops to just 40% when users disguise their requests as challenges or role-playing scenarios. These AI models just love a hypothetical scenario that lets them slip past their guardrails.
DeepSeek-R1-Safe is designed to meet the requirements of Chinese regulators, per Reuters, which require all domestic AI models released to the public to reflect the country's values and comply with its speech restrictions. For example, Baidu's chatbot reportedly will not answer questions about domestic politics in China or the ruling Chinese Communist Party.
China, of course, is not the only country trying to ensure that AI deployed within its borders doesn't rock the boat too much. Earlier this year, the Saudi Arabian tech company Humain launched an Arabic-native chatbot that is fluent in Arabic and trained on "Islamic culture, values and heritage." American-made models are not immune to this, either: OpenAI openly acknowledges that ChatGPT is "skewed towards Western views."
And then there is America under Trump's administration. Earlier this year, Trump announced his America's AI Action Plan, which includes requirements that any AI model interacting with government agencies be neutral and "unbiased." What exactly does that mean? Well, per an executive order Trump signed, models seeking government contracts must reject things like "radical climate dogma" and "diversity, equity, and inclusion," as well as concepts such as "critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." So, you know, before we crack any "dear leader" jokes at China, it might be best to take a look in the mirror.