OpenAI’s latest report on the harmful use of artificial intelligence underscores the tightrope AI companies walk between preventing misuse of their chatbots and reassuring users that their privacy is respected.
The report, released today, highlights several cases in which OpenAI investigated and disrupted harmful activity involving its models, with a focus on scams, cyberattacks, and government-linked influence campaigns. It arrives, however, amid growing scrutiny of another kind of AI harm: the potential psychological damage chatbots can cause. This year alone has brought multiple reports of users committing self-harm, suicide, and murder after interacting with AI models. The new report, together with the company’s previous disclosures, offers some additional insight into how OpenAI moderates chats for different types of misuse.
OpenAI says that since it began publicly reporting threats in February 2024, it has disrupted and reported more than 40 networks that violated its usage policies. In today’s report, the company shares new case studies from the past quarter and details on how it detects and disrupts harmful use of its models.
For example, the company identified an organized crime network, reportedly headquartered in Cambodia, that attempted to use AI to streamline its operations. A Russian political influence operation allegedly used ChatGPT to generate video prompts for other AI models. OpenAI also flagged accounts linked to the Chinese government that violated its national security policies, including requests to draft proposals for large-scale systems designed to monitor social media conversations.
The company has previously stated, including in its privacy policy, that it uses personal data such as user prompts “to prevent fraud, illegal activity, or misuse” of its services. OpenAI has also said it relies on automated systems and human reviewers to monitor activity. But in today’s report, the company offered a bit more insight into how it thinks about preventing misuse while still protecting users more broadly.
“To detect and disrupt threats effectively without disrupting the work of everyday users, we use a nuanced and informed approach that focuses on patterns of threat actor behavior rather than isolated model interactions,” the company wrote in the report.
While monitoring for national security violations is one thing, the company has also recently explained how it addresses harmful use of its models by users in emotional or mental distress. A little over a month ago, the company published a blog post detailing how it handles these kinds of situations. The post came amid media coverage of violent incidents linked to ChatGPT interactions, including a murder-suicide in Connecticut.
The company said that when users write that they want to harm themselves, ChatGPT is trained not to comply and instead to acknowledge the user’s feelings and steer them toward help and real-world resources.
When the AI detects that someone may be planning to harm others, the conversation is flagged for human review. If a reviewer determines that the person poses an imminent threat to others, they can report them to law enforcement.
OpenAI has also acknowledged that its models’ safety performance can degrade during longer user interactions and said it is already working to improve its safeguards.