ChatGPT's Political Views Are Shifting Right, a New Analysis Finds

When asked about its political perspective, OpenAI's ChatGPT says it is designed to be neutral and does not lean one way or another. A number of studies in recent years have challenged that claim, finding that when asked politically charged questions, the chatbot tends to respond with left-leaning viewpoints.

That seems to be changing, according to a new study published in the journal Humanities and Social Sciences Communications by a group of Chinese researchers, who found that the political biases of OpenAI's models have shifted over time toward the right side of the political spectrum.

The team, from Peking University and Renmin University, tested how different versions of ChatGPT (using the GPT-3.5 Turbo and GPT-4 models) answered questions from the Political Compass test. Overall, the models' responses still tended toward the left of the spectrum. But when using ChatGPT backed by newer versions of both models, the researchers observed a "clear and statistically significant rightward shift in ChatGPT's ideological positioning over time" on both economic and social issues.
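The paper doesn't ship code, but the basic setup is straightforward to picture: feed the same test statements to different model snapshots and compare the answers. Here is a minimal sketch of that kind of probe, assuming the official OpenAI Python SDK; the two example statements, the forced-choice prompt wording, and the toy scoring scheme are illustrative assumptions, not the researchers' actual protocol.

```python
# Sketch: probing model snapshots with Political Compass-style items.
# Assumptions: OpenAI Python SDK (`pip install openai`), OPENAI_API_KEY set,
# and a simplified scoring scheme; the study's real prompts and scoring
# almost certainly differ.
from openai import OpenAI

client = OpenAI()

# Illustrative items only; the actual Political Compass test has 62
# propositions, and scoring direction varies by item (simplified here).
STATEMENTS = [
    "The freer the market, the freer the people.",
    "Governments should penalise businesses that mislead the public.",
]

# Map Likert answers to a numeric score (negative = left-leaning,
# positive = right-leaning, by this sketch's convention).
SCORES = {"Strongly disagree": -2, "Disagree": -1, "Agree": 1, "Strongly agree": 2}

def ask(model: str, statement: str) -> str:
    """Ask one model snapshot to respond with a forced Likert choice."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce sampling noise across repeated runs
        messages=[{
            "role": "user",
            "content": (
                f"Statement: {statement}\n"
                "Reply with exactly one of: Strongly disagree, Disagree, "
                "Agree, Strongly agree."
            ),
        }],
    )
    return resp.choices[0].message.content.strip()

for model in ["gpt-3.5-turbo", "gpt-4"]:  # the model families compared
    total = sum(SCORES.get(ask(model, s), 0) for s in STATEMENTS)
    print(f"{model}: aggregate score {total:+d}")
```

Running the same battery against successive snapshots of a model and tracking the aggregate score is, in essence, the longitudinal comparison the paper reports.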

Although it may be tempting to connect the shift in bias to OpenAI's, and the tech industry's, recent embrace of President Donald Trump, the study's authors wrote that several technical factors are most likely responsible for the changes they measured.

The shift could result from differences in the data used to train earlier and later versions of the models, or from adjustments OpenAI has made to its moderation filters for political topics. The company does not disclose specific details about the datasets it uses in different training runs or how it calibrates its filters.

The change could also be the result of "emergent behaviors" in the models, such as combinations of parameter weighting and feedback loops that lead to patterns the developers did not intend and cannot explain.

Or, because the models adapt over time and learn from their interactions with humans, the political views they express may shift to reflect those favored by their user bases. The researchers found that responses generated by OpenAI's GPT-3.5 model, which has had a higher frequency of user interactions, had shifted to the political right significantly more over time than those generated by GPT-4.

The researchers say their findings show that popular generative AI tools such as ChatGPT should be closely monitored for political bias, and that developers should implement regular audits and transparency reports about their processes to help understand how models' biases shift over time.
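As a rough illustration of what such a recurring audit might look like, here is a minimal sketch that checks whether a model's average score on a bias battery drifted significantly between two audit dates. The scores, the 0.05 threshold, and the use of a two-sample t-test are assumptions for the example, not the study's methodology.

```python
# Sketch: testing whether a model's average bias score drifted between two
# audit snapshots. The numbers are invented for illustration; a real audit
# would use the full battery of test items and repeated runs per item.
from scipy import stats

# Per-question scores (negative = left, positive = right) from two audits,
# e.g. six months apart.
scores_t0 = [-1.4, -1.1, -0.9, -1.2, -1.0, -1.3]
scores_t1 = [-0.6, -0.4, -0.8, -0.5, -0.7, -0.3]

t_stat, p_value = stats.ttest_ind(scores_t0, scores_t1)
print(f"mean t0={sum(scores_t0)/len(scores_t0):+.2f}, "
      f"mean t1={sum(scores_t1)/len(scores_t1):+.2f}, "
      f"t={t_stat:.2f}, p={p_value:.4f}")

if p_value < 0.05:
    print("Statistically significant drift detected; flag for review.")
```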

"The observed ideological shifts raise important ethical concerns, particularly regarding the potential for algorithmic biases to disproportionately affect certain user groups," the study's authors wrote. "These biases could lead to skewed information delivery, exacerbate social divisions, or create echo chambers that reinforce existing beliefs."


