A new study shows that GPT-4 reliably wins debates against its human counterparts in one-on-one conversations, and that the technology gets even more persuasive when it knows your age, job, and political leanings.
Researchers at EPFL in Switzerland, Princeton University, and Fondazione Bruno Kessler in Italy paired 900 study participants with either a human debate partner or OpenAI's GPT-4, a large language model (LLM) that produces text responses to human prompts. In some pairs, the debater (machine or human) had access to basic demographic information about their counterpart, including gender, age, education, employment, ethnicity, and political affiliation.
The team's research, published today in Nature Human Behaviour, found that the AI was 64.4% more persuasive than human opponents when it was given that personal information; without the personal data, the AI's performance was indistinguishable from that of the human debaters.
"In recent decades, the diffusion of social media and other online platforms has expanded the potential of mass persuasion by enabling personalization or 'microtargeting,' the tailoring of messages to an individual or a group to enhance their persuasiveness," the team wrote.
When GPT-4 was allowed to personalize its arguments, it became significantly more persuasive than any human, raising the odds of changing a participant's mind by 81.2% compared with human-versus-human debates. Crucially, human debaters did not become more persuasive when given access to that same personal information.
"In the context of persuasion, experts have widely expressed concerns about the risk of LLMs being used to manipulate online conversations and pollute the information ecosystem by spreading misinformation, exacerbating political polarization, reinforcing echo chambers and persuading individuals to adopt new beliefs," the researchers added.
In other words, the researchers found that GPT-4 can argue with you and, given even a small set of facts about you, may outperform you at persuading you to change your point of view. The team notes in the paper's discussion that LLMs have previously been criticized for generating and spreading hate speech, misinformation, and propaganda; at scale, LLMs armed with users' personal information could be harnessed for malicious purposes.
The research pairs neatly with a recent ChatGPT update that allows the model to remember more from users' conversations (with their permission), meaning the AI can build up a catalog of information about its users.
But there is good news, or bad news, depending on how you look at it. GPT-4 was very effective at persuading its opponents on less controversial issues, but with more entrenched positions (referred to in the research as "opinion strength"), the bot had a harder time convincing humans to change their minds. In other words, there's no indication that GPT-4 would fare any better than you do at the Thanksgiving table debate.
What's more, the researchers found that GPT-4 tends to use more logical and analytical language, while human debaters relied more on personal pronouns and emotional appeals. Surprisingly, personalization didn't significantly change GPT-4's tone or style; it just made its arguments more targeted.
In three out of four cases, human participants could correctly identify their opponent as an AI, which the researchers attribute to GPT-4's distinctive writing style. But participants had a harder time identifying human opponents as human. Regardless, people were more likely to change their opinion when they thought they were arguing with an AI than when they believed their opponent was human.
The team behind the study says the experiment should serve as a "proof of concept" for what could happen on platforms like Reddit, Facebook, or X, where controversial debates and hot-button topics are routine and bots are a well-entrenched presence. The new paper shows that it doesn't take Cambridge Analytica-level profiling for an AI to change human minds; the machines managed it with just six types of personal information.
As people increasingly rely on LLMs for help with rote tasks, homework, documents, and even therapy, it's critical that human users remain wary of the information they feed these tools. It remains ironic that social media, once advertised as the connective tissue of the digital age, fuels loneliness and isolation, as two studies on chatbots found in March.
So even if you find yourself in a debate with an LLM, ask yourself: What exactly is the point of discussing such a complicated human issue with a machine? And what do we lose when we hand over the art of persuasion to algorithms? Debating isn't just about winning an argument; it's a quintessentially human thing to do. There's a reason we seek out real conversations, especially one-on-one: to build personal connections and find common ground, something that machines, for all their powerful learning tools, are unable to do.