The scientific reason ChatGPT leads you down rabbit holes



Whichever chatbot you use, it only tells you what you want to believe, according to a new study.

Whether you use a traditional search engine like Google or a conversational AI tool like OpenAI's ChatGPT, you tend to use search terms that reflect your biases and perceptions, according to a study published this spring in the Proceedings of the National Academy of Sciences. More importantly, search engines and chatbots often provide results that reinforce those beliefs, even when your intention is to learn more about the topic.

For example, imagine you're trying to learn about the health effects of drinking coffee every day. If you, like me, enjoy a couple of cups first thing in the morning, you might search for something like "is coffee healthy?" or "health benefits of coffee." If you're already skeptical (maybe you're a tea drinker), you might search for "is coffee bad for you?" instead. The researchers found that framing questions this way can skew the results: I would mostly get answers touting coffee's benefits, while you would get the opposite.


"When people look up information, whether it's Google or ChatGPT, they actually use search terms that reflect what they already believe," Eugina Leung, an assistant professor at Tulane University and an author of the study, told me.

The proliferation of AI chatbots, with the confident and customized answers they dispense so freely, makes it easier to fall down a rabbit hole and harder to realize you're in one. There's never been a more important time to think carefully about how you get information online.

The question is: How do you get the best answers?

Asking the wrong questions

The researchers conducted 21 studies with nearly 10,000 participants, who were asked to perform searches on preassigned topics, including the health effects of caffeine, gas prices, crime rates, COVID-19 and nuclear energy. The tools used included Google, ChatGPT and custom-built search engines and AI chatbots.

The researchers found that what they called the "narrow search effect" was a function both of how people asked questions and of how the technology platforms responded. People have a tendency, essentially, to ask the wrong questions (or to ask questions the wrong way). They leaned toward search terms or AI prompts that reflected what they already believed, and the search engines and chatbots, designed to deliver narrow, highly relevant answers, served up exactly that. "The answers end up essentially just confirming what they believe in the first place," Leung said.


The researchers also checked whether participants changed their beliefs after a search. When given a narrow set of answers that largely confirmed what they already believed, participants were unlikely to show major changes. But when the researchers provided a custom-built search engine and chatbot designed to deliver a broader range of answers, participants were more likely to change their minds.

Leung said the platforms could offer users a broader, less tailored search option, which might prove useful in situations where the user is trying to find a wide range of sources. "We are not trying to suggest that search engines or algorithms should always broaden their search results," she said. "I think there is a lot of value in providing very focused and very narrow search results in certain situations."

3 ways to ask the right questions

If you want a broader range of answers to your questions, there are a few things you can do.

Be deliberate: Think carefully about what you're actually trying to learn. Leung used the example of trying to decide whether to invest in a particular company's stock. Asking whether it's a good stock or a bad stock to buy will likely skew your results: You'll get more positive news if you ask whether it's good, and more negative news if you ask whether it's bad. Instead, try a more neutral search term, or ask using both terms and weigh the results of each.

Get other perspectives: Especially with an AI chatbot, you can request a broad range of views directly in the prompt. If you want to know whether you should keep drinking a cup of coffee every day, ask the chatbot for a variety of opinions and the evidence behind them. The researchers tried this in one of their experiments and found it produced more diversity in the results. "We asked ChatGPT to provide different perspectives in response to participants' queries and to provide as much evidence as possible to support those claims," Leung said.

At some point, stop asking: Leung said follow-up questions didn't work well. Unless those questions explicitly requested broader answers, they tended to produce a narrower, more confirmatory effect. In many cases, people who asked a lot of follow-up questions ended up "deeper in the rabbit hole."




