Your brain works differently when you use generative AI to complete a task than when you do it with your mind alone. Namely, you're less likely to remember what you did. That's the somewhat intuitive conclusion of an MIT study that looked at how people think when they write an essay, one of the first scientific studies of how using gen AI affects us.
The paper hasn't yet been peer-reviewed, and it's small (54 participants) and preliminary, but it points to the need for more research into how using tools like OpenAI's ChatGPT affects how our brains work. OpenAI didn't immediately respond to a request for comment on the research. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The results show a real difference between what happens in your brain and with your memory when you complete a task using an AI tool and when you do it with your brain alone. But don't read too much into those differences: this is just a snapshot of brain activity in the moment, not long-term evidence of changes in how your brain works all the time.
“We want to try to give some first steps in this direction and also to encourage others to ask the question,” Nataliya Kosmyna, a research scientist at MIT and the lead author of the study, told me.
The growth of AI tools like chatbots is changing how we work, how we search for information and how we write. It has all happened so fast that it's easy to forget ChatGPT first emerged as a popular tool only a few years ago, at the end of 2022. That means we're only now starting to see research on how AI use affects us.
Below is a look at what the MIT study found about what happened in the brains of ChatGPT users, and what future studies might tell us.
This is your brain on ChatGPT
The MIT researchers split their 54 participants into three groups and asked them to write essays during separate sessions over several weeks. One group was given access to ChatGPT, another was allowed to use a standard search engine (Google), and the third had neither tool, just their own brains. The researchers analyzed the texts the participants produced, interviewed the subjects immediately after they wrote the essays, and recorded the participants' brain activity using electroencephalography, or EEG.
An analysis of the language used in the essays found that those in the brain-only group wrote in more distinct ways, while those who used large language models produced fairly similar essays. The more interesting findings came from the interviews after the essays were written. Those who used only their own brains showed better recall and were better able to quote from their own writing than those who used search engines or LLMs.
Read more: AI essentials: 29 ways to make gen AI work for you, according to our experts
It may be surprising that those who relied most heavily on LLMs, and who may have copied and pasted from the chatbot's responses, were less able to quote from what they had “written.” Kosmyna noted that these interviews happened immediately after the writing, which makes the lack of recall notable. “You wrote it, right?” she said. “Aren't you supposed to know what it was?”
The EEG results also showed significant differences among the three groups. There was more neural connectivity, the interaction between parts of the brain, among the brain-only participants than in the search-engine group, and the LLM group showed the least activity. Again, not an entirely surprising finding; using tools means using less of your brain to complete a task. But Kosmyna said the research helped show exactly what the differences were: “The idea was to look closer to understand that it is different, but how is it different?” she said.
The study's authors wrote that the LLM group's results showed “weaker memory traces, reduced self-monitoring and fragmented authorship.” That could be a concern in a learning environment: “If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it.”
After the first three essays, the researchers invited participants back for a fourth session in which they were assigned to a different group. The results there, from a much smaller group of subjects (just 18), found that those who had started in the brain-only group showed more brain activity even when they used the LLM, while those who had started in the LLM group showed less neural connectivity without the LLM than the original brain-only group did.
This isn't “brain rot”
When the MIT study was released, plenty of headlines claimed it showed that using ChatGPT “rots” your brain or causes serious long-term problems. That's not exactly what the researchers found. The study looked at the brain activity of participants while they were working, their in-the-moment internal wiring. It also examined their memory of their work at that moment.
Understanding the long-term effects of AI use will require longer-term studies and different methods. Kosmyna said future research could look at other uses of gen AI, like coding, or use technology that examines different parts of the brain, such as functional magnetic resonance imaging, or fMRI. “The whole idea is to encourage more experiments, more scientific data collection,” she said.
Research into how using LLMs affects our brains is still in its early days, and it's also possible the impact isn't as significant as you might think, said Genevieve Stein-O'Brien, a neuroscientist who studies how genetics and biology help develop and build the brain, something that happens early in life. Those critical periods tend to close during childhood or adolescence, she said.
“All of that happens before you ever interact with ChatGPT or anything like that,” Stein-O'Brien told me. “There is a lot of infrastructure in place, and it's very strong.”
Stein-O'Brien said the situation could be different in children, who are increasingly coming into contact with AI technology, although studying children raises ethical concerns for scientists who want to research human behavior.
You can get a chatbot to help you write an essay, but will you remember what you wrote?
Why care about essay writing anyway?
The idea of studying AI's effect on essay writing may seem pointless to some. After all, wasn't the point of writing an essay in school just to get a grade? Why not outsource that work to a machine that can do it, if not better, then at least more easily?
The MIT study gets at the important point: essay writing is about developing your thinking, about making sense of the world around you.
“We start with what we know when we begin writing, but in the act of writing, we end up framing the next questions and thinking about new ideas or new content to explore,” said Robert Cummings, a professor of writing and rhetoric at the University of Mississippi.
Cummings has done similar research on how computer technologies affect how we write. One study involved sentence completion, what you might know informally as autocomplete. He took 119 writers and tasked them with writing an essay. Roughly half had computers with Google Smart Compose enabled, while the rest didn't. Did the writers work faster, or did they spend more time and write less because they had to navigate the suggested options? The result was that they wrote about the same amount in the same period of time. “They weren't writing different sentences, with different levels of complexity of ideas,” he told me. “It was a wash.”
ChatGPT and its ilk are a different beast. With sentence-completion technology, you're still in control of the words; you still have to make the writing choices. In the MIT study, some participants simply copied and pasted what ChatGPT said. They may not even have read the work they turned in as their own.
“My personal opinion is that when students are using AI to replace their writing, they're kind of surrendering,” Cummings said. “They're not actively engaged in their project anymore.”
The MIT researchers found something interesting in that fourth session, when they noticed that the group that had written three essays without tools showed higher levels of engagement when tools were finally made available.
“Taken together, these results support an educational model that delays AI integration until learners have engaged in sufficient self-driven effort,” they wrote. “Such an approach may promote both immediate tool effectiveness and lasting cognitive autonomy.”
Cummings said he has started teaching his composition class device-free. Students write in class, largely on personal topics that would be hard to feed into an LLM. He said he doesn't feel like he's grading papers written by AI, and his students get a chance to engage with their own ideas before asking a tool for help. “I'm not going back,” he said.