Experts warn that artificial intelligence could reshape teenagers’ brains


Artificial intelligence is reshaping the workplace, and it is increasingly finding its way into the hands of teens and children.

From help with homework to chatting with AI “friends,” tools like ChatGPT have free online versions that are easy for younger users to access. These AI-powered chatbots, built on large language models (LLMs), generate human-like responses that have raised concern among parents, teachers, and researchers.

A 2024 Pew Research Center survey found that 26% of American teens ages 13 to 17 say they have used ChatGPT for their schoolwork, double the rate from the previous year. Awareness of the chatbot rose to 79% in 2024 from 67% in 2023.

Regulators have taken note. In September, the FTC ordered seven companies, including OpenAI, Alphabet, and Meta, to explain how their AI-powered chatbots affect children and teens.

In response to the increased scrutiny, OpenAI announced the same month that it will launch a custom ChatGPT experience with parental controls for users under 18 and develop tools to better predict a user’s age. The system will automatically direct minors to a “ChatGPT experience with age-appropriate policies,” the company said.

The risks of children using AI-powered chatbots

However, some experts worry that early exposure to artificial intelligence — especially as today’s younger generations grow up with the technology — may negatively impact how children and teens think and learn.

A preliminary 2025 study from researchers at MIT’s Media Lab examined the cognitive cost of using LLMs in essay writing. The 54 participants, ages 18 to 39, were asked to write an essay and were divided into three groups: one could use an AI chatbot, another could use a search engine, and a third had to rely solely on their own knowledge.

The convenience of this tool today will have a cost later, and it will most likely accumulate.

Nataliya Kosmina

Research scientist, Massachusetts Institute of Technology

The study, which has not yet completed peer review, found that brain connectivity “systematically decreases with the amount of external support.”

“The brain-only group showed the strongest and broadest networks, the search engine group showed moderate interaction, and LLM assistance elicited the weakest overall (neural) coupling,” according to the study.

Ultimately, the study suggests that reliance on AI chatbots may leave people feeling less ownership of their work and lead to “cognitive debt,” a pattern of putting off mental effort in the short term that may erode creativity or make users more susceptible to manipulation in the long term.

“The ease of obtaining this tool today will have a cost later, which will most likely accrue,” said Nataliya Kosmina, the MIT Media Lab research scientist who led the study. She added that the findings also suggest that relying on LLMs may lead to “important problems related to critical thinking.”

Children, in particular, could be at risk of negative cognitive and developmental effects from using AI chatbots early on. To help mitigate these risks, researchers agree that it is important for anyone, especially young people, to build skills and knowledge first before relying on AI tools to complete tasks.

“Develop a skill for yourself (first), even if you don’t become an expert at it,” Kosmina said.

Doing so makes it easier to spot inconsistencies and AI hallucinations, a phenomenon in which inaccurate or fabricated information is presented as fact, which also helps “support the development of critical thinking,” she added.

“For younger children… I think it’s very important to limit the use of generative AI, because they really need more opportunities to think critically and independently,” said Pilyoung Kim, a professor at the University of Denver and an expert in child psychology.

There are also privacy risks that children may not recognize, Kosmina explained, and it is important that these tools are used responsibly and safely. “We need to teach in general, not just AI literacy, but (also) computer literacy,” she said. “You really need good tech hygiene.”

Children also have a greater tendency to anthropomorphize, or attribute human characteristics or behavior to non-human entities, Kim said.

“We now have these machines that talk just like a human,” Kim said, which can put children in vulnerable situations. “Simple praise from these social robots can really change their behavior,” she added.
