Artificial intelligence may be a promising way to boost workplace productivity, but leaning too heavily on the technology may prevent professionals from keeping their skills sharp. Specifically, AI may be making some doctors worse at detecting abnormalities during routine screenings, new research finds, raising concerns about dependence on the technology.
A study published this month in The Lancet Gastroenterology & Hepatology examined 1,443 patients who underwent colonoscopies with and without AI-assisted systems. It found that endoscopists who had been given access to AI tools detected about 20% fewer abnormalities when they later performed colonoscopies without the AI assistance.
The doctors' diminished ability to detect benign colon growths once they stopped using AI came as a surprise to Dr. Marcin Romańczyk, a gastroenterologist at H-T. Medical Center in Tychy, Poland, and an author of the study. The results raise questions not only about the potential complacency that can result from overreliance on AI, but also about how the technology is changing the relationship between medical practitioners and their profession's long traditions of training.
“We learned medicine from books and our mentors. We were watching them. They were telling us what to do,” Romańczyk said. “Now there is some artificial thing suggesting what we should do, where we should look, and in fact we do not know how to act in this particular situation.”
Beyond the increased use of AI in operating rooms and doctors' offices, the proliferation of automation in the workplace has brought with it hopes of enhanced performance. Goldman Sachs predicted last year that the technology could increase productivity by 25%. However, emerging research has also warned against adopting AI tools without considering their negative effects. A study from Microsoft and Carnegie Mellon University found earlier this year that among the knowledge workers surveyed, AI increased the efficiency of work but reduced critical engagement with content and eroded independent problem-solving skills.
Romańczyk's study adds to this growing body of research questioning humans' ability to use AI without compromising their own skills. In his study, the AI systems helped identify benign colon growths by placing a green box around the area where an abnormality might be. Romańczyk concedes that he and his team could not measure why the endoscopists behaved the way they did, because they did not expect this result and therefore did not collect data about its cause.
Instead, Romańczyk hypothesizes that endoscopists had become so accustomed to looking for the green box that when the technology was no longer there, they lacked that cue drawing their attention to certain areas. He called it the “Google Maps effect,” likening his findings to the changes drivers experienced in moving from the era of paper maps to GPS: many people now depend on automation to show them the most efficient route, whereas 20 years ago, one had to figure out the way for oneself.
Checks and balances on artificial intelligence
The life-or-death consequences of automation degrading critical human skills are already well documented.
In 2009, Air France Flight 447 crashed into the Atlantic Ocean en route from Rio de Janeiro to Paris, killing all 228 passengers and crew members on board. An investigation found that the plane's autopilot had disconnected after ice crystals disrupted its airspeed sensors, and that the aircraft's flight director was providing inaccurate information. The flight crew, however, had not been effectively trained to fly the plane manually in those circumstances, and followed the faulty automated guidance instead of making the appropriate corrections. The Air France crash is one of several incidents in which humans relying on the capabilities of automated aircraft had not been trained to take over manually.
“We see a situation in which we have pilots who cannot understand what the plane is doing unless a computer explains it to them,” one expert said of the Air France investigation. “This is not a problem unique to Airbus or unique to Air France. It is a new training challenge facing the entire industry.”
These incidents serve as moments of reckoning, especially for critical sectors in which human lives are at stake, according to Lynn Wu, an associate professor of operations, information and decisions at the Wharton School of the University of Pennsylvania. She said that while industries should lean into technology, the responsibility falls on institutions to ensure that humans adopt it appropriately.
“What's important is that we learn from this history of aviation and the previous generation of automation: AI can absolutely enhance performance,” Wu told Fortune. “But at the same time, we have to maintain those critical skills, so that when AI does not work, we know how to take over.”
Likewise, Romańczyk does not dismiss the presence of AI in medicine.
“AI is, or will be, part of our lives, whether we like it or not,” he said. “We are not trying to say that AI is bad and (that we should stop using it). Instead, we are saying that we should all try to investigate what is happening within our brains: how are we affected by it? How can we actually use it?”
Wu said that if professionals and specialists want to keep using automation to enhance their work, it is on them to maintain their set of critical skills. AI depends on human data to train itself, which means that if its training data is flawed, so too will be its output.
“Once we become very bad at it, AI is going to become very bad as well,” Wu said. “We have to be better in order for AI to be better.”