cross-posted from: https://piefed.world/post/374427
paywall bypass: https://archive.is/whVMI
the study the article is about: https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
article text:
AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study
By Harry Black
August 12, 2025 at 10:30 PM UTC
Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.
AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.
Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced £11 million ($14.8 million) in funding for a new trial to test how AI can help catch breast cancer earlier.
The AI in the study probably prompted doctors to become over-reliant on its recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” the scientists said in the paper.
They surveyed four endoscopy centers in Poland and compared detection success rates three months before AI implementation and three months after. Some colonoscopies were performed with AI and some without, at random. The results were published in The Lancet Gastroenterology and Hepatology journal.
Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will “probably be higher” as AI becomes more powerful.
What’s more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.
“Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy,” Ahmad, who wasn’t involved in the research, wrote in a comment published alongside the article.
A study conducted by MIT this year raised similar concerns after finding that using OpenAI’s ChatGPT to write essays led to less brain engagement and cognitive activity.
Chronic AI use has functionally the same effects as deep frying your brain
I mean, you're outsourcing all of your mental labor, which is good in some cases and bad in others. It's not like this particular case listed above is new to more modern AI either; rudimentary forms have been causing skill atrophy for decades.
It's like me complaining that I get worse at my second language when I click on English subs for a few months.
Not medically related, but educators I know are telling me that students becoming entirely dependent on AI is resulting in an issue that's far worse than just forgetting a second language you don't use. It's like they're forgetting how to think, how to organize their thoughts, and how to make independent assessments.
I mean, those are still mental skills, even if we typically consider them so ubiquitous that we think of them as baseline function. I know children with particularly overbearing parents have had issues with independent assessments because they never had to, or were never left alone to, make their own decisions. If you sit there and go, "grok is this true?" for every single idea you hear and take the cyber prophet at its word, I wouldn't be surprised if you essentially cook your brain, since most critical functions are still skills that can weaken and atrophy.
If they weren't, education wouldn't be able to improve assessment abilities. It's worse because the skills are far more critical to function; I can live without my second language at the end of the day, and it probably won't have catastrophic consequences.
The second language thing was the quickest analogy I could think of. Something closer is how, post-COVID, educators I know mentioned students were far less socially capable: during isolation they essentially missed building some social skills, and the ones they did have atrophied, leaving them relatively socially incompetent compared to students who predated the COVID lockdown.
There is a similarity for students with LLM use (also just noting here that we're probably talking about very different technologies in this example than what the doctors are using; AI is being used as an obfuscating term that is technically true. It's like talking about the harm caused by bullet trains and lifted diesel pickup trucks as if they're the same exact thing because they both transport you somewhere). They don't build some of the skills needed if they've been using it since it went mainstream (three-ish years ago), and the skills they did have are probably weakened compared to where they were before.
Except it doesn't, no more than using a computer "deep fries" someone's brain, and framing this stuff in terms of "deep frying" just feels gross and dehumanizing. Besides that, to go to this study, it mainly seems to be referencing something like image recognition, and the AI in question has little to do with generative AI or LLMs. The study doesn't even mention LLMs.
And to go back to that news article, they reference this comment by Omer Ahmad near the bottom
https://info.thelancet.com/hubfs/Press%20embargo/AIdeskillingCMT.pdf
using this different article https://www.cancer.gov/news-events/cancer-currents-blog/2023/colonoscopy-cad-artificial-intelligence
so how exactly is this causing "deep frying"?
I’m struggling to see how anything you’ve written refutes what I said.
You quoted descriptions of what the technology is and how it works, but that doesn’t say anything about what effects it has on cognitive abilities. Of course the introductory paragraph from an article about the study doesn’t prove anything. It’s explaining what it’s talking about before it discusses its cognitive effects because otherwise the reader would be confused.
Here’s a quote following the section you quoted, which is the next paragraph after the one you quoted:
---
Yes, outsourcing your mental labor to non-AI computer programs is also bad for your brain. Google Maps killed everyone’s ability to navigate. AI is making it worse. It’s getting run over by a Honda Civic versus getting run over by a fully loaded truck. Obviously skill atrophy isn’t unique to AI.
Just using a computer doesn’t cause the same level of cognitive impairment as outsourcing the majority of your mental labor.
(And yes, these additional studies are about LLMs, but the fundamental issue of outsourcing your mental labor remains the same.)
What? I was complaining to everyone around me that I felt like my brain had been deep fried after a bout of COVID. I legitimately don’t understand this perspective.
I think it does, since the AI in question is just image recognition software. I don’t exactly see how it affects cognitive abilities. I can see how it can affect skills, sure, and perhaps there could be an over-reliance on it! But for cognitive abilities themselves, I don't see it. Something else too: it’s important to note that the drop is in reference to non-AI-assisted detection, and it doesn’t necessarily mean it’s bad. Like, to go back to the news article underlying all of this.
And to go back to that comment from Omer Ahmad's article
It could be argued that AI helped more. However, I think a few better questions are: if AI is helping health professionals detect things more, what is the advantage of going back to non-AI-assisted detection? Why should non-AI-assisted be preferred if AI-assisted is helping more? Is this really a problem, and what could help if it is? I think it is clear that it does help to an extent, so just getting rid of it doesn't seem like a solution, but I don't know! I'm not a health professional who works with this stuff or is involved in this work.
There is a video that covers more about CADe here: https://www.youtube.com/watch?v=n9Gd8wK04k0 titled "Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He..." from SAGES.
I just genuinely don't see what is wrong with CADe, especially if it helps health professionals catch things they might have missed to begin with. And again, CADe simply highlights things for health professionals to investigate further; how is there something wrong with that?
To add, just because something is mentally outsourced doesn’t necessarily mean that’s bad. I don’t think Google Maps killed people’s ability to navigate; it just made it easier, no? Should we go back to compasses and paper maps? Or, even further, navigate by the stars and bust out our sextants? Besides, mental offloading can be good, freeing us up to do more; it just depends on what the end goal is. I don't necessarily see what is wrong with mentally offloading things.
I also don't understand your example about getting run over. I wouldn't want to get hit by either vehicle, since both can kill or cause lifelong injuries.
I’m not going to go into those other articles much since that’s veering into another topic, but I do understand LLMs have a tendency to make people over-reliant on them or take them at face value. I don’t agree with any notion that they’re making things worse or causing something as serious as cognitive impairment, though, since that is a very big claim, and millions and millions of people are using this stuff. I do think, however, that there should be more public education on these things, like using LLMs right and not taking everything they generate at face value. To add, with a lot of those studies I would be interested in what studies are coming out of China too, since they also have this stuff.
Somehow I was supposed to get that from this?
That's a bit unfair, assuming I'm somehow supposed to get that just based off that single sentence, because what you said here is a lot different from the original. With that added context, forgive me! There's nothing wrong with it.
cw: ableism
It's just, I really don’t like it when criticism of AI turns into people pretty much saying others are getting "stupid". Besides the ableism there, millions of people use this stuff, and it also just reeks of, how to word this, "everyone around me is a sheep and I'm the only enlightened one" kind of stuff. People aren’t stupid, nor are the people who use any of this stuff, and I just dislike this framing, especially when it's framed as this stuff causing "brain damage" when it's not. Your comment, without that added context, felt like it was saying that.