cross-posted from: https://piefed.world/post/374427

paywall bypass: https://archive.is/whVMI

the study the article is about: https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract

article text:

AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study

By Harry Black

August 12, 2025 at 10:30 PM UTC

Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced £11 million ($14.8 million) in funding for a new trial to test how AI can help catch breast cancer earlier.

The AI in the study probably prompted doctors to become over-reliant on its recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” the scientists said in the paper.

They surveyed four endoscopy centers in Poland and compared detection success rates three months before AI implementation and three months after. Some colonoscopies were performed with AI and some without, at random. The results were published in The Lancet Gastroenterology and Hepatology journal.

Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will “probably be higher” as AI becomes more powerful.

What’s more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.

“Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy,” Ahmad, who wasn’t involved in the research, wrote in a comment published alongside the article.

A study conducted by MIT this year raised similar concerns after finding that using OpenAI’s ChatGPT to write essays led to less brain engagement and cognitive activity.

SunsetFruitbat@lemmygrad.ml 1 point 2 days ago (last edited 1 day ago)

I think it does, since the AI in question is just image recognition software. I don’t exactly see how it affects cognitive abilities. I can see how it could affect skills, sure, and perhaps there could be an over-reliance on it! But for cognitive abilities themselves, I don’t see it. Something else too: it’s important to note that this is in reference to non-AI-assisted detection, and it doesn’t necessarily mean it’s bad. Like, to go to the news article under all this:

> AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

and to go back to that comment from Omer Ahmad’s article:

> A recently published meta-analysis of 44 RCTs suggested an absolute increase of 8% in the adenoma detection rate (ADR) with CADe-assisted colonoscopy

It could be argued that AI helped more. However, I think a few better questions are: if AI is helping health professionals detect things more, what is the advantage of going back to non-AI-assisted colonoscopy? Why should non-AI-assisted be preferred if AI-assisted is helping more? Is this really a problem, and if so, what could help? I think it is clear that it does help to an extent, so just getting rid of it doesn’t seem like a solution, but I don’t know! I’m not a health professional who works with this stuff or is involved in this work.

There is this video from SAGES that covers more about CADe, titled "Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He...": https://www.youtube.com/watch?v=n9Gd8wK04k0

I just genuinely don’t see what is wrong with CADe, especially if it helps health professionals catch things that they might have missed to begin with. And again, CADe simply highlights things for health professionals to investigate further; how is there something wrong with that?

To add, just because something is mentally outsourced doesn’t necessarily mean that’s bad. I don’t think Google Maps killed people’s ability to navigate; it just made it easier, no? Should we go back to compasses and paper maps? Or even further, navigate by the stars and bust out our sextants? Besides, mentally offloading can be good, freeing us up to do more; it just depends on what the end goal is. I don’t necessarily see what is wrong with mentally offloading things.

I also don’t understand your example with getting run over. I wouldn’t want to get hit by either vehicle, since both can kill or cause lifelong injuries.

I’m not going to go into those other articles much, since that veers into another topic, but I do understand LLMs have a tendency to make people over-reliant on them or take their output at face value. I don’t agree, though, with any notion that they’re making things worse or causing something as serious as cognitive impairment, since that is a very big claim, and millions and millions of people are using this stuff. I do think, however, there should be more public education on these things, like using LLMs right and not taking everything they generate at face value. To add, with a lot of those studies, I would also be interested in what studies are coming out of China, since they have this stuff too.

> What? I was complaining to everyone around me that I felt like my brain had been deep fried after a bout of COVID. I legitimately don’t understand this perspective.

Somehow I was supposed to get that from this?

> Chronic AI use has functionally the same effects as deep frying your brain

That’s a bit unfair, assuming that I’m somehow supposed to get that just based off that single sentence, because what you said there is a lot different from the other comment. With that added context, forgive me! There’s nothing wrong with that.

cw: ableism

It’s just, I really don’t like it when criticism of AI veers into people saying others are getting "stupid", since besides the ableism there, millions of people use this stuff, and it also just reeks of, how to word this, "everyone around me is a sheep and I’m the only enlightened one" kind of stuff. People aren’t stupid, nor are the people who use any of this stuff. I just dislike this framing, especially when it’s framed as this stuff causing "brain damage", when it’s not, and without that added context your comment felt like it was saying that.

this post was submitted on 13 Aug 2025
51 points (100.0% liked)

technology
