cross-posted from: https://piefed.world/post/374427

paywall bypass: https://archive.is/whVMI

the study the article is about: https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract

article text:

AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study

By Harry Black

August 12, 2025 at 10:30 PM UTC

Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced £11 million ($14.8 million) in funding for a new trial to test how AI can help catch breast cancer earlier.

The AI in the study probably prompted doctors to become over-reliant on its recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” the scientists said in the paper.

They surveyed four endoscopy centers in Poland and compared detection success rates three months before AI implementation and three months after. Some colonoscopies were performed with AI and some without, at random. The results were published in The Lancet Gastroenterology and Hepatology journal.
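
As a rough illustration of what a "detection success rate" comparison looks like, here is a minimal sketch with made-up counts (these are not the study's data):

```python
# Back-of-the-envelope sketch of a before/after detection-rate comparison.
# The counts are invented for illustration; they are not the study's figures.
def detection_rate(cases_with_finding: int, total_colonoscopies: int) -> float:
    """Share of colonoscopies in which at least one growth was found."""
    return cases_with_finding / total_colonoscopies

before_ai = detection_rate(280, 1000)            # hypothetical pre-AI period
after_ai_unassisted = detection_rate(224, 1000)  # hypothetical post-AI period, assistance removed

relative_change = (after_ai_unassisted - before_ai) / before_ai
print(f"before AI: {before_ai:.1%}")
print(f"after AI, unassisted: {after_ai_unassisted:.1%}")
print(f"relative change: {relative_change:+.0%}")  # about -20% with these made-up counts
```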

Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will “probably be higher” as AI becomes more powerful.

What’s more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.

“Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy,” Ahmad, who wasn’t involved in the research, wrote in a comment alongside the article.

A study conducted by MIT this year raised similar concerns after finding that using OpenAI’s ChatGPT to write essays led to less brain engagement and cognitive activity.

dat_math@hexbear.net (1 day ago, last edited 1 day ago):

What am I missing here?

Mostly, the pith of what I wrote, which has little to do with value judgement, quality of diagnosis, or even patient outcomes, and more to do with the similarity between the neurological effects on practitioners of using discriminative models for object detection or image segmentation in endoscopy and the effects of using generative models to accomplish other tasks.

You claimed they had nothing to do with each other. I disagree and stated one way in which they are similar: both involve the practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model do it, while perhaps doing something else with their attention (perhaps not; only the practitioner can know what else they might do). It would seem that in the "Endoscopist deskilling..." paper, that particular variable was left free (as opposed to being controlled in some way, task-relevant or task-irrelevant, to provide a contrast and better understand what's really going on in practitioners' minds).

To elaborate a bit further, when I said,

a human is deliberately forfeiting their opportunity to exercise their attention to solve some problem in favor of pressing a "machine learning will do it" button

I didn't mean that a human is necessarily no longer doing anything with their attention. Specifically, when a human uses a machine learning model to solve some problem (e.g., deciding which region of an image to look at during a colonoscopy), this changes what happens in their mind. They may still perform that function themselves, compare their own ideas of where to look with the model's output and evaluate both regions, or everywhere near both regions, or they might do absolutely nothing beyond looking solely in the region(s) output by the model. We don't know, and this is totally immaterial to my claim, which is that any outsourcing of the calculation of that function alters what happens in the mind of the practitioner. It's probable that there are methodologies that generally enhance performance and protect practitioners from the so-called deskilling. However, merely changing the function performed by the model in question from generative to discriminative does not necessarily mean it will be used in a way that avoids eroding the user's competence.

SunsetFruitbat@lemmygrad.ml (17 hours ago, last edited 16 hours ago):

Mostly, the pith of what I wrote, which has little to do with value judgement, quality of diagnosis, or even patient outcomes, and more to do with the similarity between the neurological effects on practitioners of using discriminative models for object detection or image segmentation in endoscopy and the effects of using generative models to accomplish other tasks.

You claimed they had nothing to do with each other. I disagree and stated one way in which they are similar: both involve the practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model do it, while perhaps doing something else with their attention (perhaps not; only the practitioner can know what else they might do).

But we're specifically talking about discriminative models here, not generative ones. They aren't using generative models to help with this. If you want to talk about generative models and the tasks that actually use them, go for it, but it has nothing to do with CADe, since CADe doesn't use generative models. I do understand that you're trying to connect the two by saying practitioners are "forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model" do it.

But doesn't that have less to do with AI and more to do with tools in general, since tools can also cause 'neurological effects'? Lots of other things fall into this, especially if we leave out the machine learning part. For example, health professionals tend to use pulse oximeters to take heart rate as well as oxygen levels while they're busy doing something else, like asking questions, before looking at the reading, judging whether it seems right or wrong, and noting it down. They too are using a tool that takes over a problem their attention could solve: they could easily take a heart rate manually, but usually they don't, unless something warrants further investigation. Obviously pulse oximeters aren't generative or discriminative AI; it's just an example of attention being "forfeited".

Either way, it just seems disingenuous, since it focuses on a superficial similarity, the "practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model do it," and treats the two as if they were the same. Each has a distinct purpose. Sure, if we focus only on their overall form or general appearance, both being products of machine learning, then they're the same, but in their content or essence there is a clear difference in what they're built to do. They function differently and do different things; otherwise there would be no distinction between discriminative and generative at all. They only look the same if we go by that superficial similarity of taking attention away, which other tools share as well, not just discriminative or generative AI.
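
To make that distinction concrete, here is a toy sketch (nothing to do with any real CADe product; the weights and the Gaussian below are made up): a discriminative model takes a frame it is handed and scores what's in it, while a generative model produces new data without being handed a frame at all.

```python
# Toy illustration of the discriminative/generative distinction, not any real CADe system.
import numpy as np

rng = np.random.default_rng(0)

# Discriminative: estimate p(polyp | frame features) for a frame we already have.
weights = np.array([1.5, -0.8, 2.0])  # stand-in for a trained detector's weights
bias = -0.5

def discriminative_score(frame_features: np.ndarray) -> float:
    """Return a probability that the given frame contains a polyp."""
    logit = frame_features @ weights + bias
    return float(1.0 / (1.0 + np.exp(-logit)))

# Generative: model p(x) and sample brand-new feature vectors from it.
polyp_mean = np.array([1.0, 0.2, 1.4])  # stand-in for a learned distribution

def generative_sample(n: int) -> np.ndarray:
    """Produce n synthetic feature vectors; no input frame is required."""
    return rng.normal(loc=polyp_mean, scale=0.3, size=(n, 3))

if __name__ == "__main__":
    frame = np.array([0.9, 0.1, 1.2])  # features of one real frame
    print("p(polyp | frame) =", round(discriminative_score(frame), 3))
    print("3 synthetic samples:\n", generative_sample(3))
```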

I didn't mean that a human is necessarily no longer doing anything with their attention. Specifically, when a human uses a machine learning model to solve some problem (e.g., deciding which region of an image to look at during a colonoscopy), this changes what happens in their mind. They may still perform that function themselves, compare their own ideas of where to look with the model's output and evaluate both regions, or everywhere near both regions, or they might do absolutely nothing beyond looking solely in the region(s) output by the model. We don't know, and this is totally immaterial to my claim, which is that any outsourcing of the calculation of that function alters what happens in the mind of the practitioner. It's probable that there are methodologies that generally enhance performance and protect practitioners from the so-called deskilling. However, merely changing the function performed by the model in question from generative to discriminative does not necessarily mean it will be used in a way that avoids eroding the user's competence.

I have to ask, though: what "both regions"? This is all happening on a live camera feed; there is only one region, the camera feed itself, or rather the person's insides being examined. If CADe sees something it thinks is a polyp and highlights it on the camera feed for the health professional to look at further, there isn't any other region. CADe isn't producing a different model of someone's insides and putting that on the feed. It does produce a rectangle to highlight something, but that still falls entirely within the region of the camera feed of someone's insides. Either way, the health professional still has to look further and investigate.
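
To picture it concretely: the detector proposes a box, and that box gets drawn onto the same frame, nothing more. A toy sketch along those lines (detect_polyps here is a made-up stand-in, not any vendor's model):

```python
# Minimal sketch of the overlay idea: proposed boxes are drawn onto the very
# same camera frame the clinician is already looking at. detect_polyps is a
# hypothetical stand-in for a real CADe model and just returns a fixed box.
import numpy as np
import cv2  # pip install opencv-python

def detect_polyps(frame: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Hypothetical detector: return (x, y, w, h) boxes it thinks are polyps."""
    return [(200, 150, 80, 60)]  # made-up coordinates

def annotate(frame: np.ndarray) -> np.ndarray:
    """Draw each proposed box onto the original frame; nothing else changes."""
    out = frame.copy()
    for (x, y, w, h) in detect_polyps(out):
        cv2.rectangle(out, (x, y), (x + w, y + h), color=(0, 255, 0), thickness=2)
    return out

if __name__ == "__main__":
    # Stand-in for one frame of the live feed; a real system would read
    # frames from the endoscope video stream instead.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    annotated = annotate(frame)
    # Same frame, same field of view, just with a highlight the
    # endoscopist still has to verify for themselves.
    cv2.imwrite("annotated_frame.png", annotated)
```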

I can see the argument that someone performing a colonoscopy with CADe might rely mainly on the program to highlight things while just passively looking around, but that says more about the individual than about the AI, and it amounts to negligence. Another thing to note is that there are false positives, so even someone relying on it to highlight things still has to investigate further rather than take the highlight at face value, which still requires competence, like determining whether that mass is a polyp or something normal. And nothing stops a health professional without CADe from being negligent too: missing subtler polyps because they didn't immediately recognize them as what they're used to seeing and moved on, or seeing at first glance what they think is a polyp only for it to turn out not to be one after further investigation, or, if they're being negligent, skipping further investigation and calling it a polyp at the end of the day even though it isn't one.

Either way, I don't really see that CADe changes much beyond prompting health professionals to investigate something further. The only scenario where I can see this being an issue with CADe is if they can't access it due to technical issues or other reasons; then yes, deskilling would be a problem. On the other hand, they could just reschedule if it's a technical issue, or wait until it's back up, though that still sucks for the patient considering the preparation such exams require. But that's only an issue if over-reliance is an issue, and while I can see over-reliance being a problem to an extent, CADe still makes health professionals investigate further, since it doesn't solve anything by itself, and using it doesn't mean their competence is thrown out the window.

Besides, those things can likely be addressed in other ways to reduce deskilling while still using CADe. CADe does seem helpful to an extent; it's just another tool in the toolbox, and a lot of these criticisms, like the ones about attention or deskilling, fall outside of AI and into more general concerns that aren't unique to AI but apply to tools and technology in general.
