
cross-posted from: https://piefed.world/post/374427

paywall bypass: https://archive.is/whVMI

the study the article is about: https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract

article text:

AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study

By Harry Black

August 12, 2025 at 10:30 PM UTC

Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced £11 million ($14.8 million) in funding for a new trial to test how AI can help catch breast cancer earlier.

The AI in the study probably prompted doctors to become over-reliant on its recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” the scientists said in the paper.

They surveyed four endoscopy centers in Poland and compared detection success rates three months before AI implementation and three months after. Some colonoscopies were performed with AI and some without, at random. The results were published in The Lancet Gastroenterology and Hepatology journal.

Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will “probably be higher” as AI becomes more powerful.

What’s more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.

“Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy,” Ahmad, who wasn’t involved in the research, wrote in a comment published alongside the article.

A study conducted by MIT this year raised similar concerns after finding that using OpenAI’s ChatGPT to write essays led to less brain engagement and cognitive activity.

top 29 comments
[-] came_apart_at_Kmart@hexbear.net 35 points 2 days ago

I've been around a few AI Heads and watching them degrade in real time is tragic.

don't think through a problem, develop a strategy for analysis, or even just have a small, casual meeting to talk through the situation with some context... just dump quick thoughts into chatgpt and if it seems ok, pull the trigger and move on.

I'm not even talking about coding, I'm talking about community promotional material development, public project planning and evaluation, and other high level conceptual shit where honestly someone should be accountable in the future to ask some human-ass question like "why did you do it like that?"

this isn't spell/grammar checking or flagging a dataset's errors. it's smooshy brain think and feel stuff, and why anyone would let that part of their brain atrophy is beyond me.

[-] SunsetFruitbat@lemmygrad.ml 3 points 2 days ago* (last edited 2 days ago)

But the study isn't even about LLMs? It doesn't really say, but this video talks about the kind of AI that's used for this stuff https://www.youtube.com/watch?v=mq_g7xezRW8

which... really isn't LLMs? I'm going to assume the AI in question is like in that video? and it just reminds me of image recognition software. To add, nowhere does the study even mention LLMs. Rather unrelated, but also hearing others talk about someone's "brain atrophy" just feels gross since it comes across as dehumanizing.

to go back to that news article, they reference this https://info.thelancet.com/hubfs/Press%20embargo/AIdeskillingCMT.pdf in which they write

Computer-aided polyp detection (CADe) in colonoscopy represents one of the most extensively evaluated uses of AI in medicine, demonstrating clinical efficacy in multiple randomised controlled trials (RCTs).

and to quote this different article https://www.cancer.gov/news-events/cancer-currents-blog/2023/colonoscopy-cad-artificial-intelligence

These systems are based on software that, as the colonoscope snakes through the colon, scans the tissue lining it. The CAD software is “trained” on millions of images from colonoscopies, allowing it to potentially recognize concerning changes that might be missed by the human eye. If its algorithm detects tissue, such as a polyp, that it deems suspicious, it lights the area up on a computer screen and makes a sound to alert the colonoscopy team.

so it is image recognition and has nothing to do with generative ai.
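to make that concrete, this is roughly all a CADe-style system amounts to on the software side. a minimal sketch, assuming OpenCV is installed and using a hypothetical detect_polyps() stand-in for the trained detection model (this is not the actual study software): the detector proposes regions on each frame of the live feed and they get drawn as green boxes for the clinician to check.

```python
# Minimal sketch of a CADe-style overlay, NOT the study's actual software.
# Assumes opencv-python; detect_polyps() is a hypothetical stand-in for a
# trained object-detection model.
import cv2

def detect_polyps(frame):
    """Hypothetical detector: returns a list of (x, y, w, h) boxes."""
    return []  # a real system would run a trained detection model here

cap = cv2.VideoCapture(0)  # stand-in for the endoscope's live video feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_polyps(frame):
        # highlight the suspicious region; the clinician still decides what it is
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("live feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```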

[-] dat_math@hexbear.net 2 points 1 day ago* (last edited 1 day ago)

so it is image recognition and has nothing to do with generative ai.

I'm not sure if I'd go so far as to say it has nothing to do with generative AI

in both cases, a human is deliberately forfeiting their opportunity to exercise their attention to solve some problem in favor of pressing a "machine learning will do it" button

[-] SunsetFruitbat@lemmygrad.ml 1 points 1 day ago* (last edited 1 day ago)

How so? How is it related to generative AI then? It's not generating images or text. Sure, it's trained like any other machine learning stuff, but from various videos on computer aided polyp detection, it's just literally putting a green rectangular box over what it thinks is a polyp for health professionals to investigate or check out. Looking up other videos on computer aided polyp detection just shows this too. It's just literal image recognition and that's it. So genuinely, how is that generative "ai" or related to it? What images is it generating? Phones and cameras also have this function, using image recognition to detect faces and such. Is there something wrong with image recognition? I'm genuinely confused here.

in both cases, a human is deliberately forfeiting their opportunity to exercise their attention to solve some problem in favor of pressing a “machine learning will do it” button

That is such a bad framing of this, especially if you willingly overlook how even this news article and the study mention that CADe has helped detection numbers

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

and to go to that comment from Omer that the news article linked

A recently published meta-analysis of 44 RCTs suggested an absolute increase of 8% in the adenoma detection rate (ADR) with CADe-assisted colonoscopy

To add, just watch the video linked or find other videos of computer aided polyp detection, and tell me how the machine is solving the problem on its own. Or this video that covers this more in depth: https://www.youtube.com/watch?v=n9Gd8wK04k0 titled "Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He..." from SAGES

But again, the health professional is looking at a monitor with a live camera feed, and all said technology is doing is highlighting things in a green rectangular box for health professionals to investigate further. How is the machine taking their attention away when it is rather telling health professionals "hey, might want to take a closer look at this!" and bringing their attention to something to investigate further? Seriously I don't understand. How is that bad?

Another thing is, just because a health professional is putting their attention on something doesn't mean they won't miss something, and like again, I don't see what's wrong with CADe if it highlights something said health professional might have missed, especially since, again, all it is doing is highlighting things on a monitor with a camera feed for a health professional to further check out.

What am I missing here?

[-] dat_math@hexbear.net 1 points 14 hours ago* (last edited 14 hours ago)

What am I missing here?

Mostly, the pith of what I wrote, which has little to do with value judgement, quality of diagnosis, or even patient outcomes, and more to do with the similarity between the neurological effects on the practitioners associated with using discriminative models to do object detection or image segmentation in endoscopy and those of using generative models to accomplish other tasks

You claimed they had nothing to do with each other. I disagree and stated one way in which they are similar: both involve the practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model do it and maybe doing something else with their attention (maybe not, only the practitioner can know what else they might do). It would seem in the "Endoscopist deskilling..." paper, that particular variable was left free (as opposed to being controlled in some way that could be task-relevant or task-irrelevant, to provide a contrast and better understand what's really going on in practitioners' minds).

To elaborate a bit further, when I said,

a human is deliberately forfeiting their opportunity to exercise their attention to solve some problem in favor of pressing a "machine learning will do it" button

I didn't mean that a human is necessarily no longer doing anything with their attention. Specifically, when a human uses a machine learning model to solve some problem (e.g., which region of an image to look at during a colonoscopy), this changes what happens in their mind. They may still do that function themselves, compare their own ideas of where to look in the image with the model's output and evaluate both regions, or everywhere near both regions, or they might do absolutely nothing beyond looking solely in the region(s) output by their model. We don't know and this is totally immaterial to my claim, which is that any outsourcing of the calculation of that function alters what happens in the mind of the practitioner. It's probable that there are methodologies that generally enhance performance and protect practitioners from the so-called deskilling. However, merely changing the function performed by the model in question from generative to discriminative does not necessarily mean it will be used in a way that avoids eroding the user's competence.

[-] SunsetFruitbat@lemmygrad.ml 1 points 5 hours ago* (last edited 4 hours ago)

Mostly, the pith of what I wrote, which has little to do with value judgement, quality of diagnosis, or even patient outcomes, and more to do with the similarity between the neurological effects on the practitioners associated with using discriminative models to do object detection or image segmentation in endoscopy and those of using generative models to accomplish other tasks

You claimed they had nothing to do with each other. I disagree and stated one way in which they are similar: both involve the practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model do it and maybe doing something else with their attention (maybe not, only the practitioner can know what else they might do).

But we're specifically talking about discriminative models here, and less about generative models in this instance. They aren't using generative models to help with this. If you want to talk about generative models with other tasks that actually use generative models, go for it, but it has nothing to do with CADe since CADe doesn't use generative models. I do understand that you're trying to connect the two with how you said they're "forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model do it"

But this has less to do with AI than with tools in general, does it not, since tools can also cause 'neurological effects'? Lots of other things can fall into this, especially if we leave out the machine learning portion. Like, health professionals tend to use pulse oximeters to take heart rate besides oxygen levels, while they're busy doing something else like asking questions or whatever, before looking at it and determining whether that seems right or wrong and noting it down. They are also using a tool that forfeits their attention to solve a problem, because for heart rate, they could just as easily take it manually, but usually they don't, not unless something warrants further investigation. Obviously, pulse oximeters aren't generative or discriminative AI; I'm just giving an example where attention is "forfeited".

Either way, it just seems disingenuous. It focuses on a superficial similarity, the "practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor of having a machine learning model do it", and equates the two as if they're the same. Both have a distinctive purpose. Sure, they're the same if we're focusing on the overall form or general appearance of these tools, since they both result from machine learning, but in their content or essence there is a clear difference in what they're built to do. They function differently and do different things; otherwise there would be no distinction between discriminative and generative at all, and that only disappears if we go by a superficial similarity like taking attention away, which other tools share as well, not just discriminative or generative ai.
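Just to make that distinction concrete, here's a toy sketch (assuming numpy and a recent scikit-learn; this has nothing to do with the actual CADe software or any particular LLM): in the statistical sense, a discriminative model only learns to map an input to a label, while a generative model learns the data distribution and can also produce new samples.

```python
# Toy illustration of discriminative vs generative models on made-up 2-D data.
# Assumes numpy and scikit-learn >= 1.0; purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression  # discriminative: models P(y | x)
from sklearn.naive_bayes import GaussianNB           # generative: models P(x | y) and P(y)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

disc = LogisticRegression().fit(X, y)  # learns only a decision boundary
gen = GaussianNB().fit(X, y)           # learns per-class densities, so it can
                                       # also "imagine" a new sample for a class
new_sample = rng.normal(gen.theta_[1], np.sqrt(gen.var_[1]))

print("discriminative prediction:", disc.predict([[2.5, 2.5]]))
print("generative prediction:    ", gen.predict([[2.5, 2.5]]))
print("sample drawn from the generative model's class-1 fit:", new_sample)
```

Both are "machine learning", but only the second kind produces new content; CADe-style detection just points at what's already on the screen.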

I didn’t mean that a human is necessarily no longer doing anything with their attention. Specifically, when a human uses a machine learning model to solve some problem (e.g., which region of an image to look at during a colonoscopy), this changes what happens in their mind. They may still do that function themselves, compare their own ideas of where to look in the image with the model’s output and evaluate both regions, or everywhere near both regions, or they might do absolutely nothing beyond looking solely in the region(s) output by their model. We don’t know and this is totally immaterial to my claim, which is that any outsourcing of the calculation of that function alters what happens in the mind of the practitioner. It’s probable that there are methodologies that generally enhance performance and protect practitioners from the so-called deskilling. However, merely changing the function performed by the model in question from generative to discriminative does not necessarily mean it will be used in way that avoids eroding the user’s competence.

I have to ask, but what "both regions"? This is all happening on a live camera feed; there is only one region, that of the camera feed, or rather the person's insides that they're investigating. If CADe sees something it thinks is a polyp, it highlights it on the camera feed for the health professional to look at further; there isn't any other region. CADe isn't producing a different model of someone's insides and putting that on the camera feed. It is producing a rectangle just to highlight something, but that still all falls within the region of the camera feed of someone's insides. Either way, a health professional still has to look further inside someone and investigate.

I can see the argument of, like, someone performing a colonoscopy with CADe mainly relying on the program to highlight things while just passively looking around, but that speaks more to the individual than to the AI, and that amounts more to negligence. Another thing to note is that there are also false positives, so even if someone is just relying on it to highlight something, they still have to investigate further instead of taking it at face value, which still requires competency, like determining whether that mass is a polyp or something normal. And there's nothing stopping a health professional without CADe from being negligent as well. Like missing more subtle polyps because they didn't immediately recognize them, since it's not what they're used to seeing, and moving on; or alternatively, at first glance seeing what they think is a polyp, but it turns out it's not after further investigation; or, if they're being negligent, deciding it doesn't need further investigation and just calling it a polyp at the end of the day even though it isn't one.

Either way, I don't really see that CADe changes much besides trying to get health professionals to further investigate something. The only way I can see this being an issue with CADe is if they couldn't access it due to technical issues or other reasons; then yes, deskilling would be a problem. On the other hand, they could just reschedule if it's a tech issue or wait until it's back up, though that still sucks for the patient considering the preparation for such exams. But that's only an issue if over-reliance is an issue, and I can see over-reliance maybe being a problem to an extent; at the same time, CADe still makes health professionals investigate further since it's not solving anything for them, and just because they use it doesn't mean their competency is all thrown out the window.

Besides, those things can likely be addressed in different ways to lower deskilling while still using CADe. CADe in general does seem to be helpful to an extent, since it's just another tool in the toolbox, and a lot of these criticisms, like the ones about attention or deskilling, fall outside of AI and into more general stuff that isn't unique to AI but applies to tools and technology in general.

[-] HexReplyBot@hexbear.net 2 points 2 days ago

I found a YouTube link in your comment. Here are links to the same video on alternative frontends that protect your privacy:

[-] VILenin@hexbear.net 21 points 2 days ago

Chronic AI use has functionally the same effects as deep frying your brain

[-] MizuTama@hexbear.net 18 points 2 days ago

I mean, you're outsourcing all of your mental labor, in some cases good, in some cases bad. Not like this particular case listed above is new to more modern AI either; rudimentary forms have been causing skill atrophy for decades.

It's like me complaining I get worse at my second language when I click on English subs for a few months.

[-] VILenin@hexbear.net 3 points 2 days ago

Not medically related but educators I know are telling me that students becoming entirely dependent on AI is resulting in an issue that’s far worse than just forgetting a second language if you don’t use it. It’s like they’re forgetting how to think, or how to organize their thoughts, and make independent assessments.

[-] MizuTama@hexbear.net 1 points 1 day ago

I mean, those are still mental skills, even if we typically consider them so ubiquitous that they are thought of as typical function. I know children with particularly overbearing parents have previously had issues with independent assessments due to rarely having to, or being left alone to, make their own decisions. If you sit there and go, "grok is this true?" for every single idea you hear and take the cyber prophet at its word, I wouldn't be surprised if you essentially cook your brain, as most critical functions are still skills that can weaken and atrophy.

If they weren't, education wouldn't be able to improve assessment abilities. It's worse because these skills are far more critical to function; I can live without my second language at the end of the day and it probably won't have catastrophic consequences.

The second language thing was the quickest analogy I could think of, something closer is how post-covid, educators I know mentioned students were far less socially capable as during isolation they essentially missed building some social skills and those they did have atrophied, leaving them relatively socially incompetent compared to students that predated COVID lockdown.

There is a similarity for students with LLM use (also just noting here we're probably talking about very different technologies in this example than what the doctors are using; AI is being used as an obfuscating term that is technically true. We're talking about the harm caused by bullet trains and lifted diesel pickup trucks like they're the same exact thing because they both transport you somewhere). They don't build some of the skills needed if they've been using it since it went mainstream (3ish years), and the skills they did have are probably weakened compared to where they were before.

[-] SunsetFruitbat@lemmygrad.ml 2 points 2 days ago* (last edited 2 days ago)

Except it doesn't, no more than using a computer "deep fries" someone's brain, and framing this stuff in terms of "deep frying" just feels gross and dehumanizing. Besides that, to go to this study, it mainly seems to be referencing something like image recognition, and the AI in question has less to do with generative AI or LLMs. Especially since the study doesn't even mention LLMs.

and to go back to that news article, they reference this comment by Omer Ahmad near the bottom
https://info.thelancet.com/hubfs/Press%20embargo/AIdeskillingCMT.pdf

Computer-aided polyp detection (CADe) in colonoscopy represents one of the most extensively evaluated uses of AI in medicine, demonstrating clinical efficacy in multiple randomised controlled trials (RCTs).

and quoting this different article https://www.cancer.gov/news-events/cancer-currents-blog/2023/colonoscopy-cad-artificial-intelligence

These systems are based on software that, as the colonoscope snakes through the colon, scans the tissue lining it. The CAD software is “trained” on millions of images from colonoscopies, allowing it to potentially recognize concerning changes that might be missed by the human eye. If its algorithm detects tissue, such as a polyp, that it deems suspicious, it lights the area up on a computer screen and makes a sound to alert the colonoscopy team.

so how exactly is this causing "deep frying"?

[-] VILenin@hexbear.net 4 points 2 days ago

I’m struggling to see how anything you’ve written refutes what I said.

You quoted descriptions of what the technology is and how it works, but that doesn’t say anything about what effects it has on cognitive abilities. Of course the introductory paragraph from an article about the study doesn’t prove anything. It’s explaining what it’s talking about before it discusses its cognitive effects because otherwise the reader would be confused.

Here’s a quote following the section you quoted, which is the next paragraph after the one you quoted:

Krzysztof Budzyń and colleagues add further complexity to the issues surrounding the clinical adoption of CADe by suggesting that continuous exposure to AI might impair endoscopic performance in non-AI assisted colonoscopy. […] Remarkably, an absolute decline of –6·0% (95% CI –10·5 to –1·6; p=0·009) in ADR (from 28·4% [226 of 795] to 22·4% [145 of 648]) was observed in standard, non-AI assisted colonoscopies performed after the introduction of AI.

These findings temper the current enthusiasm for rapid adoption of AI-based technologies such as CADe and highlight the importance of carefully considering possible unintended clinical consequences. Although previous experimental studies have alluded to negative modification of behaviour after AI exposure, the study by Budzyń and colleagues provides the first real-world clinical evidence for the phenomenon of deskilling, potentially affecting patient-related outcomes. Crucially, the findings also encourage an essential re-evaluation of previous RCTs involving CADe. The comparative control arm of previous studies consisted of standard non-AI assisted colonoscopies, assumed to represent baseline performance. However, previous or concurrent AI exposure might have impaired performance during non-AI assisted colonoscopies, due to deskilling, suggesting that the apparent superiority of AI in these studies could be an artefact or at least amplified by this phenomenon.

—-

no more than using a computer "deep fries" someone brain

Yes outsourcing your mental labor to non-AI computer programs is also bad for your brain. Google Maps killed everyone’s ability to navigate. AI is making it worse. It’s getting run over by a Honda Civic versus getting run over by a fully loaded truck. Obviously skill atrophy isn’t unique to AI.

Just using a computer doesn’t cause the same level of cognitive impairment as outsourcing the majority of your mental labor.

(And yes, these additional studies are about LLMs, but the fundamental issue of outsourcing your mental labor remains the same.)

gross way to frame this stuff to in terms of "deep frying" which just feels dehumanizing.

What? I was complaining to everyone around me that I felt like my brain had been deep fried after a bout of COVID. I legitimately don’t understand this perspective.

[-] SunsetFruitbat@lemmygrad.ml 1 points 1 day ago* (last edited 1 day ago)

I think it does, since the AI in question is just simply image recognition software. I don't exactly see how it affects cognitive abilities. I can see how it can affect skills, sure, and perhaps there could be an over-reliance on it! but for cognitive abilities themselves, I don't see it. Something else too, it's important to note that this is in reference to non-AI assisted colonoscopies, and it doesn't necessarily mean it's bad. Like to go back to the news article under this all:

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

and to go back to that comment from Omer that the article linked

A recently published meta-analysis of 44 RCTs suggested an absolute increase of 8% in the adenoma detection rate (ADR) with CADe-assisted colonoscopy

It could be argued that AI helped more. However I think a few better questions are: if AI is helping health professionals detect things more, what is the advantage of going back to non-AI assisted then? Why should non-AI assisted be preferred if AI assisted is helping more? Is this really a problem, and what could help if it is a problem? I think it is clear that it does help to an extent, so just getting rid of it doesn't seem like a solution, but I don't know! I'm not a health professional who works with this stuff or is involved in this work.

There is this video that covers more about CADe here https://www.youtube.com/watch?v=n9Gd8wK04k0 titled "Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He..." from SAGES

I just genuinely don't see what is wrong with CADe, especially if it helps health professionals catch things that they might have missed to begin with, and like again, CADe is simply just highlighting things for health professionals to investigate further; how is there something wrong with that?

To add, just because something is being mentally outsourced doesn't necessarily mean that's bad. I don't think google maps killed people's ability to navigate, it just simply made it easier, no? Should we just go back to compasses and paper maps? Or even further, just navigate by the stars and bust out our sextants? Besides, mentally offloading can be good, freeing us up to do more; it just depends on what the end goal is. I don't necessarily see what is wrong with mentally offloading things.

I also don't understand your example with getting run over? I wouldn't want to get hit by either vehicle, since both can kill or cause lifelong injuries.

I’m not going to go into those other articles much since it’s veering into another topic, but I do understand LLM have a tendency to cause people to become over reliant on it or take it at face value, but I don’t agree with any notion that it's doing things, like making things worse or causing something as serious as cognitive impairment since that is a very big claim and like millions and millions of people are using this stuff. I do think however, there should be more public education on these things like using LLM right and not taking everything it generates at face value. To add, with a lot of those studies I would be interested to what studies are coming out of China to, since they also have this stuff to

What? I was complaining to everyone around me that I felt like my brain had been deep fried after a bout of COVID. I legitimately don’t understand this perspective.

Somehow I was supposed to get that from this?

Chronic AI use has functionally the same effects as deep frying your brain

That's a bit unfair, to assume that I'm somehow supposed to get that just based off that single sentence, because what you said here is a lot different from the other, and with that added context, forgive me! there's nothing wrong with that!

cw: ableism

It's just, I really don't like it when criticism of AI turns into people pretty much saying others are getting "stupid", since besides the ableism there, millions of people use this stuff, and it also just reeks of, like, how to word this, “Everyone around me is a sheep and I'm the only enlightened one” kind of stuff. People aren't stupid, nor are the people who use any of this stuff, and I just dislike this framing, especially when it's framed as this stuff causing "brain damage", when it's not, and your comment without that added context felt like it was saying that.

[-] LeeeroooyJeeenkiiins@hexbear.net 22 points 2 days ago

No shit, you stop doing a thing you practice all the time and you get worse at it. If the AI can find tumors at an equal or better rate than doctors can, then to answer whether this is something you should give a shit about or not, ask yourself the following question: is the doctor's job to "find tumors," or do they have, idk, other responsibilities that can be focused on after offloading that specific job to computers

Like you could have the exact same article about doctors getting used to laparoscopic surgery no longer being as good at using hand held scalpels, but I doubt any of you would walk away with the take of "wow laparoscopy should be banned!!" or w/e

[-] 7bicycles@hexbear.net 15 points 2 days ago

is the doctor's job to "find tumors," or do they have, idk, other responsibilities that can be focused on after offloading that specific job to computers

This is too small scale. It is the healthcare system's job to find tumors, but that's bigger than a doctor. Therein lies the problem, like I'm sure AI can do a good enough - even better - job of it than the guy who's looking at your X-Rays after his 49 hour shift, no doubt. But that shit should be run by a nonprofit or government agency which is definitely not the case here. Otherwise, there's the very real possibility of market capture. They do a bang up job of it until every clinic doesn't know how to do it by hand anymore, then jack up the prices or make the services worse. This needs to be open source and run by something that is not legally profit oriented.

As it stands, I think this is winning the battle, losing the war.

[-] RedWizard@hexbear.net 17 points 2 days ago

Otherwise, there's the very real possibility of market capture. They do a bang up job of it until every clinic doesn't know how to do it by hand anymore, jack up the prices or make the services worse

The headlines write themselves. In the next 10 years, you'll see NYT headlines that say things like "Deaths from cancer are on the rise, despite technological advancement in detection, and no one knows why."

[-] KuroXppi@hexbear.net 5 points 2 days ago* (last edited 2 days ago)

!remindme10yrs

[-] BodyBySisyphus@hexbear.net 14 points 2 days ago* (last edited 2 days ago)

Yeah, I feel like there's a little bit more to unpack in the study they cite:

Between Sept 8, 2021, and March 9, 2022, 1443 patients underwent non-AI assisted colonoscopy before (n=795) and after (n=648) the introduction of AI (median age 61 years [IQR 45–70], 847 [58·7%] female, 596 [41·3%] male). The ADR [adenoma detection rate] of standard colonoscopy decreased significantly from 28·4% (226 of 795) before to 22·4% (145 of 648) after exposure to AI, corresponding with an absolute difference of –6·0% (95% CI –10·5 to –1·6; p=0·0089).

So prior to the introduction of AI the success rate was 28% and after AI it dropped to 22%, so a six-point drop. Meanwhile, another study cited in the paper said that ADR went up 8 points using an AI tool. Also, while the sample of patients is large, the sample of doctors was small, only 14, and one had a 40% drop in ADR. There weren't any control endoscopists who had never done AI to see if there's some natural temporal variation or if the poor performer was just not eating his wheaties.
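If anyone wants to sanity-check those figures, here's a quick sketch (a standard two-proportion Wald interval, which may not be exactly the method the paper used) that reproduces the quoted numbers from the counts in the abstract:

```python
# Back-of-the-envelope check of the ADR figures quoted above.
# Uses a normal-approximation (Wald) 95% CI for a difference of two proportions;
# the paper's exact method may differ slightly.
from math import sqrt

before_hits, before_n = 226, 795  # ADR numerator/denominator before AI exposure
after_hits, after_n = 145, 648    # non-AI colonoscopies after AI was introduced

p1, p2 = before_hits / before_n, after_hits / after_n
diff = p2 - p1
se = sqrt(p1 * (1 - p1) / before_n + p2 * (1 - p2) / after_n)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"ADR before: {p1:.1%}, after: {p2:.1%}")
print(f"absolute difference: {diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
print(f"relative drop: {diff / p1:+.1%}")
# -> about -6.0% absolute (-10.5% to -1.6%), matching the paper
```

That's also where the article's "about 20%" figure comes from: a roughly 6-point absolute drop off a 28% baseline is about a 20% relative decline.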

[-] RedWizard@hexbear.net 11 points 2 days ago

I'm not sure that this is an apt comparison, since the laparoscopy tools would still present themselves to the surgeon as "the instrument, which the worker animates and makes into his organ with his skill and strength, and whose handling therefore depends on his virtuosity." Since laparoscopy isn't an autonomous system.

These AI systems take the entire process of identifying cancer and automate it. The doctors in this position are no longer required to have this knowledge since the AI "possesses [the] skill and strength in place of the [doctor]" becoming the "virtuoso". Under our capitalist system, this leaves little incentive to continue the process (given mass adoption of the technology) of expending capital on the training necessary. "Moreover, it must be remembered that the more simple, the more easily learned the work is, so much the less is its cost to production, the expense of its acquisition, and so much the lower must the wages sink – for, like the price of any other commodity, they are determined by the cost of production."

Obviously, this is only one task among many tasks the specialist performs, and there will still be a need for the whole of the specialist's skills. It does, however, produce worse outcomes if, say, the specialist is moved to a facility that lacks this technology after a significant amount of time relying on it. This isn't an issue for laparoscopy in "developed" countries; it is a nearly ubiquitous technology, making the skills of surgeons trained in laparoscopy very portable.

There is ultimately still a net positive here, since these models can be more accurate than humans at identifying cancer. It, however, is another illustration of the cognitive impact AI has on people who engage with it regularly. It illustrates that the subsumption process described in Capital also applies to these AI systems, as machines in the labor process.

[-] LeeeroooyJeeenkiiins@hexbear.net

Since laparoscopy isn't an autonomous system.

These AI systems take the entire process of identifying cancer and automate it

It is effectively the same thing, controlling tools with a controller is automating numerous processes that they will 10000% not be able to perform as deftly with their hands even if the steps performed are exactly the same

And, again, is recognizing cancer in a scan the doctor's primary function, or is it knowing how to treat it once its presence is established? You could outsource all the radiography to another human being and still have the same outcome of "the doctor isn't as good at recognizing it anymore." There's a cost benefit analysis to be done of is it better for a doctor to spend a lot of time looking at these scans, and be better at looking at them as a result, or for them to do other shit with their time

[-] RedWizard@hexbear.net 8 points 2 days ago* (last edited 2 days ago)

It is effectively the same thing, controlling tools with a controller is automating numerous processes that they will 10000% not be able to perform as deftly with their hands even if the steps performed are exactly the same

It's not effectively the same thing at all. One is an entirely new skill (laparoscopy); the other is the elimination of an entire skill (AI detection of cancer). The laparoscope does nothing at all unless the surgeon is there to operate it, and the use of laparoscopy still demands the previous skills required to perform surgery in the first place.

You could outsource all the radiography to another human being and still have the same outcome of "the doctor isn't as good at recognizing it anymore."

You wouldn't need to offload the entire process to another human being; you would simply eliminate that human from the labor force. In your scenario, there is still a human with the skill to identify cancer, whereas the AI process begs to have positions eliminated, potentially leaving no one available for that task. The obvious issue with that is leaving the task fully in the hands of a black box, owned and operated by a for-profit corporation, whose incentives are dictated by the mechanics of capitalism and not the Hippocratic oath or some other human-centered demand.

Regardless, it would seem you appear to have ignored the part of my comment that states:

Obviously, this is only one task among many tasks the specialist performs, and there will still be a need for the whole of the specialist's skills. It does, however, produce worse outcomes if, say, the specialist is moved to a facility that lacks this technology after a significant amount of time relying on it. This isn't an issue for laparoscopy in "developed" countries; it is a nearly ubiquitous technology, making the skills of surgeons trained in laparoscopy very portable. [...] There is ultimately still a net positive here, since these models can be more accurate than humans at identifying cancer."

And that comes with the huge caveat that @7bicycles@hexbear.net points out in his comment:

This is too small scale. It is the healthcare system's job to find tumors, but that's bigger than a doctor. Therein lies the problem, like I'm sure AI can do a good enough - even better - job of it than the guy who's looking at your X-Rays after his 49 hour shift, no doubt. But that shit should be run by a nonprofit or government agency which is definitely not the case here. Otherwise, there's the very real possibility of market capture. They do a bang up job of it until every clinic doesn't know how to do it by hand anymore, jack up the prices or make the services worse. This needs to be open source and run by something that is not legally profit oriented.

[-] LeeeroooyJeeenkiiins@hexbear.net

It's not effectively the same thing at all. One is an entirely new skill (laparoscopy); the other is the elimination of an entire skill (AI detection of cancer).

No, it's the same damn thing. Laparoscopic surgery uses controllers and robots and shit and a surgeon who does it all the time instead of traditional surgery is going to lose skill in performing surgery in the same way that someone relying on automation to parse radiographic test results would lose their ability to read them properly

Like do you really think someone playing COD is going to retain skills with a gun like come on dawg

You wouldn't need to offload the entire process to another human being

It was a direct comparison to the use of AI for this purpose in general. The "other human being" is effectively the AI and giving them the task of parsing radiographic test results would do the exact same damn thing to the doctor in this case, diminish their ability to read it themselves.

And that comes with the huge caveat that @7bicycles@hexbear.net points out in his comment:

Do you need to steal someone else's good point to make an argument? I didn't address it then because it's an entirely different argument ("this is bad because of capitalism") that I agreed with

[-] leftAF@hexbear.net 7 points 2 days ago* (last edited 2 days ago)

Yeah it is not the end of the world, not much is. Humans didn't even have endoscopy until the past couple hundred years. We'll just always hope that the human labor saved by computerization was worth offloading; bugs aside, if all goes well it should be more standard and there won't be anything missed/delayed as a result. Outcomes materially affecting individuals are most sympathetic, but there can be lost innovations from having fewer people exposed to the inner workings of a practice and potentially doing something no computer or human has before. Maybe endoscopy was worth it, I'm not an expert.

[-] segfault11@hexbear.net 15 points 2 days ago

do you think the AMA would limit the number of AI doctors

[-] Clippy@hexbear.net 11 points 2 days ago

i thought the preview was kind of funny

[-] RedWizard@hexbear.net 11 points 2 days ago

In the machine, and even more in machinery as an automatic system, the use value, i.e. the material quality of the means of labour, is transformed into an existence adequate to fixed capital and to capital as such; and the form in which it was adopted into the production process of capital, the direct means of labour, is superseded by a form posited by capital itself and corresponding to it. In no way does the machine appear as the individual worker's means of labour. Its distinguishing characteristic is not in the least, as with the means of labour, to transmit the worker's activity to the object; this activity, rather, is posited in such a way that it merely transmits the machine's work, the machine's action, on to the raw material -- supervises it and guards against interruptions. Not as with the instrument, which the worker animates and makes into his organ with his skill and strength, and whose handling therefore depends on his virtuosity. Rather, it is the machine which possesses skill and strength in place of the worker, is itself the virtuoso, with a soul of its own in the mechanical laws acting through it; and it consumes coal, oil etc. (matières instrumentales), just as the worker consumes food, to keep up its perpetual motion. The worker's activity, reduced to a mere abstraction of activity, is determined and regulated on all sides by the movement of the machinery, and not the opposite. The science which compels the inanimate limbs of the machinery, by their construction, to act purposefully, as an automaton, does not exist in the worker's consciousness, but rather acts upon him through the machine as an alien power, as the power of the machine itself. The appropriation of living labour by objectified labour -- of the power or activity which creates value by value existing for-itself -- which lies in the concept of capital, is posited, in production resting on machinery, as the character of the production process itself, including its material elements and its material motion. The production process has ceased to be a labour process in the sense of a process dominated by labour as its governing unity. Labour appears, rather, merely as a conscious organ, scattered among the individual living workers at numerous points of the mechanical system; subsumed under the total process of the machinery itself, as itself only a link of the system, whose unity exists not in the living workers, but rather in the living (active) machinery, which confronts his individual, insignificant doings as a mighty organism.

The greater division of labour enables one labourer to accomplish the work of five, 10, or 20 labourers; it therefore increases competition among the labourers fivefold, tenfold, or twentyfold. The labourers compete not only by selling themselves one cheaper than the other, but also by one doing the work of five, 10, or 20; and they are forced to compete in this manner by the division of labour, which is introduced and steadily improved by capital.

Furthermore, to the same degree in which the division of labour increases, is the labour simplified. The special skill of the labourer becomes worthless. He becomes transformed into a simple monotonous force of production, with neither physical nor mental elasticity. His work becomes accessible to all; therefore competitors press upon him from all sides. Moreover, it must be remembered that the more simple, the more easily learned the work is, so much the less is its cost to production, the expense of its acquisition, and so much the lower must the wages sink – for, like the price of any other commodity, they are determined by the cost of production. Therefore, in the same manner in which labour becomes more unsatisfactory, more repulsive, do competition increase and wages decrease.

[-] jUzzo6@hexbear.net 5 points 2 days ago

I'm utterly convinced that capitalists will use AI to proletarianize many white-collar jobs and that we will have what Marx described in Kapital 1 again. We are all weavers now.

“Poverty unknown except for times of war or famine”

this post was submitted on 13 Aug 2025
51 points (100.0% liked)
