cross-posted from: https://lemmy.ml/post/34581821
paywall bypass: https://archive.is/whVMI
the study the article is about: https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
article text:
AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study
By Harry Black
August 12, 2025 at 10:30 PM UTC
Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.
AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.
Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced £11 million ($14.8 million) in funding for a new trial to test how AI can help catch breast cancer earlier.
The AI in the study probably prompted doctors to become over-reliant on its recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” the scientists said in the paper.
They surveyed four endoscopy centers in Poland and compared detection success rates three months before AI implementation and three months after. Some colonoscopies were performed with AI and some without, at random. The results were published in The Lancet Gastroenterology and Hepatology journal.
Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will “probably be higher” as AI becomes more powerful.
What’s more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.
“Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy,” Ahmad, who wasn’t involved in the research, wrote in a comment published alongside the article.
A study conducted by MIT this year raised similar concerns after finding that using OpenAI’s ChatGPT to write essays led to less brain engagement and cognitive activity.
Okay, you say this, but these tools are privately owned. What happens when one day the provider slams them with a 1000% price increase? Hospitals can either pay or go back to doctors who now detect cancer worse than before. It gives these AI companies undue influence and turns a tool into a crutch and an addiction that can be leveraged to drive up healthcare costs and punish providers who don't play ball. That could mean deaths caused by doctors in systems that don't have access to the tool because they're in a payment dispute with the vendor, or had it but stopped paying for it, and patients may never know any of this.
This is a nightmare for human beings. We have fought hard to grow smart as a species and to have educated professionals who have learned to use their brains, and now those professionals are being trained by these machines to stop using their brains, to let them atrophy, to become dependent on these systems and end up worse than before the moment the systems are removed.
It will be used to attack the wages of doctors, and I guarantee they won't be compensated with cheaper schooling (doctors need at least six years of university plus additional years of training before they can practice on their own, an immense expense and burden in a time of rising costs and huge debt). That will lead to shortages of doctors, who will be replaced with AI and nurses not up to the task, and we'll be told this is fine. Having access to a thinking human being may become a gated luxury that few insurance companies want to shell out for until you've been evaluated by AI systems several times, and only IF those systems deem it necessary. Some AI systems will make mistakes that kill patients, and insurance companies will be fine with this, because a quickly dead patient is usually cheaper than paying for months or years of treatments and/or surgeries, so they'll have a perverse incentive to push patients toward those systems. Doctors take an oath to do no harm. Not all take it as seriously as they should, but usually there's some compassion there, whereas a computer system would not care one bit if you're denied and, unlike a doctor, won't fight for you against the insurance companies.
Paragraph one says things getting better is bad because what if we stop.
Paragraph two is bemoaning the abacus for ruining mental math.
Paragraph three blames a new gizmo for the system as it exists.
All machine learning is bad and scary and we should get rid of all of it. Clearly we should stay exactly where we are because the system works so well as it is.
It's crazy how much damage LLMs have done to ML optics. We were quietly using ML to vastly improve medicine over the past decade, and now suddenly someone hears "AI" and they think ChatGPT is telling their doctor what to do.
It's important to remember that this is not generative AI. It doesn't say who owns it though.
A competitor could easily be developed with access to the same data.
For the record, I didn't read all that because it's too long.
Is there any AI usage in this scenario that you would accept?
Not the way AI companies are making them, no. They aren't aiming for accuracy; they're aiming for profit.