top 24 comments
[-] Solaris1220@lemmy.world 4 points 2 weeks ago

But that takes actual work. See how the LLM systems are constantly wrong? That is because after you get to about 80% accuracy the rest will murder you.

This would take time and actual investment. Not something big tech can handle.

[-] Mountainaire@lemmy.world 1 points 1 week ago

~~can~~ wants to

FTFY

[-] Thorry@feddit.org 0 points 1 week ago

Agreed, this is exactly what reinforcement learning and neural networks are good at. Calling them AI is beyond dumb, but hey, marketing will be marketing. It's pattern recognition, which is cool, but nobody would call that intelligent otherwise.

Another big issue with the marketing is that they only report the success rate, not the failure rate. Doctors praise the cases being caught, but dislike the models pointing out stuff that is clearly not a tumor. It wastes time for people already short on time. These models also risk doctors becoming over-reliant on them, even though they can have serious blind spots and thus miss stuff a doctor would have caught. Or the other way around: people receive treatment (often not without risk, discomfort and cost to the patient) where none was needed.

The thing that bothers me the most is how it's always framed as a win for AI. Like, see, AI is good at diagnosing cancer (which then gets extrapolated to curing cancer for some bizarre reason), so that useless chat bot is also good somehow. Because AI.

[-] Nurse_Robot@lemmy.world 3 points 2 weeks ago

That's particularly useful for pancreatic cancer, if it's accurate, reliable, cost effective, and practical in the real world.

[-] raspberriesareyummy@lemmy.world -1 points 2 weeks ago

In other words: not useful at all. (Didn't read the article because it already misuses the AI acronym in the title, indicating it was written by some idiot with nothing to say)

[-] ivan@piefed.social 1 points 2 weeks ago

The article actually describes it well enough: scientists trained a model on data from CT scans of patients who were treated for other conditions some time before being diagnosed with pancreatic cancer.

[-] raspberriesareyummy@lemmy.world -1 points 2 weeks ago

In my first sentence, I was referring to the combination of adjectives in the question by the previous commenter. No one in today's healthcare systems is gonna pay for preemptive screenings to save peasant lives like yours or mine.

[-] otter@lemmy.ca 1 points 2 weeks ago

There are healthcare systems in the world other than the one in the USA.

[-] raspberriesareyummy@lemmy.world -1 points 1 week ago

Yes, but all of them are worsening in the interests of profit, in case you weren't following the news. Germany is just scrapping skin cancer prevention, thanks to our corrupt fucks in government.

[-] stsquad@lemmy.ml 0 points 2 weeks ago

Of course you do, if treating the patient down the line is going to cost you more. Public health systems have a vested interest in healthier citizens.

[-] saimen@feddit.org 0 points 2 weeks ago

The problem is they're probably from the US, which doesn't really have a public healthcare system.

[-] Nurse_Robot@lemmy.world 0 points 2 weeks ago

You used the AI acronym in the same way, so I'm confused by your arrogant-sounding statement.

[-] raspberriesareyummy@lemmy.world 0 points 2 weeks ago

Did I though? Are they using a model with any kind of abstraction layer that actually understands relationships between objects?

[-] FauxLiving@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Yes and yes.

These are questions that you wouldn't have to ask if you didn't smugly decide that you didn't need to read before contributing your opinion.

If you can't be arsed to read the article, here's the peer-reviewed paper in Science: https://www.science.org/doi/10.1126/science.adz4433

[-] Telodzrum@lemmy.world -2 points 2 weeks ago* (last edited 1 week ago)

It’s not, though, and that’s the issue.

False positives are at least as dangerous as false negatives and AI solutions like this have massive problems with over diagnosing.

EDIT: It’s really fun to have a bunch of home-bound tech workers try to talk down to me about the science behind and practice of medicine.

[-] FauxLiving@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

> False positives are at least as dangerous as false negatives and AI solutions like this have massive problems with over diagnosing.

Absolutely 100% wrong.

In pancreatic ductal adenocarcinoma, a false positive means a follow-up scan. A false negative means death: the 5-year survival is near zero once it's caught late, but exceeds 80% when caught early.

In the study, the radiologists' lower false positive rate is achieved by missing 78% of cancers. That's not a safer trade-off, it's just a different way to fail. "Overdiagnosis" also requires a disease that might not have harmed the patient; PDAC doesn't have a harmless form. Every missed case is a lost life, while every false positive is an extra doctor's appointment.

This system detects twice as many cancers and was flagging them, on average, 675 days (nearly 2 years!) before clinical detection.
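The trade-off argued above can be sketched numerically. This is a rough back-of-the-envelope illustration on a hypothetical cohort of 1,000 cancer patients; only the figures quoted in the comment (78% missed by radiologists, twice as many detections for the model, ~80% 5-year survival when caught early vs near zero when late) are taken from the thread, and the 5% "near zero" late-stage survival is an assumption:

```python
# Compare expected 5-year survivors under two detectors (hypothetical cohort).
COHORT = 1_000
SURVIVAL_EARLY = 0.80   # from the comment: >80% when caught early
SURVIVAL_LATE = 0.05    # assumption standing in for "near zero"

def expected_survivors(detection_rate: float) -> float:
    caught = COHORT * detection_rate
    missed = COHORT - caught
    return caught * SURVIVAL_EARLY + missed * SURVIVAL_LATE

radiologists = expected_survivors(0.22)  # radiologists miss 78%
model = expected_survivors(0.44)         # "detects twice as many cancers"

print(f"radiologists: ~{radiologists:.0f} expected 5-year survivors")
print(f"model:        ~{model:.0f} expected 5-year survivors")
```

Under these assumptions the extra detections translate into roughly 380 vs 215 expected survivors per 1,000 patients, which is the asymmetry the comment is pointing at.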

[-] Telodzrum@lemmy.world -3 points 1 week ago

You selected a single pathology which supports your otherwise specious and false argument.

Be better.

[-] FauxLiving@lemmy.world 1 points 1 week ago

If I'm wrong, then feel free to support your position with evidence or an argument showing that my statement was specious.

I linked the peer-reviewed paper, which contains the data that supports my statements on the topic.

You've made two conclusory statements and immediately resorted to insulting comments when challenged.

There is not a single aggressive pancreatic cancer where a false negative is more dangerous than a false positive.

Percutaneous biopsy has a mortality rate of approximately 0.2%. Even relatively non-malignant pancreatic cancers (say, solid pseudopapillary neoplasm) have 10-year survival rates in adults of around 88%, and that number is from cases which received surgical intervention and chemotherapy, something that would not happen with a false negative.

So even in the worst case, the false negative is multiple times more deadly. A false positive's most likely outcome is pancreatitis from the biopsy procedure.
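Plugging in the two figures from that comment makes the asymmetry concrete. A minimal sketch; the only inputs are the ~0.2% biopsy mortality and the 88% 10-year survival (i.e. 12% mortality) seen even with treatment, which serves as a lower bound for the untreated (false-negative) case:

```python
# Ratio of the two risks for an indolent pancreatic tumor:
# false positive -> biopsy (~0.2% mortality);
# false negative -> no treatment, mortality at least the 12% seen
# even WITH surgery and chemo (88% 10-year survival).
biopsy_mortality = 0.002
treated_mortality = 1 - 0.88  # lower bound for the untreated case

ratio = treated_mortality / biopsy_mortality
print(f"false negative is at least {ratio:.0f}x more deadly")
```

That works out to at least a 60-fold difference, before even accounting for the fact that an untreated tumor is deadlier than a treated one.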

[-] unpossum@sh.itjust.works 0 points 1 week ago

They selected the pathology that’s the topic of the post to support their on-topic argument. Be better, indeed.

[-] Sprocketfree@sh.itjust.works 1 points 1 week ago

Really wish people could be better collaborators instead of just being jerks. Kills any value in the conversation.

[-] Nurse_Robot@lemmy.world 0 points 1 week ago

Stop being a bad person, please.

[-] Nurse_Robot@lemmy.world 1 points 1 week ago

You're rude, arrogant, and wildly incorrect from a medical standpoint. Please delete your message and don't make comments like this in the future.

[-] Telodzrum@lemmy.world -1 points 1 week ago

No, I'm not. Sorry the ChatGPT response you got didn't cover medical science and outcomes from differing pathologies, or the extremely serious dangers of pharmacological treatment from even a correct positive, to say nothing of the terrors visited on patients by false positives.

this post was submitted on 03 May 2026
15 points (94.1% liked)
