[-] A_A@lemmy.world 1 points 12 hours ago* (last edited 12 hours ago)

... either way people are going to die due to preventable misdiagnosis.

This is not how this research is done. You can make diagnoses without applying them to patients. You can, for example, go back to a database of past cases, create diagnoses for those past cases, and then check in the present whether they were right or wrong. That way (just one example) you can build statistics. No one has to die. You don't know how this is done (frankly, I don't know a lot either ... the people who wrote the article probably know much more than you and I).
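
(A rough sketch of what I mean, in Python, with a made-up file name, made-up column names, and a dummy stand-in model, not anyone's actual study: score the model against past cases whose outcome is already known.)

```python
# Sketch: retrospective evaluation of a diagnostic model against a
# database of already-resolved past cases. The file name, column names,
# and predict_diagnosis() are all hypothetical placeholders.
import csv

def predict_diagnosis(symptoms: str) -> str:
    # Stand-in for whatever AI model is being evaluated.
    return "flu" if "fever" in symptoms else "unknown"

correct = 0
total = 0
with open("past_cases.csv", newline="") as f:
    for row in csv.DictReader(f):
        prediction = predict_diagnosis(row["symptoms"])
        # Ground truth is the diagnosis that was eventually confirmed
        # for that past patient, so no new patient is ever put at risk.
        if prediction == row["confirmed_diagnosis"]:
            correct += 1
        total += 1

print(f"Retrospective accuracy: {correct / total:.1%} over {total} past cases")
```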

After that, if we know that the AI is superior in these cases (I agree this is a big "if"), then I would choose its diagnosis and take responsibility for my choice. I wouldn't sue any doctor, and I would still be at an advantage because of this better choice.

But maybe we cannot agree on this topic. I wish you the very best, take care 😌

Maybe in a country without private medical care, but your idea doesn't work in the US.

AI is already in use, right this second, in the medical insurance industry, and it has statistically killed at least one person.

Expanding that to the part of the medical business that has some scientific backing is essentially societal suicide, unless you're rich enough to afford a real human doctor.

[-] A_A@lemmy.world 2 points 10 hours ago

Whoops, sorry, no ... I didn't have the USA in mind while writing ... so there: yes, "healthcare" is completely fucked up.
