Breast Cancer (mander.xyz)
[-] yesman@lemmy.world 59 points 3 months ago

The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI's methods are bullshit. Under no circumstance should we accept a "black box" explanation.

[-] CheesyFox@lemmy.sdf.org 23 points 3 months ago

good luck reverse-engineering millions if not billions of seemingly random floating point numbers. It's like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which for an image model is the number of pixels in the input image.
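Back-of-the-envelope, just to put a number on it: even a toy one-hidden-layer network on a modest grayscale image (all sizes made up for illustration, not from any real model) already gives you tens of millions of those floats:

```python
# Parameter count for a toy one-hidden-layer network on a 224x224
# grayscale image. Sizes are illustrative, not from any real model.
inputs = 224 * 224                     # one input dimension per pixel
hidden = 512                           # a modest hidden layer
outputs = 2                            # cancer / no cancer

params = inputs * hidden + hidden      # layer 1 weights + biases
params += hidden * outputs + outputs   # layer 2 weights + biases

print(params)  # 25691650 "seemingly random" floats to read
```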

Under no circumstance should we accept a "black box" explanation.

Go learn at least the basic principles of neural networks, because that sentence alone makes me want to slap you.

[-] thecodeboss@lemmy.world 13 points 3 months ago

Don't worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

Hey look, this took me like 5 minutes to find.

Censius guide to AI interpretability tools

Here's a good thing to wonder: if you don't know how your black-box model works, how do you know it isn't racist?

Here's what looks like a university paper on interpretability tools:

As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn't get you in trouble with the EU.

Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.
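If you want to see the idea concretely, here's a toy sketch (my own example, not from any of the links above) of one interpretability technique, permutation importance, run on scikit-learn's bundled breast cancer dataset:

```python
# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops. Features the model actually relies on
# hurt the most when scrambled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Top five features the model actually leans on:
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

It doesn't tell you everything about the model, but it's a long way from "black box, can't be helped".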

Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, and who am I to take away mankind's finer pleasures, but this attitude of yours is profoundly stupid. It's weak. You don't want to know? It doesn't make you curious? Why are you comfortable not knowing things? That's not how science is propelled forward.

[-] Tja@programming.dev 5 points 3 months ago

"Enough" is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn't racist.

[-] match@pawb.social 3 points 3 months ago

interpretability costs money though :v

[-] CheeseNoodle@lemmy.world 20 points 3 months ago

iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

[-] Johanno@feddit.org 6 points 3 months ago

Well, in theory you can explain how the model comes to its conclusion. However, I'd guess that only 0.1% of "AI engineers" are actually capable of that. And those probably cost 100k per month.

[-] Atrichum@lemmy.world 6 points 3 months ago
[-] CheeseNoodle@lemmy.world 13 points 3 months ago

This one's from 2019: Link
I was a bit off the mark: it's not that the models they use aren't black boxes, it's that they could have made them interpretable from the beginning and chose not to, likely due to liability.

[-] Tryptaminev@lemm.ee 4 points 3 months ago

It depends on the algorithms used. The lazy approach is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well on many problems and give you interpretable decisions.
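For example, a depth-limited decision tree (a toy sketch on scikit-learn's bundled breast cancer data, not a clinically serious model) will print its entire decision process as rules you can audit:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Every decision the model can ever make, as human-readable rules:
rules = export_text(tree, feature_names=list(data.feature_names))
print(rules)
```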

[-] MystikIncarnate@lemmy.ca 12 points 3 months ago

IMO, the "black box" thing is basically ML developers hand-waving and saying "it's magic" because they know it would take way too long to explain all the underlying concepts in order to even start to explain how it works.

I have a very crude understanding of the technology. I'm not a developer, I work in IT support. I have several friends that I've spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they've explained a few of the concepts to me, and I'd be lying if I said that none of it went over my head. I've done programming and development, I'm senior in my role, and I have a lifetime of technology experience and education... And it goes over my head. What hope does anyone else have? If you're not a developer or someone ML-focused, yeah, it's basically magic.

I won't try to explain. I couldn't possibly recall enough about what has been said to me, to correctly explain anything at this point.

[-] homura1650@lemm.ee 22 points 3 months ago

The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

For instance, the cutting edge in protein folding (at least as of a few years ago) is Google's AlphaFold. I'm sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is "the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions". Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.

In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.

[-] Tryptaminev@lemm.ee 3 points 3 months ago

Thank you for giving some insights into ML, which is now often just branded "AI". Just one note though: there are many ML algorithms that do not employ neural networks and don't have billions of parameters. Especially in binary image classification (looks like cancer or not), techniques like support vector machines achieve great results with very few parameters.
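A rough sketch of the contrast (toy example on scikit-learn's bundled breast cancer data): a linear SVM's entire "model" is one weight per input feature plus a bias, so you can read off which measurements push toward which class:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000)).fit(X, y)

svm = clf.named_steps["linearsvc"]
print(svm.coef_.shape)  # (1, 30): thirty weights and one bias, not billions
```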

[-] 0ops@lemm.ee 2 points 3 months ago

Machine learning is a subset of artificial intelligence, which is a field of research as old as computer science itself:

The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field's long-term goals.[16]

https://en.m.wikipedia.org/wiki/Artificial_intelligence

[-] match@pawb.social 9 points 3 months ago* (last edited 3 months ago)

y = w^T x

hope this helps!
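(For anyone not in on the joke, that really is the whole forward pass of a linear model. In numpy, with made-up numbers:)

```python
import numpy as np

w = np.array([0.2, -0.5, 0.8])  # made-up weights
x = np.array([1.0, 2.0, 3.0])   # made-up inputs

y = w.T @ x  # that's it, that's the model
print(y)     # ~1.6
```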

[-] reddithalation@sopuli.xyz 7 points 3 months ago

our brain is a black box, we accept that. (and control the outcomes with procedures, checklists, etc)

It feels like lots of professionals can't exactly explain every single aspect of how they do what they do; sometimes it just feels right.

[-] rekorse@lemmy.world -3 points 3 months ago

What a vague and unprovable thing you've stated there.

this post was submitted on 02 Aug 2024
1522 points (98.4% liked)

Science Memes
