Whoops. Sorry, mods. Reposting with links.
https://nitter.space/thelillygaddis/status/1904852790460965206#m
https://archive.is/vuiMj
https://x.com/thelillygaddis/status/1904852790460965206#m
Any AI model is technically a black box: there isn't a "human-readable" interpretation of the learned function.
The data going in, the training algorithm, the encode/decode steps, that's all available.
But the learned weights themselves are nonsensical to a human reader.
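For a concrete picture of what that opacity means, here's a minimal sketch (assuming PyTorch; the toy model is made up, not anyone's real network):

    # Minimal sketch: every weight is fully available for inspection,
    # but reading them tells you nothing about what the model "means".
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(8, 16),
        nn.ReLU(),
        nn.Linear(16, 2),
    )

    for name, param in model.named_parameters():
        # Just tensors of floats with no human-readable meaning attached.
        print(name, tuple(param.shape))
        print(param.data.flatten()[:5])  # e.g. tensor([ 0.1042, -0.2871, ...])

Every parameter is right there in the open; none of it maps to a concept you can read off.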
That’s not true; there are a ton of observability tools for the internal workings.
The top post on HN is literally a new white paper about this.
https://news.ycombinator.com/item?id=43495617
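The basic mechanism that kind of tooling builds on is reading out intermediate activations. A minimal sketch of the idea, assuming PyTorch (the hook and layer choice here are illustrative, not taken from the paper):

    # Minimal sketch: a forward hook captures intermediate activations,
    # which is the raw material interpretability tools work from.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    # Attach a hook to the hidden layer; probes, sparse autoencoders,
    # etc. start from exactly this kind of capture.
    model[1].register_forward_hook(save_activation("relu"))

    model(torch.randn(1, 8))
    print(activations["relu"].shape)  # torch.Size([1, 16])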
Thank you, that's amazing.
They also made a video:
https://youtu.be/Bj9BD2D3DzA
Some simpler "AI models" are also directly explainable or readable by humans.
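For instance (a minimal sketch, assuming scikit-learn), a shallow decision tree can be printed as the literal if/else rules it learned:

    # Minimal sketch: a small decision tree IS its own explanation --
    # the fitted model is a readable set of threshold rules.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

    # Prints the learned rules verbatim, e.g.
    # |--- petal width (cm) <= 0.80
    # |   |--- class: 0
    print(export_text(tree, feature_names=list(data.feature_names)))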
In almost exactly the same sense as our own brains' neural networks are nonsensical :D
Yeah, despite the very different evolutionary paths there are remarkable similarities between, say, octopus, crow, and dolphin cognition.