this post was submitted on 01 Feb 2024
18 points (100.0% liked)
SneerClub
I keep flashing back to that idiot who said they were employed as an AI researcher and came here a few months back to debate us. they were convinced multimodal LLMs would be the turning point into AGI, i.e. the moment when your bullshit text generation model can also do visual recognition. they linked a bunch of papers to try and sound smart, and I looked at a couple and went "is that really it?" because all of the results looked exactly like the section you quoted. we now have multimodal LLMs, and needless to say, nothing really came of it. I assume the idiot in question is still convinced AGI is right around the corner, though.
I caught a whiff of that stuff in the HN comments, along with something called "Solomonoff induction", which I'd never heard of, and the Wiki page for which has a huge-ass "low quality article" warning: https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference.
It does sound like the current AI hype has crested, so it's time to hype the next one, where all these models will be unified somehow and start thinking for themselves.
Solomonoff induction is a big rationalist buzzword. It's meant to be the platonic ideal of bayesian reasoning which, if implemented, would be the best deducer in the world and get everything right.
It would be cool if you could build this, but it's literally impossible. The induction method is provably uncomputable: it requires weighing every possible program, including ones that never halt, so actually running it would amount to solving the halting problem.
The hope is that if you build a shitty approximation to solomonoff induction that "approaches" it, it will perform close to the perfect solomonoff machine. Does this work? Not really.
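To make that concrete, here's a toy sketch (everything in it is my own illustrative construction, not anything from the rationalist literature): restrict the hypothesis space to repeating bit patterns, weight each one by 2^-length the way the Solomonoff prior does, and predict whichever next bit carries more posterior mass. This only "works" because my toy program class is trivially total and finite; the real Solomonoff machine ranges over all programs, where even checking whether a program is consistent with your data is undecidable, which is exactly where the approximation guarantees evaporate.

```python
from itertools import product

def toy_programs(max_len):
    # toy "programs": bit patterns that repeat forever,
    # each given the Solomonoff-style prior weight 2^-length
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** -n

def run(pattern, length):
    # our toy machine always halts; the real one doesn't,
    # which is the whole problem
    return (pattern * (length // len(pattern) + 1))[:length]

def predict_next(observed, max_len=6):
    # sum posterior mass over programs consistent with the observed prefix
    mass = {"0": 0.0, "1": 0.0}
    for pattern, weight in toy_programs(max_len):
        out = run(pattern, len(observed) + 1)
        if out[: len(observed)] == observed:
            mass[out[-1]] += weight
    return max(mass, key=mass.get), mass
```

Feeding it `"010101"` predicts `"0"`, because the short pattern `"01"` dominates the posterior. Cute, but the moment you allow arbitrary programs you need a step cap to avoid hanging on non-halting ones, and with the cap you lose any claim to be "approaching" the ideal inductor.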
My metaphor is that it's like coming to a river you want to cross, and being like "Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I'll be able to get across". You aren't Moses. Build a bridge.
"Solomonoff induction" is the string of mouth noises that Rationalists make when they want to justify their preconceived notion as the "simplest" possibility, by burying all the tacit assumptions that actual experience would let them recognize.