this time in open letter format! that'll sure do it!

there are "risks", which they are definite about - the risks are not hypothetical, the risks are real! it's totes even had some acknowledgement in other places! totes real defs for sure this time guize

[-] BigMuffin69@awful.systems 21 points 6 months ago* (last edited 6 months ago)

No, they never address this. And as someone who works on large-scale optimization problems for a living, I do think it's difficult for the public to understand that, no, a 10,000 IQ super machine will not be able to just "solve these problems" in a nanosecond like Yud thinks. And it's not like, well, the super machine will just avoid having to solve them. No. NP-hard problems are fucking everywhere. (Fun fact: for many problems of interest, even approximating the solution to within a given accuracy is NP-hard, so heuristics don't even help.)
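
(To make the blowup concrete, here's a toy Python sketch. The clause encoding is made up for illustration and no real SAT solver works by exhaustive search, but the gap between the 2^n candidate space and the one-pass check is the whole point:)

```python
from itertools import product

# Toy CNF encoding: a clause is a list of ints, where 3 means x3
# and -3 means NOT x3. (Made up for illustration.)
def satisfies(clauses, assignment):
    # VERIFYING a candidate solution: one linear pass over the clauses.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n_vars):
    # FINDING a solution by exhaustive search: 2**n_vars candidates.
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if satisfies(clauses, assignment):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], n_vars=3))
# At n_vars = 300 there are 2**300 candidates (more than atoms in the
# observable universe), yet checking any one of them stays cheap.
```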

I've often found myself frustrated that more computer scientists who should know better simply do not address this point. If verifying solutions is exponentially easier than coming up with them for many difficult problems (all signs point to yes), and if a superintelligent entity actually did exist (I mean, does a SAT solver count as a superintelligent entity?), it would probably be EASY to control, since it would have to spend eons and massive amounts of energy coming up with its WORLD_DOMINATION_PLAN.exe. But you wouldn't be able to hide a supercomputer doing this massive calculation, and someone running the machine, seeing it output TURN ALL HUMANS INTO PAPER CLIPS, would say, 'ah, we are missing a constraint here, it thinks this optimization problem is unbounded' <- this happens literally all the time in practice. Not the world domination part, but a poorly defined optimization problem that turns out to be unbounded. And again, it's easy to check that the solution is nonsense.
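
(A minimal sketch of what "the solver flags the nonsense" looks like in practice, assuming you have scipy installed; the exact message text depends on which method scipy picks, but status 3 is its unbounded flag:)

```python
from scipy.optimize import linprog

# Deliberately ill-posed model: maximize x (i.e. minimize -x) over
# x >= 0 with the upper-bound constraint "forgotten", so the
# objective can grow without limit.
result = linprog(c=[-1], bounds=[(0, None)])

print(result.status)   # 3 == "problem appears to be unbounded"
print(result.message)  # the solver flags the nonsense for you
```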

I know Francois Chollet (THE GOAT) has talked about how there are no unending exponentials: the faster the growth, the faster you hit constraints IRL (running out of data, running out of chips, running out of energy, etc.). And I've definitely heard professional shitposter Pedro Domingos explicitly discuss how NP-hardness strongly implies EA/LW-type thinking is straight-up fantasy. But it's a short list of people I can think of off the top of my head who have discussed this.

Edit: bizarrely, one person I didn't mention who has gone down this line of thinking is Ilya Sutskever; however, he has come to some frankly... uh... strange conclusions -> the only way to explain the successful performance of ML is to conclude that these models are Kolmogorov minimizers, i.e., by optimizing for loss over a training set, you are doing compression, which, done optimally, means solving an undecidable problem (Kolmogorov complexity is uncomputable). Nice theory. Definitely not motivated by bad sci-fi mysticism imbued with pure distilled hopium. But from my armchair-psychologist POV, it seems he implicitly acknowledges that for his fantasy to come true, he needs to escape the limitations of Turing machines, so he has to somehow shoehorn a method for hypercomputation into Turing machines. Smh, this is the kind of behavior reserved for aging physicists, amirite lads? Yet in 2023 it seemed like the whole world was succumbing to this gaslighting. He was giving this lecture to auditoriums filled with tech bros, shilling this line of thinking to thunderous applause. I have olde CS prof friends who were like, don't we literally have mountains of evidence this is straight-up crazy talk? Like, you can train an ANN to perform addition, and if you can look me straight in the eyes and say the absolute mess of weights that results looks anything like a Kolmogorov minimizer, then I know you are trying to sell me a bag of shit.
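
(For the addition bit, a back-of-the-envelope sketch with sklearn; the architecture and hyperparameters here are arbitrary, picked only to make the point that the fitted weights are ~1200 opaque floats versus a two-token minimal program:)

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(5000, 2))
y = X.sum(axis=1)  # the target concept is literally a + b

# Arbitrary architecture, chosen just for illustration.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(X, y)

n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print(net.predict([[3.0, 4.0]]))  # approximately [7.]
print(n_params)  # ~1200 floats of mush vs. the program: lambda a, b: a + b
```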

[-] blakestacey@awful.systems 14 points 6 months ago

"Computational complexity does not work that way!" is one of those TESCREAL-zone topics that I wish I had better reading recommendations for.

[-] o7___o7@awful.systems 11 points 6 months ago* (last edited 6 months ago)

Smh, this is the kind of behavior reserved for aging physicists, amirite lads?

Bah Gawd! That man has a family!

[-] Soyweiser@awful.systems 10 points 6 months ago

Oh god, I'm not alone in thinking this, thank you! I'm not going totally crazy!

[-] BigMuffin69@awful.systems 9 points 6 months ago* (last edited 6 months ago)

I got you homie

