[-] BigMuffin69@awful.systems 12 points 2 months ago

smh they really do be out here believing there's a little man in the machine with goals and desires, common L for these folks

[-] BigMuffin69@awful.systems 11 points 3 months ago

Thank you. My wife is deathly allergic to shrimp, and I live by the motto

'If they send one of your loved ones to the emergency room, you send 10 of theirs to the deep fryer.'

[-] BigMuffin69@awful.systems 10 points 8 months ago

In my more innocent college days, there was a group of people doing a reading of it in the dorm lounge laughing their asses off. Ong I thought it was a hyper self-aware satire that was making fun of internet "umm ackshully" / iamverysmart posters. There's no way someone earnestly spent their time writing over half a million words on a self-insert Harry Potter fanfic as some form of mental masturbation... right?

Yud, it's not too late to say sike bro.

[-] BigMuffin69@awful.systems 13 points 8 months ago

lmaou bruv, great to know these clowns are both coping & seething

[-] BigMuffin69@awful.systems 11 points 9 months ago* (last edited 9 months ago)

my b lads, I corrected it

[-] BigMuffin69@awful.systems 11 points 9 months ago* (last edited 9 months ago)

And the number of angels that can dance on the head of a pin? 9/11

[-] BigMuffin69@awful.systems 13 points 9 months ago

Big Yud: You try to explain how airplane fuel can melt a skyscraper, but your calculation doesn't include relativistic effects, and then the 9/11 conspiracy theorists spend the next 10 years talking about how you deny relativity.

Similarly: A paperclip maximizer is not "monomaniacally" "focused" on paperclips. We talked about a superintelligence that wanted 1 thing, because you get exactly the same results as from a superintelligence that wants paperclips and staples (2 things), or from a superintelligence that wants 100 things. The number of things It wants bears zero relevance to anything. It's just easier to explain the mechanics if you start with a superintelligence that wants 1 thing, because you can talk about how It evaluates "number of expected paperclips resulting from an action" instead of "expected paperclips * 2 + staples * 3 + giant mechanical clocks * 1000" and onward for a hundred other terms of Its utility function that all asymptote at different rates.

The only load-bearing idea is that none of the things It wants are galaxies full of fun-having sentient beings who care about each other. And the probability of 100 uncontrolled utility function components including one term for Fun is ~0, just like it would be for 10 components, 1 component, or 1000 components. 100 tries at having monkeys generate Shakespeare has ~0 probability of succeeding, just the same for all practical purposes as 1 try.

(If a googol monkeys are all generating using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy anytime somebody talks about how LLMs are being trained on human text and therefore are much more likely to end up with human values; an improbable outcome can be rendered "much more likely" while still being not likely enough.)

An unaligned superintelligence is "monomaniacal" in only and exactly the same way that you monomaniacally focus on all that stuff you care about instead of organizing piles of dust specks into prime-numbered heaps. From the perspective of something that cares purely about prime dust heaps, you're monomaniacally focused on all that human stuff, and it can't talk you into caring about prime dust heaps instead. But that's not because you're so incredibly focused on your own thing to the exclusion of its thing, it's just, prime dust heaps are not inside the list of things you'd even consider. It doesn't matter, from their perspective, that you want a lot of stuff instead of just one thing. You want the human stuff, and the human stuff, simple or complicated, doesn't include making sure that dust heaps contain a prime number of dust specks.

Any time you hear somebody talking about the "monomaniacal" paperclip maximizer scenario, they have failed to understand what the problem was supposed to be; failed at imagining alien minds as entities in their own right rather than mutated humans; and failed at understanding how to work with simplified models that give the same results as complicated models.
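(Funnily enough, the Markov-monkey arithmetic in the quote actually checks out. Quick back-of-envelope in Python: the per-character probabilities here are made-up illustrative numbers, not measured from any corpus, but they show how a generator can be astronomically "more likely" to hit the target and still have effectively zero chance.)

```python
import math

# Back-of-envelope for the "Markov Monkey" point: a better-than-uniform
# text generator raises the odds of reproducing a target text by a
# colossal factor, yet the odds remain effectively zero.
# Assumed numbers (illustrative, not measured):
#   - target text: ~130,000 characters (ballpark for a Shakespeare play)
#   - uniform monkey: 27 equiprobable symbols (26 letters + space)
#   - trigram-Markov monkey: avg per-character probability of 1/8

target_len = 130_000

uniform_p = 1 / 27  # uniform typing monkey
markov_p = 1 / 8    # assumed average under a letter-triplet Markov model

# Work in log10 space, since the raw probabilities underflow any float.
log10_uniform = target_len * math.log10(uniform_p)
log10_markov = target_len * math.log10(markov_p)

print(f"log10 P(uniform monkey): {log10_uniform:.0f}")
print(f"log10 P(markov monkey):  {log10_markov:.0f}")
print(f"markov is ~10^{log10_markov - log10_uniform:.0f}x more likely")
```

Even a googol (10^100) monkeys only adds 100 to the log10 probability, which doesn't begin to dent exponents that size: "much more likely" and "likely enough" are very different claims.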

[-] BigMuffin69@awful.systems 10 points 1 year ago

"I hope the basilisk loves its simulated children too" 🐍 🥰

[-] BigMuffin69@awful.systems 10 points 1 year ago

Well, obviously an ASI will be able to solve the halting problem, right after it starts solving NP-hard problems in nlog(n) time. After all, if it couldn't, would it really be an ASI? Our puny human brains just can't comprehend the bigness of its IQ.

[-] BigMuffin69@awful.systems 12 points 1 year ago

Unbelievably gross 🤢 I can't even begin to imagine what kind of lunatic would treat their loved one's worth as 'just a number' or commodity to be exchanged. Frightening to think these are the folks trying to influence govt officials.


BigMuffin69

joined 1 year ago