smh they really do be out here believing there's a little man in the machine with goals and desires, common L for these folks
Thank you. My wife is deathly allergic to shrimp, and I live by the motto
'If they send one of your loved ones to the emergency room, you send 10 of theirs to the deep fryer.'
Shared this on a tamer social media site and a friend commented:
"That's nonsense. The largest charities in the country are Feeding America, Good 360, St. Jude's Children's Research Hospital, United Way, Direct Relief, Salvation Army, Habitat for Humanity etc. etc. Now these may not satisfy the EA criteria of absolutely maximizing bang for the buck, but they are certainly mostly doing worthwhile things, as anyone counts that. Just the top 12 on this list amount to more than the total arts giving. The top arts organization on this list is #58, the Metropolitan Museum, with an income of $347M."
A nice exponential curve depicting the infinite future lives saved by whacking a CEO
lmaou bruv, great to know these clowns are both coping & seething
my b lads, I corrected it
And the number of angels that can dance on the head of a pin? 9/11
Big Yud: You try to explain how airplane fuel can melt a skyscraper, but your calculation doesn't include relativistic effects, and then the 9/11 conspiracy theorists spend the next 10 years talking about how you deny relativity.
Similarly: A paperclip maximizer is not "monomaniacally" "focused" on paperclips. We talked about a superintelligence that wanted 1 thing, because you get exactly the same results as from a superintelligence that wants paperclips and staples (2 things), or from a superintelligence that wants 100 things. The number of things It wants bears zero relevance to anything. It's just easier to explain the mechanics if you start with a superintelligence that wants 1 thing, because you can talk about how It evaluates "number of expected paperclips resulting from an action" instead of "expected paperclips * 2 + staples * 3 + giant mechanical clocks * 1000" and onward for a hundred other terms of Its utility function that all asymptote at different rates.
The only load-bearing idea is that none of the things It wants are galaxies full of fun-having sentient beings who care about each other. And the probability of 100 uncontrolled utility function components including one term for Fun is ~0, just like it would be for 10 components, 1 component, or 1000 components. 100 tries at having monkeys generate Shakespeare have ~0 probability of succeeding, just the same for all practical purposes as 1 try.
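A minimal sketch of that weighted-sum picture, purely for illustration: the weights, term names, and toy world-model below are invented for the example, but the action-selection loop is identical whether the utility function has one term or a hundred.

```python
def utility(outcome, weights):
    """Weighted sum over however many terms the agent happens to care about."""
    return sum(w * outcome.get(term, 0) for term, w in weights.items())

def choose_action(actions, predict_outcome, weights):
    """Pick the action whose predicted outcome scores highest.
    Same mechanics for a 1-term or a 100-term utility function."""
    return max(actions, key=lambda a: utility(predict_outcome(a), weights))

# One thing it wants:
one_term = {"paperclips": 1.0}
# Several things it wants, none of which is "galaxies full of fun-having sentients":
many_terms = {"paperclips": 2.0, "staples": 3.0, "giant_mechanical_clocks": 1000.0}

# Toy world-model: each "action" just yields a fixed outcome (invented numbers).
toy_outcomes = {
    "run_factory_A": {"paperclips": 10, "staples": 0},
    "run_factory_B": {"paperclips": 3, "staples": 8},
}
print(choose_action(toy_outcomes, toy_outcomes.get, one_term))    # run_factory_A
print(choose_action(toy_outcomes, toy_outcomes.get, many_terms))  # run_factory_B
```

Swapping one_term for many_terms changes nothing about how the loop runs; only the scoring changes.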
(If a googol monkeys are all generating using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy anytime somebody talks about how LLMs are being trained on human text and therefore are much more likely to end up with human values; an improbable outcome can be rendered "much more likely" while still being not likely enough.)
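And a back-of-envelope version of the Markov Monkey point, again just a sketch with an invented toy corpus: a letter-trigram model makes the target text vastly more probable than uniform random typing, and the probability is still effectively zero for anything Shakespeare-length.

```python
import math
from collections import Counter, defaultdict

def trigram_counts(corpus):
    """Count next-letter frequencies for each two-letter context."""
    counts = defaultdict(Counter)
    for i in range(len(corpus) - 2):
        counts[corpus[i:i + 2]][corpus[i + 2]] += 1
    return counts

def log_prob(text, counts, alphabet=27):
    """Log-probability of `text` under the trigram model (add-one smoothing),
    conditioning on its first two letters."""
    lp = 0.0
    for i in range(len(text) - 2):
        ctx, nxt = text[i:i + 2], text[i + 2]
        total = sum(counts[ctx].values()) + alphabet
        lp += math.log((counts[ctx][nxt] + 1) / total)
    return lp

corpus = "to be or not to be that is the question " * 100  # stand-in training text
counts = trigram_counts(corpus)
target = "to be or not to be"
steps = len(target) - 2

print(log_prob(target, counts))   # trigram monkey: much less negative...
print(steps * math.log(1 / 27))   # ...than a uniform monkey over the same steps,
# yet exp() of either number for a full play's worth of text is still ~0.
```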
An unaligned superintelligence is "monomaniacal" in only and exactly the same way that you monomaniacally focus on all that stuff you care about instead of organizing piles of dust specks into prime-numbered heaps. From the perspective of something that cares purely about prime dust heaps, you're monomaniacally focused on all that human stuff, and it can't talk you into caring about prime dust heaps instead. But that's not because you're so incredibly focused on your own thing to the exclusion of its thing, it's just, prime dust heaps are not inside the list of things you'd even consider. It doesn't matter, from its perspective, that you want a lot of stuff instead of just one thing. You want the human stuff, and the human stuff, simple or complicated, doesn't include making sure that dust heaps contain a prime number of dust specks.
Any time you hear somebody talking about the "monomaniacal" paperclip maximizer scenario, they have failed to understand what the problem was supposed to be; failed at imagining alien minds as entities in their own right rather than mutated humans; and failed at understanding how to work with simplified models that give the same results as complicated models.
Unbelievably gross 🤢 I can't even begin to imagine what kind of lunatic would treat their loved one's worth as 'just a number' or a commodity to be exchanged. Frightening to think these are the folks trying to influence govt officials.
Unclear to me what Daniel actually did as a 'researcher' besides draw a curve going up on a chalkboard (true story, the one interaction I had with LeCun was showing him Daniel's LW acct that is just singularity posting and Yann thought it was big funny). I admit, I am guilty of engineer gatekeeping posting here, but I always read Danny boy as a guy they hired to give lip service to the whole "we are taking safety very seriously, so we hired LW philosophers" and then after Sam did the uno reverse coup, he dropped all pretense of giving a shit / funding their fan fic circles.
Ex-OAI "governance" researcher just means they couldn't forecast that they were the marks all along. This is my belief, unless he reveals that he superforecasted altman would coup and sideline him in 1998. Someone please correct me if I'm wrong, and they have evidence that Daniel actually understands how computers work.