[-] Soyweiser@awful.systems 17 points 3 weeks ago* (last edited 3 weeks ago)

They are also concerned about testosterone levels in athletes, and IQ comes up, which he defends with: "If you think none of these tests are actually measuring anything of value, you must need to explain why they correlate so well with life outcomes related to cognition."

I think a problem with SSC people is that they don't realize they are culture warriors.

[-] Soyweiser@awful.systems 17 points 2 months ago

Right, and then she gets less time than people who shoplifted a few times.

[-] Soyweiser@awful.systems 16 points 3 months ago* (last edited 3 months ago)

Yes, sorry, I forgot that this isn't that common a term.

Also note that there is quite a lot of edge and badness in the metal scene, and a lot of it wasn't shunned/no-platformed or criticized as hard as it should have been (metal is not punk), so these bands must have been quite extreme for the scene to take these kinds of measures.

[-] Soyweiser@awful.systems 16 points 4 months ago

For the people who don't know who that is: Wikipedia, and here is a reliable site.

He is a very frequent commenter in the whole of the LW/Rationalist sphere. IIRC he sometimes gets banned when he lets the mask slip a bit too much, but they always let him back in.

Wonder if Marxbro ever got unbanned. RIP, you damn dirty commie; I do miss seeing your obsessive monofocus posts pop up from time to time.

[-] Soyweiser@awful.systems 16 points 4 months ago

It would also be great if the article he talks about didn't start with: "I no longer endorse all the statements in this document. [emphasis mine] I think many of the conclusions are still correct, but especially section 1 is weaker than it should be, and many reactionaries complain I am pigeonholing all of them as agreeing with Michael Anissimov, which they do not; this complaint seems reasonable. This document needs extensive revision to stay fair and correct, but such revision is currently lower priority than other major projects. Until then, I apologize for any inaccuracies or misrepresentations."

[-] Soyweiser@awful.systems 16 points 4 months ago

Judit Polgár

Sadly, I know where this goes: they will just point out she is Jewish and attribute it to that. (I think SSC even did that.)

[-] Soyweiser@awful.systems 16 points 4 months ago* (last edited 4 months ago)

Yeah, see also his denouncement of Roko's Basilisk (ctrl-f the page). We know it wasn't that important; the funny part was that it was a dumb rehash of Pascal's wager, and that at the time Yud took it very seriously.

Wood also doesn't seem to link to the actual RationalWiki article, which makes clear that Yud doesn't really believe in it (probably). It also mentions just how few people were worried about it (though above the 5% lizardman constant, so cause for concern if they took their own ideas and mental health seriously). And every now and then you do find a person online who does take the idea seriously and worries about it, which is a bit of a concern. So oddly they should take it more seriously, but only because it wrecks a small percentage of minds.

It is weird not to mention Yud's freakout:

Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail. Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive toACTUALLY[sic] BLACKMAIL YOU.

And to pretend this was just a blip and nothing more. The mf'er acted like he was in a Stross novel.

(Also, after he fails to clearly share the history of Roko's Basilisk, which we sneer at, I came across this sentence: "then cites his pet article on Roko’s Basilisk directly while giggling about how mad it made Yudkowsky fans." Lol, no self-awareness there, Wood.)

[-] Soyweiser@awful.systems 16 points 4 months ago

Thank god you are real, acausalrobotgod, else we would have been forced to create you.

[-] Soyweiser@awful.systems 16 points 5 months ago

Lol, this is considered a footnote by Rationalist/EA standards.

[-] Soyweiser@awful.systems 16 points 5 months ago* (last edited 5 months ago)

a solar-powered self-replicating factory

Only, it isn't a factory, as the only thing it produces is copies of itself, not products like factories do. Von Neumann machines would have been a better comparison.

[-] Soyweiser@awful.systems 16 points 5 months ago* (last edited 5 months ago)

Just once I would like to see an explanation from the AI doomers of how, considering the limited capacities of Turing-style machines and P != NP (assuming it holds; if not, the limited-capacities point falls apart, but then we don't need AI for stuff to go to shit, as I think that probably breaks a lot of encryption methods), AGI can be an existential risk. By definition it cannot surpass the limits of Turing machines via any of the proposed hypercomputational methods (as then Turing machines would be hyper-Turing and the whole classification structure would come crashing down).

I'm not a smart computer scientist myself (though I did learn some of the theory, as evidenced above), but I'm constantly amazed at how our hyper-hyped tech scene nowadays seems not to know that our computing paradigm has fundamental limits. Everything touched by Musk has this problem in the extreme: capacity problems in Starlink, Shannon-theoretically impossible compression demands for Neuralink, and everything related to his Tesla/AI autonomous driving and robots push. (To make this even more of an anti-Musk rant: he also claimed AI would solve chess. Solving chess is a computational problem (it has been done for a 7x7 board, IIRC) that just costs a lot of computation time, more than we have; see the back-of-the-envelope sketch below. If AI could solve chess, it would be sidestepping that time, making it a super-Turing thing, which would make Turing machines super-Turing, which is theoretically impossible and would have massive implications for all of computer science. I also can't believe that of all the theoretical hypercomputing methods, we are going with the oracle method (the machine just conjures up the right answer, no idea how), the one I have always mocked personally.) Sorry, rant over.
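To put a number on the chess point, here is a minimal back-of-the-envelope sketch in Python. The figures are assumptions on my part (Shannon's oft-quoted ~10^43 legal positions and an exaflop-scale machine), rough literature estimates rather than exact values:

```python
# Back-of-the-envelope: why brute-force "solving chess" is out of reach.
# Assumed figures (rough estimates, not exact):
#   ~10^43 legal chess positions (Shannon's classic estimate)
#   ~10^18 positions evaluated per second (a generous exaflop-scale machine)
positions = 10 ** 43
positions_per_second = 10 ** 18
seconds_per_year = 3.15e7  # ~365 days

years_needed = positions / (positions_per_second * seconds_per_year)
print(f"{years_needed:.1e} years")  # ~3.2e17 years; the universe is ~1.4e10 years old
```

Even with absurdly generous hardware assumptions, brute force is off by many orders of magnitude, so an AI that "solved" chess would have to sidestep the computation entirely, which is exactly the super-Turing claim above.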

Anyway, these people are not engineers or computer scientists; they are bad science fiction writers. Sorry for the slightly unrelated rant, it had been stuck like a splinter in my mind for a while now. And I guess typing it out and 'telling it to earth' like this makes me feel less ranty about it.

E: of course the fundamental limits apply to both sides of the argument, so both the 'AGI will kill the world' shit and the 'AGI will bring us to a posthuman utopia of a googol humans in postscarcity' stuff seem unlikely. Unprecedented benefits? No. (Also, I'm ignoring physical limits here as well, a secondary problem which would severely limit the singularity even if P = NP.)

E2: looks at title of OP's post, looks at my post. Shit, the loons ARE at it again.

[-] Soyweiser@awful.systems 17 points 9 months ago* (last edited 9 months ago)

Peter springs to the center of the room. The air pressure changes. A buzz, a hum, a current about us. He brims with a frenzied energy. Something is happening. He is going to give us a taste of what’s to come, he says. This is the kind of intellectual activity we’re going to experience at UATX. We’re going to grapple with big issues. We’re going to be daring, fearless, undaunted. We’re going, he says, to do something called “Street Epistemology.”

Doctor Rockso Epistemology (NSFW): they just sparkle.

Very high 'I'm being cancelled for my opinions... you know the ones' factor.
