[-] elmtonic@lemmy.world 13 points 8 months ago

Under "Significant developments since publication" for their lab leak hypothesis, they don't mention this debate at all. A track record that fails to track the record, nice.

Right underneath that they mention that at least they're right about their 99.9% confident hypothesis that the MMR vaccine doesn't cause autism. I hope it's not uncharitable to say that they don't get any points for that.

[-] elmtonic@lemmy.world 31 points 8 months ago

delivering lectures at both UATX and Peterson’s forthcoming Peterson Academy

I thought I was terminally online but clearly I've missed something, his what now

[-] elmtonic@lemmy.world 17 points 8 months ago

Dude STOP. I'm so serious right now STOP dude. You're forcing me to very slightly update my prior P(I'm the simulation) which is a total violation of the NAP

[-] elmtonic@lemmy.world 18 points 9 months ago

My optimistic read is that maybe OP will use their newfound revelations to separate themselves from LW, rejoin the real world, and become a better person over time.

My pessimistic read is that this is how communities like TPOT (and maybe even e/acc?) grow - people who are disillusioned with the (ostensible) goals of the broader rat community but can't shake the problematic core beliefs.

The cosmos doesn’t care what values you have. Which totally frees you from the weight of “moral imperatives” and social pressures to do the right thing.

Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway... For me, it’s curiosity and understanding of the universe. It directs my life not because I think it sounds pretty or prosocial, but because it’s tasty.

Also lmfao at the first sentence of one of the comments:

I don't mean to be harsh, but if everyone in this community followed your advice, then the world would likely end.

[-] elmtonic@lemmy.world 11 points 10 months ago

the yellow light turns red when im in the middle of the intersection and my car immediately autopilots to the nearest police station

[-] elmtonic@lemmy.world 17 points 10 months ago

From the comments:

Effects of genes are complex. Knowing a gene is involved in intelligence doesn't tell us what it does and what other effects it has. I wouldn't accept any edits to my genome without the consequences being very well understood (or in a last-ditch effort to save my life). ... Source: research career as a computational cognitive neuroscientist.

OP:

You don't need to understand the causal mechanism of genes. Evolution has no clue what effects a gene is going to have, yet it can still optimize reproductive fitness. The entire field of machine learning works on black box optimization.

Very casually putting evolution in the same category as modifying my own genes one at a time until I become Jimmy Neutron.

Such a weird, myopic way of looking at everything. OP didn't appear to consider the downsides brought up by the commenter at all, and just plowed straight on through to "evolution did it without understanding, so we can too."

[-] elmtonic@lemmy.world 15 points 10 months ago

The first occurred when I picked up Nick Bostrom’s book “superintelligence” and realized that AI would utterly transform the world.

"The first occurred when I picked up AI propaganda and realized the propaganda was true"

[-] elmtonic@lemmy.world 15 points 10 months ago

For the purposes of this argument, near term AGI or promising clinical trials for depression are off the table.

FOX ONLY. FINAL DESTINATION. NO ~~ITEMS~~ ROBOT GODS.

[-] elmtonic@lemmy.world 13 points 11 months ago

Eh, the impression that I get here is that Eliezer happened to put "effective" and "altruist" together without intending to use them as a new term. This is Yud we're talking about - he's written roughly 500,000 more words about Harry Potter than the average person does in their lifetime.

Even if he had invented the term, I wouldn't say this is a smoking gun of how intertwined EAs are with the LW rats - there's much better evidence out there.

[-] elmtonic@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

But if there isn't a clearly defined end goal/utility function, then how will I fit this information into my rationalist ~~fanfic~~ world model?

/unsneer though the comment was overall sobering to read, it's good to know that not everyone on that site is insane.

[-] elmtonic@lemmy.world 9 points 1 year ago

The cool thing to note here is how badly Yud misunderstands what a normal person means when they say they have "100% certainty" in something. We're not fucking infinitely precise Bayesian machines; 100% means exactly the same thing as 99.99%. It means exactly the same thing as "really really really sure." A conversation between the two might go like this:

Unwashed sheeple: Yeah, 53 is prime. 100% sure of that.

Ellie Bayes-er: (grinning) Can you really claim to be 100% sure? Do not make the mistake of confusing the map with the territory, [5000 words redacted]

Unwashed sheeple: Whatever you say, I'm 99% sure.

Eddielazer remains seated, triumphant in believing (epistemic status: 98.403% certainty) he has added something useful to the conversation. The sheeple walks away, having changed exactly nothing about his opinion.

[-] elmtonic@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

https://xkcd.com/610/

I think a lot of rats have this idea that they arrived at their views and values solely by thinking really hard (and being really really smart). Which means that anyone who doesn't share their same basic views is simply a mouthbreathing NPC who doesn't have any curiosity in "the way the world works" - when in reality, people just have a lot of other shit on their minds, and tend to care about less abstract problems than [insert sci-fi trope here].

It's funny that the commenter talks so much about how people should just try to understand things, and in the same breath fails to try to empathize with people who think differently.


he takes a couple pages to explain why he knows that sightings of UFOs aren't aliens, because he can simply infer how superintelligent beings would operate + how advanced their technology is. he then undercuts his point by admitting that he's very uncertain about both of those things, but wraps it up nicely with an excessively wordy speech about how making big bets on your beliefs is the responsible way to be a thought leader. bravo
