[-] Architeuthis@awful.systems 18 points 4 months ago

IKR like good job making @dgerard look like King Mob from the Invisibles in your header image.

If the article was about me I'd be making Colin Robinson feeding noises all the way through.

edit: Obligatory only 1 hour 43 minutes of reading to go then

[-] Architeuthis@awful.systems 19 points 4 months ago

It hasn't worked 'well' for computers since like the Pentium, what are you talking about?

The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of 'we're still at the low-hanging-fruit stage of R&D and it'll stabilize as the field matures', instead of proudly proclaiming that surely it'll approach infinity and break reality.

There's nothing smart or insightful about seeing a line in a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that this type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb, which you should check out.

So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with the tech rupture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.

[-] Architeuthis@awful.systems 19 points 5 months ago

Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

[-] Architeuthis@awful.systems 17 points 7 months ago* (last edited 7 months ago)

Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.

Sounds like Oxford increasingly did not want anything to do with them.

edit: Here's a 94-page "final report" that seems more geared towards a rationalist audience.

Wonder what this was about:

Why we failed [...] There also needs to be an understanding of how to communicate across organizational communities. When epistemic and communicative practices diverge too much, misunderstandings proliferate. Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received. Finding friendly local translators and bridgebuilders is important.

[-] Architeuthis@awful.systems 19 points 9 months ago* (last edited 9 months ago)

Sticking numbers next to things and calling it a day is basically the whole idea behind Bayesian rationalism.

[-] Architeuthis@awful.systems 17 points 10 months ago

From the comments:

I am someone who takes great interest in scientific findings outside his own area of expertise.

I find it rather disheartening to discover that most of it is rather bunk, and

[image]

ChatGPT, write me up an example of a terminal case of engineer's disease and post it to ACX to see if they'll catch on to it.

[-] Architeuthis@awful.systems 16 points 10 months ago* (last edited 10 months ago)

I really like how he specifies he only does it when he's with white people, just to dispel any doubt that this happens in the context of discussing Lovecraft's cat.

[-] Architeuthis@awful.systems 16 points 10 months ago

tvtropes

The reason Keltham wants to have two dozen wives and 144 children, is that he knows Civilization doesn't think someone with his psychological profile is worth much to them, and he wants to prove otherwise. What makes having that many children a particularly forceful argument is that he knows Civilization won't subsidize him to have children, as they would if they thought his neurotype was worth replicating. By succeeding far beyond anyone's wildest expectations in spite of that, he'd be proving they were not just mistaken about how valuable selfishness is, but so mistaken that they need to drastically reevaluate what they thought they knew about the world, because obviously several things were wrong if it led them to such a terrible prediction.

huh

[-] Architeuthis@awful.systems 17 points 10 months ago* (last edited 10 months ago)

you’re seriously missing the point of what he’s trying to say. He’s just talking about [extremely mundane and self evident motte argument]

Nah, we're just not giving him the benefit of the doubt, and we also have a lot of context to work with.

Consider the fact that he explicitly writes that you are allowed to reconsider your assumptions about domestic terrorism if a second trans mass shooter incident "happens in a row", yet a few paragraphs later, when Effective Altruists blow up both FTX and OpenAI in the space of a year, the second incident is immediately laundered away as the unfortunate result of them overcorrecting in good faith against unchecked CEO power.

This should stick out even to someone approaching this from a blank-slate perspective, in my opinion.

[-] Architeuthis@awful.systems 18 points 10 months ago* (last edited 10 months ago)

Hi, my name is Scott Alexander and here's why it's bad rationalism to think that widespread EA wrongdoing should reflect poorly on EA.

The assertion that having semi-frequent sexual harassment incidents go public is actually a sign of health for a movement, since it's evidence that there's no systemic coverup going on and besides, everyone's doing it, is, uh, quite something.

But surely of 1,000 sexual harassment incidents, the movement will fumble at least one of them (and often the fact that you hear about it at all means the movement is fumbling it less than other movements that would keep it quiet). You’re not going to convince me I should update much on one (or two, or maybe even three) harassment incidents, especially when it’s so easy to choose which communities’ dirty laundry to signal boost when every community has a thousand harassers in it.

[-] Architeuthis@awful.systems 17 points 1 year ago

Every ends-justify-the-means worldview has a defense for terrorism readily baked in.

[-] Architeuthis@awful.systems 20 points 1 year ago* (last edited 1 year ago)

This reads very, uh, addled. I guess collapsing the wavefunction means agreeing on stuff? And the uncanny valley is when the vibes are off because people are at each other's throats? Is 'being aligned' like having attained spiritual enlightenment by way of Adderall?

Apparently the context is that he wanted the investment firms under FTX (Alameda and Modulo) to coordinate completely, despite being run by different ex-girlfriends at the time (most normal EA workplace), which I guess paints Elis' comment about Chinese harem rules of dating in a new light.

edit: I think the 'being aligned' thing is them invoking the 'great minds think alike' adage as absolute truth, i.e. since we both have the High IQ feat you should be agreeing with me; after all, we share the same privileged access to absolute truth. That we don't must mean you are unaligned/need to be further cleansed of thetans.

