[-] BlueMonday1984@awful.systems 3 points 1 month ago

The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.

Given they're going out of their way to cause as much damage as possible (throwing billions into the AI money pit, boiling oceans of water and generating tons of CO~2~, looting the commons through Biblical levels of plagiarism, and destroying the commons by flooding the zone with AI-generated shit), they're arguably en route to proving Yud right in the dumbest way possible.

Not by creating a genuine AGI that turns malevolent and kills everyone, but by destroying the foundations of civilization and making the world damn near uninhabitable.

[-] BlueMonday1984@awful.systems 1 points 1 month ago

Tried following that link and it took me to a blank page; here's one that isn't broken.

[-] BlueMonday1984@awful.systems 4 points 2 months ago

I always had a vague outline of just how fucked up the rats are, but reading this put everything sharply into perspective for me. God damn, this was good.

[-] BlueMonday1984@awful.systems 4 points 2 months ago

…You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer's self-hype), I would be even more pissed at him and sneer at him even harder. He basically set himself up as a critical savior of mankind, one of the only people clear-sighted enough to see the real dangers and the most important question… and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!

As the cherry on top of this shit sundae, the bubble caused by said hype dealt devastating damage to the Internet and the world at large despite failing to create the unaligned AGI Yud was doomsaying about, and made people more vulnerable to falling for the plagiarism-fueled lying machines behind said bubble.

[-] BlueMonday1984@awful.systems 2 points 5 months ago

Ran across a BlueSky thread that fits this perfectly - it's a social sciences and humanities reading list on AI in education.

[-] BlueMonday1984@awful.systems 3 points 10 months ago

This hits a lot worse after JD Vance became Vice-President-Elect.

[-] BlueMonday1984@awful.systems 3 points 11 months ago

Happy cake day, Charlie!

[-] BlueMonday1984@awful.systems 2 points 1 year ago

You could also ask ChatGPT to make it for you. The idea's complete garbage, unworthy of an actual writer's time, so I'd let it slide in this case.

[-] BlueMonday1984@awful.systems 4 points 1 year ago

They're too stupid to understand anything beyond "the code", so it's unsurprising they're treating it as a technological hammer in a world of nails.

[-] BlueMonday1984@awful.systems 3 points 1 year ago

Got an extended edit of 'meet the grahams' that hits even harder than the original.

[-] BlueMonday1984@awful.systems 3 points 2 years ago

I need context.

[-] BlueMonday1984@awful.systems 4 points 2 years ago

Even if one had the means necessary to carry out a bioterrorist attack, simply bombing a place is much easier, faster, and safer.


BlueMonday1984

joined 2 years ago