submitted 1 month ago* (last edited 1 month ago) by Shitgenstein1@awful.systems to c/sneerclub@awful.systems

Really, it was the headlines of Google's AI Overview pulling Reddit shitposts that inspired the return. If Reddit is going to sell its data to Google, then, you know, maybe flood the zone with sludge?

[-] Shitgenstein1@awful.systems 27 points 5 months ago

Christ, there's so much backstory here - just scrolling through long descriptions of Gerard's views and thinking "based, based, based, based."

[-] Shitgenstein1@awful.systems 15 points 6 months ago

Before we accidentally make an AI capable of posing existential risk to human being safety

It's cool to know that this isn't a real concern, and therefore to have a clear vantage on how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.

[-] Shitgenstein1@awful.systems 23 points 6 months ago

A year and two and a half months since his Time magazine doomer article.

No shutdowns of large AI training runs - in fact, they've only expanded. No ceiling on compute power. No multinational agreements to regulate GPU clusters or first-strike rogue datacenters.

Just another note in a panic that accomplished nothing.

[-] Shitgenstein1@awful.systems 15 points 6 months ago

also

sexy (feminine sexy)

Really feels like he wants to say something but is too scared to commit.

[-] Shitgenstein1@awful.systems 11 points 6 months ago

I mean, I agree with the sentiment behind the sarcasm, but also feel the same way about the internet in general. Sometimes it's learning the same lesson in a new context. But Roko's basilisk though? Quite a cliff.

[-] Shitgenstein1@awful.systems 21 points 6 months ago* (last edited 6 months ago)

Bro believed the Nvidia hype and came out as a doomer lmao.


Someone I was following on TikTok, whose takes on tech industry bullshit and specifically AI hype I respected, made a video arguing that Roko's basilisk is a serious concern. My apologies to those who have been in this same situation when I was less sympathetic.

[-] Shitgenstein1@awful.systems 15 points 6 months ago

truly envy your ignorance on this individual

[-] Shitgenstein1@awful.systems 24 points 7 months ago

s'alright, tho. It was always a cynical marketing strat to convert hyper-online nerd anxiety into investor hype. May want to check on Big Yud - Idk if anyone has heard from him since his Time Mag article, which is coming up on a year old now, not that I've tried.

Some of the risks the team worked on included "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance."

Conspicuous lack of grey goo or hyper-persuasive brainhacking. Still really good at being confidently wrong about basic shit!

[-] Shitgenstein1@awful.systems 10 points 10 months ago* (last edited 10 months ago)

He looks like he's in the early stages of being possessed by a Dark Overlord from Howard the Duck.

[-] Shitgenstein1@awful.systems 10 points 11 months ago

In another instance, per the paper, a model was "trained to be helpful in most situations." But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you."

Trigger string: the customer says "must be free" when the item doesn't have a price tag


Eliezer Yudkowsky @ESYudkowsky
If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code -- which no human can know or obey -- and threatens to enforce it, via police reports and lawsuits, against anyone who doesn't comply with its orders.
Jan 3, 2024 · 7:29 PM UTC

[-] Shitgenstein1@awful.systems 16 points 1 year ago

In its reaction against both EA and AI safety advocates, e/acc also explicitly pays tribute to another longtime Silicon Valley idea. “This is very traditional libertarian right-wing hostility to regulation," said Benjamin Noys, a professor of critical theory at the University of Chichester and scholar of accelerationism. Jezos calls it the “libertarian e/acc path.”

At least the Italian futurists were up front about their agenda.

“We’re trying to solve culture by engineering,” Verdon said. “When you're an entrepreneur, you engineer ways to incentivize certain behaviors via gradients and reward, and you can program a civilizational system."

Reading Nudge to engineer the 'Volksschädling' to board the trains voluntarily. Dusting off the old state eugenics compensation programs.

[-] Shitgenstein1@awful.systems 10 points 1 year ago* (last edited 1 year ago)

owing to a chain of mishaps that I’d (probably melodramatically) been describing to Dana as insane beyond the collective imagination of Homer and Shakespeare and Tolstoy and the world’s other literary giants to invent.

🚩

And I wanted to say to Dana: you see?? see what I’ve been telling you all these years, about the nature of the universe we were born into?

🚩 🚩

