[-] self@awful.systems 19 points 1 year ago

> This is obviously insane, the correct conclusion is that learning models cannot in fact be trained so hard that they will always get the next token correct. This is provable, and it’s not even hard to prove. It’s intuitively obvious, and a burly argument that backs the intuition is easy to build.
>
> You do, however, have to approach it through analogies, through toy models. When you insist on thinking about the whole thing at once, you wind up essentially just saying things that feel right, things that are appealing. You can’t actually reason about the damned thing at all.

this goes a long way towards explaining why computer pseudoscience — like a fundamental ignorance of algorithmic efficiency and the implications of the halting problem — is so common and even celebrated among lesswrongers and other TESCREALs who should theoretically know better
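the quoted claim really is easy to make concrete with a toy model. here’s a minimal sketch (the corpus is made up for illustration, it’s not from the post): when the training data itself maps one context to several different next tokens, even the best possible predictor has to be wrong some of the time.

```python
from collections import Counter

# Hypothetical toy corpus: the single context "the" is followed by
# different tokens, so no predictor, however hard it's trained, can
# get the next token right every time. The data itself is ambiguous.
bigrams = [("the", "cat"), ("the", "dog"), ("the", "cat"), ("the", "rat")]

# The best possible deterministic predictor just guesses the most
# frequent continuation for the context.
counts = Counter(tok for ctx, tok in bigrams if ctx == "the")
best_guess, best_count = counts.most_common(1)[0]

# Its ceiling on accuracy is the frequency of that most common token.
accuracy = best_count / len(bigrams)
print(best_guess, accuracy)  # -> cat 0.5
```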

[-] self@awful.systems 19 points 1 year ago

> I will find someone who I consider better than me in relevant ways, and have them provide the genetic material. I think that it would be immoral not to, and that it is impossible not to think this way after thinking seriously about it.

we’re definitely not a cult, I don’t know why anyone would think that

> Consider it from your child’s perspective. There are many people who they could be born to. Who would they pick? Do you have any right to deny them the father they would choose? It would be like kidnapping a child – an unutterably selfish act. You have a duty to your children – you must act in their best interest, not yours.

I just don’t understand how so many TESCREAL thoughts and ideas fit this broken fucking pattern. “have you thought about ? but have you really thought about it? you must not have, cause if you did you would agree it was !”

and you really can tell you’re dealing with a cult when you start from the pretense that a child that doesn’t exist yet has a perspective — these fucking weirdos will have heaven and hell by any means, no matter how much math and statistics they have to abuse past the breaking point to do it.

and just like with any religious fundamentalist, the child doesn’t have any autonomy. how could they, if all their behavior has already been simulated to perfection? there’s no room for an imperfect child’s happiness; for familial bonding; for normal human shit. all that must be cope, cause it doesn’t fit into a broken TESCREAL worldview.

[-] self@awful.systems 19 points 1 year ago

“endless drug-fueled prose about drugs” is an entire, unfortunate subgenre of Rationalist authorship. it takes them a surprisingly long time (usually a couple years or so) to spiral out and stop posting, but the posts usually get even less coherent as they approach burnout

[-] self@awful.systems 19 points 2 years ago

> We know what’s happening here. It’s not a mystery. This weird anthropomorphization is prevalent among both advocates and critics of the tech. Both seem to be convinced that they’re dealing with a person.
>
> It’s genuinely fascinating and mind blowing that coherent language emerges from it, and there are probably profound things about exactly when and how.

uh huh

seeing as your entire post history is this same flavor of bad faith bullshit, I don’t think we need any more of it here

[-] self@awful.systems 19 points 2 years ago

is this… are these motherfuckers roleplaying a dril tweet

[-] self@awful.systems 19 points 2 years ago

ok so this is driving me crazy

am I weird for thinking the circle of candles in the fireplace (in a house that’s allegedly unbearably cold) is weird?

[-] self@awful.systems 19 points 2 years ago* (last edited 2 years ago)

yours has a fuckton of right-wing dogwhistles including this fucking gem about brave:

> look, I’m not defending bigotry…
>
> But an aggressive homophobe seems like the type to be highly motivated to care deeply about working privacy tools these days
>
> So who exactly do you trust?

also a ton of moronic AI takes, though the bigotry was more than bannable enough. so off you fuck

[-] self@awful.systems 20 points 2 years ago

fuck can someone tag this as an information hazard please? now I’m simulating them simulating me and I don’t like it

[-] self@awful.systems 20 points 2 years ago

> Their redacted screenshots are SVGs and the text is easily recoverable, if you’re curious. Please don’t create a world-ending [redacted]. https://i.imgur.com/Nohryql.png
>
> I couldn’t find a way to contact the researchers.

> Honestly that’s incredibly basic, second week, cell culture stuff (first week is how to maintain the cell culture). It was probably only redacted to keep the ignorant from freaking out.

remember, when the results from your “research” are disappointing, it’s important to follow the scientific method: have marketing do a pass over your paper (that already looks and reads exactly like blogspam) where they selectively blur parts of your output in order to make it look like the horseshit you’re doing is dangerous and important

I don’t think I can state strongly enough the fucking contempt I have for what these junior advertising execs who call themselves AI researchers are doing to our perception of what science even is
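for the curious, the SVG “redaction” failure quoted above works roughly like this. a minimal made-up example (not the researchers’ actual file): an overlay shape hides the text visually, but the original text element still ships in the markup, so recovering it is trivial.

```python
import xml.etree.ElementTree as ET

# Hypothetical "redacted" SVG: a black rect is drawn over the text,
# but the <text> node is still present in the file itself.
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <text x="10" y="20">maintain the cell culture</text>
  <rect x="0" y="5" width="300" height="20" fill="black"/>
</svg>"""

# Parse the markup and read every <text> node, ignoring the overlay.
root = ET.fromstring(svg)
ns = "{http://www.w3.org/2000/svg}"
recovered = [t.text for t in root.iter(ns + "text")]
print(recovered)  # -> ['maintain the cell culture']
```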

[-] self@awful.systems 19 points 2 years ago

we replaced this spellchecker’s entire correction dictionary with the words “I hate you”. you’ll never guess what happened next!

[-] self@awful.systems 20 points 2 years ago

> Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.”
>
> “It’s like it was meta-ironic and then became post-ironic.”
>
> Jezos described the company as an “AI Manhattan Project” and once quipped, “If you knew what I was building, you’d try to ban it.”
>
> “Our goal is really to increase the scope and scale of civilization as measured in terms of its energy production and consumption,” he said. Of the Jezos persona, he said: “If you're going to create an ideology in the time of social media, you’ve got to engineer it to be viral.”

Guillaume “BasedBeffJezos” Verdon appears, by all accounts, to be an utterly insufferable shithead with no redeeming qualities

[-] self@awful.systems 19 points 2 years ago

I shouldn’t have to say this, but reporting an instance admin’s post for a joke you didn’t understand is incredibly bannable


self

joined 2 years ago