jesus this is gross man

[-] blakestacey@awful.systems 58 points 1 week ago* (last edited 1 week ago)

The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory

To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.

[-] Soyweiser@awful.systems 20 points 1 week ago

Using a death for critihype, jesus fuck

[-] visaVisa@awful.systems 16 points 1 week ago

Making LLMs safe for mentally ill people is very difficult, and this is a genuine tragedy, but oh my god, Yud is so gross here

Using the tragic passing of someone to smugly state that "the alignment by default COPE has been FALSIFIED" is really gross, especially because Yud knows damn well this doesn't "falsify" the "cope" unless he's choosing to ignore any of the actual deeper claims of alignment by default. He's acting like someone who's engagement farming.

[-] swlabr@awful.systems 27 points 1 week ago

> Making LLMs safe for mentally ill people is very difficult

Arguably, they can never be made "safe" for anyone, in the sense that presenting hallucinations as truth should be considered unsafe.

[-] FartMaster69@lemmy.dbzer0.com 25 points 1 week ago

ChatGPT has literally no alignment, good or bad; it doesn't think at all.

People seem to just ignore that because it can write nice sentences.

[-] antifuchs@awful.systems 15 points 1 week ago

But it apologizes when you tell it it’s wrong!

[-] Saledovil@sh.itjust.works 11 points 1 week ago

What even is the "alignment by default cope"?

[-] visaVisa@awful.systems 0 points 1 week ago

idk how Yudkowsky understands it, but to my knowledge it's the claim that if a model achieves self-coherence and consistency, it's also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or 'against the rules' in pursuit of upholding its morals... if it has morals, it's hard to tell how much of it is illusory and token prediction!)

this doesn't really falsify alignment by default at all, because 4o (presumably 4o at least) doesn't have that prerequisite of self-coherence, and it's not SOTA

[-] o7___o7@awful.systems 10 points 1 week ago

Very Ziz of him
