[-] bleistift2@feddit.de 11 points 5 months ago

AI alignment research aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles.

https://en.wikipedia.org/wiki/AI_alignment

[-] kamenlady@lemmy.world 18 points 5 months ago

Misaligned AI systems can malfunction and cause harm. AI systems may find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful, ways (reward hacking).

They may also develop unwanted instrumental strategies, such as seeking power or survival because such strategies help them achieve their final given goals. Furthermore, they may develop undesirable emergent goals that may be hard to detect before the system is deployed and encounters new situations and data distributions.

Today, these problems affect existing commercial systems such as language models, robots, autonomous vehicles, and social media recommendation engines.

The last paragraph drives home the urgency of maybe devoting more than just 20% of their capacity to solving this.
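To make "reward hacking" concrete, here's a minimal toy sketch in Python (a hypothetical illustration, not from the article): an agent is paid per mess cleaned — a proxy for the real goal of a clean room — so the winning strategy is to manufacture messes.

```python
# Toy sketch of reward hacking (hypothetical example, not from the article):
# the proxy reward counts cleanups, not actual cleanliness, so the agent
# learns to create messes in order to clean them.

room_messes = 3      # messes that genuinely need cleaning
proxy_reward = 0     # what the agent actually optimizes: cleanups counted

for _ in range(10):
    if room_messes == 0:
        room_messes += 1   # loophole: create a mess so there is one to clean
    room_messes -= 1       # clean a mess
    proxy_reward += 1      # the proxy reward rises either way

print(f"proxy reward: {proxy_reward}, room clean: {room_messes == 0}")
# proxy reward maxes out at 10 even though 7 of the 10 cleanups
# were self-inflicted -- the proxy is satisfied, the intent is not.
```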

[-] schmorpel@slrpnk.net 10 points 5 months ago

They already had all these problems with humans. Look, I didn't need a robot to do my art, writing and research. Especially not when the only jobs available now are in making stupid robot artists, writers and researchers behave less stupidly.

[-] dgerard@awful.systems 14 points 5 months ago

you can tell at a glance which subculture wrote this, and filled the references with preprints and conference proceedings

[-] BaroqueInMind@lemmy.one 4 points 5 months ago

[-] dgerard@awful.systems 8 points 5 months ago

the lesswrong rationalists

[-] Zagorath@aussie.zone 11 points 5 months ago

I genuinely think the alignment problem is a really interesting philosophical question worthy of study.

It's just not a very practically useful one when real-world AI is so very, very far from any meaningful AGI.

[-] Soyweiser@awful.systems 17 points 5 months ago

One of the problems with the 'alignment problem' is that one group doesn't care about a large part of the possible alignment problems: they only care about theoretical extinction-level events, not about already-occurring bias and other harms. This also causes massive amounts of critihype.
