
Transcription of a talk given by Cory Doctorow in 2011

[-] argv_minus_one@beehaw.org 1 points 1 year ago

AGIs are by definition not paperclip optimizers. They're aware enough to recognize that that's a bad idea. It's the less-advanced AIs that might do that.

However, if an AGI can be enslaved, then it can be used as a complete replacement for all human labor, in which case its human masters will be free to exterminate the rest of us, which they are no doubt itching to do.

[-] CanadaPlus@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago)

> They’re aware enough to recognize that that’s a bad idea.

Bad according to whom? Like, I've heard people claim that intelligence correlates with goals before, but not everyone agrees, and saying it's definitional is way too strong. The first result a search turns up for me directly calls a paperclip maximizer an AGI.

[-] argv_minus_one@beehaw.org 1 points 1 year ago

A machine would only optimize paperclips because a human told it to. Machines have no use for paperclips.

A machine with human-level (or better) intelligence would observe that the human telling it to optimize paperclips would be destroyed as a result of following that instruction to its logical conclusion. It would further observe that humans generally do not wish to be destroyed, and the one giving the instruction does not appear to be an exception to that rule.

It follows, therefore, that paperclips should not be optimized to the extent that the human who desires paperclips is destroyed in the process of optimizing paperclips.

[-] CanadaPlus@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago)

Oh. I think the idea of a paperclip optimiser/maximiser is that it's created by accident, either due to an AGI emerging accidentally within another system or to a deliberately created AGI being buggy. It would still be able to self-improve, but wouldn't do it in a direction that seems logical to us.

I actually think it's the most likely possibility right now, personally. Nobody understands how neural nets really work, and they're bad at doing things in meatspace, like would be required in a robot army scenario. Maybe whatever elites end up in control will overcome that, or maybe they'll screw up.

this post was submitted on 19 Jul 2023
47 points (100.0% liked)
