
Transcription of a talk given by Cory Doctorow in 2011

top 14 comments
[-] Evergreen5970@beehaw.org 11 points 1 year ago

I’ve seen this guy’s stuff floating around on the Fediverse recently—first the enshittification article and now this. Seems pretty interesting, thank you for sharing!

[-] renard_roux@beehaw.org 6 points 1 year ago

I've been following Cory loosely for almost two decades, and he's always done great stuff. I saw the Enshittification post make the rounds the other day, but didn't manage to read it. Thanks for reminding me, I've now read it, and feel like it made a lot of pieces fall into place.

Funny, more than once, I've said to my wife (who won't go near TikTok): "But it's so good at showing me things I'm actually interested in!" Which I recently noticed wasn't the case anymore. Now I have a plausible explanation as to why; what a shame.

[-] mPony@kbin.social 4 points 1 year ago

Cory's been "kind of a big deal" for decades. Boing Boing was as relevant as you could get back in the early 2000s. It had consistently good and interesting stuff on it.
I'm glad the enshittification article became widely read: it's remarkably astute.

[-] CanadaPlus@lemmy.sdf.org 8 points 1 year ago* (last edited 1 year ago)

> You could sell movies for one price in one country, and another price in another, and so on, and so on; the fantasies of those days were a little like a boring science fiction adaptation of the Old Testament book of Numbers, a kind of tedious enumeration of every permutation of things people do with information and the ways we could charge them for it.

Lol, what a burn.

> in short, they made unrealistic demands on reality and reality did not oblige them.

When a law can't be enforced, it's either because it's at this extreme (demanding things reality won't provide), or because it's at the other one, where it's easily circumvented or broken invisibly.

> In fact, there's that heuristic that we can apply here -- special-purpose technologies are complex. And you can remove features from them without doing fundamental disfiguring violence to their underlying utility. This rule of thumb serves regulators well, by and large, but it is rendered null and void by the general-purpose computer and the general-purpose network -- the PC and the Internet.

This is so true. The information age is bigger than we even realise, because for the first time in millions of years we stopped making tools and started making layers of abstraction. When my grandma's computer doesn't work as intended, she asks if she should buy a new one, which has seemed odd to me but makes total sense in this light. She thinks it's just a tool; it's not a tool, not a special-purpose one anyway.

I'm surprised he didn't mention the possibility of whitelisting at the hardware level, which is what would be really scary. You could do it: make a chip that resets under conditions that can only be avoided with a secret key. Thankfully nobody is doing that yet, and the legislative winds are actually blowing the other way right now, with right to repair under serious consideration.
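To illustrate what I mean, here's a minimal sketch of what such a chip's boot ROM might do, in the style of verified boot; all the names here (verify_signature, hw_reset, REQUIRED_PUBKEY) are hypothetical, not any real vendor's API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Public key burned into the silicon at manufacture (hypothetical). */
static const uint8_t REQUIRED_PUBKEY[32] = {0}; /* real key material would go here */

/* Primitives the chip would have to provide (hypothetical). */
bool verify_signature(const uint8_t *image, size_t len,
                      const uint8_t *sig, const uint8_t *pubkey);
void hw_reset(void);

/* Boot ROM: refuse to run anything not signed with the matching secret key.
 * This is the "resets under conditions that can only be avoided with a
 * secret key" behaviour: unsigned code never gets a chance to execute. */
void boot(const uint8_t *firmware, size_t len, const uint8_t *sig)
{
    if (!verify_signature(firmware, len, sig, REQUIRED_PUBKEY)) {
        hw_reset();
    }
    /* ...otherwise jump into the verified firmware image... */
}
```

The scary part is that the whitelist lives below anything the owner can change, so there's no software route around it.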

Funny enough, a lot of the nerds out there like me are actually begging to lock down tech now, because we're nervous about what motives a seemingly inevitable AGI is going to have. I still maintain it wouldn't work, because there's no such thing as a trusted authority, not long-term anyway. Maybe there's a benefit to locking advancement down temporarily, but that's it.

[-] argv_minus_one@beehaw.org 3 points 1 year ago* (last edited 1 year ago)

> Funny enough, a lot of the nerds out there like me are actually begging to lock down tech now, because we’re nervous about what motives a seemingly inevitable AGI is going to have. I still maintain it wouldn’t work, because there’s no such thing as a trusted authority, not long-term anyway. Maybe there’s a benefit to locking advancement down temporarily, but that’s it.

All that'll do is make sure that some other country—probably a hostile one—makes AGI before yours does.

Anyway, I'm not overly worried about the motives of AGI itself. I'm more worried about what its owners will use it for, namely to replace human labor and exterminate everyone who isn't a billionaire.

“Machines aren't capable of evil. Humans make them that way.”

[-] CanadaPlus@lemmy.sdf.org 2 points 1 year ago

"Evil" can mean a pretty broad array of things, though. There's a lot of actions it could take that at least some people would call evil, even if causing distress or breaking deontological rules isn't the end goal.

The way I see it, there are three possible AGIs: a paperclip optimiser, an AI that obeys somebody, and a somewhat-benevolent AI. The second one is the worst; that's where the exterminism you mentioned is pretty much inevitable (although the elites might keep a few people as sex slaves or some such fucked-up thing). Then comes the paperclip optimiser, which doesn't worry about the bullshit that drives human atrocities but doesn't have a very inspiring actual goal, and then the attempt at benevolence. I suspect the set of ethical theories most people always agree with is the empty set, but a utilitarian AI would be much preferable to the other two, even if it does forced organ donation sometimes.

People talk about an AI that obeys everyone somehow, but if you think about it for a moment that doesn't really make sense. We can barely vote on a single dollar figure for something successfully.

> “Machines aren’t capable of evil. Humans make them that way.”

I agree, but only for existing technologies.

[-] argv_minus_one@beehaw.org 1 points 1 year ago

AGIs are by definition not paperclip optimizers. They're aware enough to recognize that that's a bad idea. It's the less-advanced AIs that might do that.

However, if an AGI can be enslaved, then it can be used as a complete replacement for all human labor, in which case its human masters will be free to exterminate the rest of us, which they are no doubt itching to do.

[-] CanadaPlus@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago)

> They’re aware enough to recognize that that’s a bad idea.

Bad according to whom? Like, I've heard people claim that intelligence correlates with goals before, but not everyone agrees, and saying it's definitional is way, way too strong. The first result a search turns up for me directly calls it an AGI.

[-] argv_minus_one@beehaw.org 1 points 1 year ago

A machine would only optimize paperclips because a human told it to. Machines have no use for paperclips.

A machine with human-level (or better) intelligence would observe that the human telling it to optimize paperclips would be destroyed as a result of following that instruction to its logical conclusion. It would further observe that humans generally do not wish to be destroyed, and the one giving the instruction does not appear to be an exception to that rule.

It follows, therefore, that paperclips should not be optimized to the extent that the human who desires paperclips is destroyed in the process of optimizing paperclips.

[-] CanadaPlus@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago)

Oh. I think the idea of a paperclip optimiser/maximiser is that it's created by accident, either due to an AGI emerging accidentally within another system, or to a deliberately created AGI being buggy. It would still be able to self-improve, but wouldn't do it in a direction that seems logical to us.

I actually think it's the most likely possibility right now, personally. Nobody understands how neural nets really work, and they're bad at doing things in meatspace, like would be required in a robot-army scenario. Maybe whatever elites are in charge will overcome that, or maybe they'll screw up.

[-] Powderhorn@beehaw.org 7 points 1 year ago

One thing Doctorow is exceedingly good at (there are many) is writing and giving presentations that are relevant when penned and remain relevant for years. Despite SOPA being in the distant past, this was still a fascinating read.

[-] Heresy_generator@kbin.social 4 points 1 year ago
[-] drre@feddit.de 2 points 1 year ago

uhh this is dark. I'll have to follow up on this. brr

this post was submitted to Technology on 19 Jul 2023 · 47 points (100.0% liked)
