[-] bitofhope@awful.systems 6 points 1 month ago

Frankly yes. In a better world art would not be commodified, and the economic barriers that hinder commissioning art from skilled human artists under capitalism would not exist; generative AI recombining existing art would then likely be far less problematic and harmful to artists and audiences alike.

But also that is not the world where we live, so fuck GenAI and its users and promoters lmao stay mad.

[-] bitofhope@awful.systems 5 points 3 months ago

Excellent primer. It would fit well into this thread if you wanna link it there https://awful.systems/post/3935670

[-] bitofhope@awful.systems 6 points 3 months ago

First impression: queer (and therefore better) take on whatever Cybersmith was on about(??)

[-] bitofhope@awful.systems 6 points 3 months ago

I listened to parts 2–4 today and even with an above-average familiarity with rationalist weirdness it was a wild fucking ride.

Kinda wild to have a community of vegan trans women whose most notable victims are a border patrol agent and a landlord and who are still the baddies in this situation.

The HPMOR/LW parts and to a lesser extent bits of CFAR were absolutely familiar (and all the more cringeworthy for it). Never quite realized exactly what obviously psychologically torturous cult shit the CFAR events were.

[-] bitofhope@awful.systems 5 points 11 months ago

Kids these days with their newfangled romanticism and their fortepianos. No respect for the sonata form.

[-] bitofhope@awful.systems 5 points 1 year ago

OK, David's was still "that tracks" tier but that crosses the line to NEEEEEEERD!

They sound like a keeper.

[-] bitofhope@awful.systems 5 points 1 year ago

Smoke a joint to enhance simulation capability

[-] bitofhope@awful.systems 6 points 1 year ago

Oh, so that's where the punching someone when you see a yellow car/VW beetle thing comes from. Interesting to note that of all the customs to observe in a social encounter (such as "don't suddenly punch people for stupid reasons") Duncan chooses the convention mostly followed by tween boys for the purpose of annoying each other.

Anyway, I guess the book fails to defend the undefendable, then? Seems pretty obvious, to be honest.

[-] bitofhope@awful.systems 5 points 2 years ago

Their methodology gave conclusive and unquestionable evidence that people with caucasoid skull shape are innately and genetically predisposed towards knowing what a "regatta" is.

[-] bitofhope@awful.systems 5 points 2 years ago

They claim that through technology, they will be able to usher in a utopia where people don’t have to work as much. Funny how they don’t lobby for laws that would require technological advancements to benefit workers, not the owners.

This is a good point, but I think it's best to be careful with anything they might perceive as too overtly "political". It's one thing to argue why AI doomsday cultism is bad and another to advocate for fully automated luxury communism.

It’s no accident that the people claiming that AGI is a risk to humanity are also the ones trying hardest to get there. They are just a little scared of AGI because it could truly cause societal upheaval, and those at the top of a society have the most to lose in that situation. It’s self preservation, not benevolence. The power structures of modern society are vital to their continued lives of extravagance. In the end, they all just want to accumulate wealth, not pay any taxes, and try to make themselves feel like a hero for doing it.

I might be cynical, but this sounds like overselling AGI and not just because I don't believe we are anywhere close to creating anything I'd consider one.

I'm not looking to have a debate or take an adversarial position. If I am to go, I'll focus on making a case for why AI doom is an unrealistic sci-fi scenario, what actual AI risks we should worry about, why some people benefit from the doomer narrative and possibly touch on why Effective Altruism isn't a wholly benign movement. The point is only to give them the background so they can make their own decisions with healthy skepticism.

I don't assume students interested in rationality and charity work to be bad people or anything. Sneering and berating them right in their face would be counterproductive.

[-] bitofhope@awful.systems 5 points 2 years ago

That works for general decision making. The reason I'm asking for input is that there might be risks or opportunities involved that I haven't fully considered. There are also people here who have more experience interacting with the AI alarmists' target audience and might be able to comment on their experiences or suggest strategies and talking points.

