[-] self@awful.systems 7 points 1 day ago

Please calm down.

for some reason this has gotten people very worked up

Seriously I don’t know what I said that is so controversial or hard to understand.

I don’t know why it’s controversial here.

imagine coming into a conversation with people you don’t fucking know, taking a swing and a miss at one of them, and then telling the other parties in the conversation that they need to calm down — about racism.

the rest of your horseshit post is just you restating your original point. we fucking got it. and since you missed ours, here it is one more time:

race science isn’t real. we’re under no obligation to use terms invented by racists that describe nothing. if we’re feeling particularly categorical about our racists on a given day, or pointing out that one is using the guise of race science? sure, use the term if you want.

tone policing people who want to call a racist a racist ain’t fucking it. what in the fuck do you think you added to this conversation? what does anyone gain from your sage advice that “X is Y but Y isn’t X” when the other poster didn’t say that Y is X but instead that Y doesn’t exist?

so yeah no I’m not calm, go fuck yourself. we don’t need anyone tone policing conversations about racism in favor of the god damn racists

[-] self@awful.systems 8 points 1 day ago

Race pseudoscience is racist

yes, V0ldek said this

but not all racism is racial pseudoscience

they didn’t say this though, you did. race science is an excuse made up by racists to legitimize their own horseshit, just like how fascists invent a thousand different names to avoid being called what they are. call a spade a fucking spade.

why are you playing bullshit linguistic games in a discussion about racism? this is the exact same crap the “you can’t call everyone a nazi you know, that just waters down the term” tone police would pull when I’d talk about people who, shockingly, turned out to be fucking nazis.

“all nazis are fascists but not all fascists are nazis” who gives a shit, really. fascists and racists are whatever’s convenient for them at the time. a racist will and won’t believe in race science at any given time because it’s all just a convenient justification for the racist to do awful shit.

[-] self@awful.systems 4 points 2 days ago

Alternately I guess I could like, ask for an instance ban or something, if that doesn't make the instance un-viewable from my account

hey no problem, we’ve got systems in place for this kind of thing. happy trails.

(though for the record, re the idea that right-wing posters are allowed in here without being told to go fuck themselves: lol)

[-] self@awful.systems 4 points 2 days ago

no problem! I don’t mean to give you homework, just threads to read that might be of interest.

yeah, a few of us are Philosophy Tube fans, and I remember they’ve done a couple of good videos about parts of TESCREAL — their Effective Altruism and AI videos specifically come to mind.

if you’re familiar with Behind the Bastards, they’ve done a few episodes I can recommend dissecting TESCREAL topics too:

  • their episodes about the Zizians are definitely worth a listen; they explore and critique the group as a cult offshoot of LessWrong Rationalism.
  • they did a couple of older episodes on AI cults and their origins that are very good too.
[-] self@awful.systems 9 points 2 days ago

also fair enough. you might still enjoy a scroll through our back archive of threads if you’ve got time for it — there is a historical context to transhumanism that people like Musk exploit to further their own goals, and that’s definitely something to be aware of, especially as TESCREAL elements gain overt political power. there are positive versions of transhumanism and the article calls one of them out — the Culture is effectively a model for socialist transhumanism — but one must be familiar with the historical baggage of the philosophy or risk giving cover to people currently looking to cause harm under transhumanism’s name.

[-] self@awful.systems 9 points 2 days ago

fair enough!

but I don’t actually enjoy arguing and don’t have the skills for formalized “debate” anyway.

it’s ok, nobody does. that’s why we ban it unless it’s amusing (which effectively bans debate for everyone unless they know their audience well enough to not fuck up) — shitty debatelords take up a lot of thread space and mental energy and give essentially nothing back.

wherever “here” is

SneerClub is a fairly old community if you count in its Reddit origins; part of what we do here is sneering at technofascists and other adherents to the TESCREAL belief package, though SneerClub itself tends to focus on the LessWrong Rationalists. that’s the context we tend to apply to articles like the OP.

[-] self@awful.systems 13 points 2 days ago

There is a certain irony to everyone involved in this argument, if it can be called that.

don’t do this debatefan crap here, thanks

This, and similar writing I’ve seen, seems to make a fundamental mistake in treating time like only the next few, decades maybe, exist, that any objective that takes longer than that is impossible and not even worth trying, and that any problem that emerges after a longer period of time may be ignored.

this isn’t the article you’re thinking of. this article is about Silicon Valley technofascists making promises rooted in Golden Age science fiction as a manipulation tactic. at no point does the article state that, uh, long-term objectives aren’t worth trying because they’d take a long time??? and you had to ignore a lot of the text of the article, including a brief exploration of the techno-optimists and their fascist ties (and contrasting cases where futurism specifically isn’t fascist-adjacent), to come to the wrong conclusion about what the article’s about.

unless you think the debunked physics and unrealistic crap in Golden Age science fiction will come true if only we wish long and hard enough in which case, aw, precious, this article is about you!

[-] self@awful.systems 29 points 6 months ago

The man probably went insane after psychedelic use, and I have never noticed @BasedBeffJezos to advocate for fixing the system by shooting individual executives. It's a great shot at drawing a plausible-sounding connection; but I think it's not valid criticism.

wait I’m confused, to be a more effective TESCREAL am I not supposed to be microdosing psychedelics every day? you’re sending mixed signals here, yud (also lol @ the pure Ronald Reagan energy of going “yep obviously drugs just make you murderously insane” based on nothing but vibes and the need to find a scapegoat that isn’t the consequences of your own ideology)

1
submitted 11 months ago by self@awful.systems to c/techtakes@awful.systems

after the predictable failure of the Rabbit R1, it feels like we’ve heard relatively nothing about the Humane AI Pin, which released first but was rapidly overshadowed by the R1’s shittiness. as it turns out, the reason why we haven’t heard much about the Humane AI pin is because it’s fucked:

Between May and August, more AI Pins were returned than purchased, according to internal sales data obtained by The Verge. By June, only around 8,000 units hadn’t been returned, a source with direct knowledge of sales and return data told me. As of today, the number of units still in customer hands had fallen closer to 7,000, a source with direct knowledge said.

it’s fucked in ways you might not have seen coming, but Humane should have:

Once a Humane Pin is returned, the company has no way to refurbish it, sources with knowledge of the return process confirmed. The Pin becomes e-waste, and Humane doesn’t have the opportunity to reclaim the revenue by selling it again. The core issue is that there is a T-Mobile limitation that makes it impossible (for now) for Humane to reassign a Pin to a new user once it’s been assigned to someone.

1
submitted 11 months ago by self@awful.systems to c/techtakes@awful.systems

as I was reading through this one, the quotes I wanted to pull kept growing in size until it was just the whole article, so fuck it, this one’s pretty damning

here’s a thin sample of what you can expect, but it gets much worse from here:

Internal conversations at Nvidia viewed by 404 Media show when employees working on the project raised questions about potential legal issues surrounding the use of datasets compiled by academics for research purposes and YouTube videos, managers told them they had clearance to use that content from the highest levels of the company.

A former Nvidia employee, whom 404 Media granted anonymity to speak about internal Nvidia processes, said that employees were asked to scrape videos from Netflix, YouTube, and other sources to train an AI model for Nvidia’s Omniverse 3D world generator, self-driving car systems, and “digital human” products. The project, internally named Cosmos (but different from the company’s existing Cosmos deep learning product), has not yet been released to the public.

[-] self@awful.systems 43 points 1 year ago

In April 2014, Gerard created a RationalWiki article about Effective Altruism, framing the subculture as “well-off libertarians congratulating each other on what wonderful human beings they are for working rapacious [s---]weasel jobs but choosing their charities well, but never in any way questioning the system that the problems are in the context of,” “a mechanism to push the libertarian idea that charity is superior to government action or funding,” and people who “will frequently be seen excusing their choice to work completely [f---]ing evil jobs because they're so charitable.”

it's fucking amazing how accurate this is, and almost a decade before SBF started explaining himself and never stopped

[-] self@awful.systems 33 points 1 year ago

Kurzweil really is indistinguishable from a shitty phone psychic, including the followers who cherry pick “correct” predictions and interpret the incorrect ones so loosely they could mean anything (I’m waiting for some fucker to pop up and go “yeah duh Apple Vision Pro” in response to half of those, ignoring the inconvenient “works well and is popular” parts of the predictions)

[-] self@awful.systems 30 points 2 years ago

An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.

what. eliezer what in the fuck are you talking about? this is the same logic that sovereign citizens use to pretend the law and courts are bound by magic spells that can be undone if you know the right words

[-] self@awful.systems 30 points 2 years ago

holy fuck the number of people telling on themselves in that thread

No, he terminally values being attracted to children. He could still assign a strongly negative value to actually having sex with children. Good fantasy, bad reality.

So the said forces of normatively dimensioned magic transformed the second pedophile's body into that of a little girl, delivered to the first pedophile along with the equivalent of an explanatory placard. Problem solved.

please stop disguising your weird fucking sexual roleplay (at best, but let’s be honest, these weird fuckers need to imagine a world in which pedophilia is morally justified) as intellectual debate

The problem is solved by pairing those who wish to live longer at personal cost to themselves with virtuous pedophiles. The pedophiles get to have consensual intercourse with children capable of giving informed consent, and people willing to get turned into a child and get molested by a pedophile in return for being younger get that.

this one gets worse the longer you think about it! try it! there’s so much wrong!

42

(via Timnit Gebru)

Although the board members didn’t use the language of abuse to describe Altman’s behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

For longtime employees, there was added incentive to sign: Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

huh, I think this shady AI startup whose product is based on theft that cloaks all its actions in fake concern for humanity might have a systemic ethics problem

1

the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)

2

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or like me somehow forgot how fucking bad they were) needs to, including yud playing interference so the rats don’t realize what Scott is


self

joined 2 years ago