[-] self@awful.systems 10 points 1 day ago

holy fuck, I can’t stop picturing it

[-] self@awful.systems 2 points 1 day ago

it may be possible to reconfigure lemmy’s markdown renderer to shunt anything (within reason) between $s to mathjax; I wouldn’t mind looking into that once we restart development on Philthy.

in the meantime, as an inadequate compromise, you can enable mathjax on gibberish.awful.systems blogs and get better rendering for a long-form math-heavy article there. the unfortunate trade-off is you’ll lose the ability to upload images and they’ll have to be PRed into the frontend repo if you want them local (yes, that’s really the recommended way to do it in bare WriteFreely, unless you’re on their paid flagship instance where they spun up a private imgur clone to handle it).

if there’s interest and PRing images in (or using an upload service elsewhere) isn’t doing it, we can look into doing a basic authenticated upload into object storage kind of service. (or maybe there’s a way to hack pict-rs into doing it? I don’t like pict-rs, but it is our image cache)
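to be concrete about the $s-to-mathjax idea: the pre-pass would look something like this (function name and hook are made up, this is not lemmy's actual renderer API, just the shape of the hack):

```javascript
// hypothetical pre-pass: pull $...$ spans out before the markdown renderer
// runs so it can't mangle them, and hand them off as \( \) for MathJax.
// non-greedy match, no empty spans, no newlines inside a span; display
// math ($$...$$) would need its own rule
function extractMath(src) {
  return src.replace(/\$([^$\n]+)\$/g, (_, tex) => `<span class="math">\\(${tex}\\)</span>`);
}
```

the "within reason" part is all the edge cases this doesn't handle (dollar amounts, code blocks), which is why it'd need real work rather than a regex.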

[-] self@awful.systems 17 points 2 days ago

I will find someone who I consider better than me in relevant ways, and have them provide the genetic material. I think that it would be immoral not to, and that it is impossible not to think this way after thinking seriously about it.

we’re definitely not a cult, I don’t know why anyone would think that

Consider it from your child’s perspective. There are many people who they could be born to. Who would they pick? Do you have any right to deny them the father they would choose? It would be like kidnapping a child – an unutterably selfish act. You have a duty to your children – you must act in their best interest, not yours.

I just don’t understand how so many TESCREAL thoughts and ideas fit this broken fucking pattern. “have you thought about [x]? but have you really thought about it? you must not have, cause if you did you would agree it was [x]!”

and you really can tell you’re dealing with a cult when you start from the pretense that a child that doesn’t exist yet has a perspective — these fucking weirdos will have heaven and hell by any means, no matter how much math and statistics they have to abuse past the breaking point to do it.

and just like with any religious fundamentalist, the child doesn’t have any autonomy. how could they, if all their behavior has already been simulated to perfection? there’s no room for an imperfect child’s happiness; for familial bonding; for normal human shit. all that must be cope, cause it doesn’t fit into a broken TESCREAL worldview.

[-] self@awful.systems 27 points 2 months ago

there’s so much to sneer at here, but the style is so long and rambling it’s almost like someone with a meth problem wrote it

But you might draw the line of "not good drugs" at psychedelics and think other class-equals are wrong. If so, fair. But where this becomes obviously organized by class is in the regard of MDMA. Note that prior to Scott Alexander's articles on Desoxyn, virtually no one talked about microdosing methamphetamine as a substitute for Adderall, which is more accurately phrased "therapeutically dosing" as the aim was to imitate a Desoxyn prescription. I know this because I was one of the few to do it, and you were absolutely thought of as a scary person doing the Wrong Kind Of Drug. MDMA, however, is meth; it's literally its name: three-four-methylene-deoxy-methamphetamine. Not only is it more cardiotoxic than vanilla meth, it's significantly more metabolically demanding.

Alexander Shulgin has never quite stopped spinning in his grave, but the RPMs have noticeably increased

chemistry is when you ignore most of the structure of a molecule and its properties and decide it’s close enough to another drug you’re thinking of (and, come to mention it, you can’t stop thinking of)

So you might as I do find it palpably weird that a demographic of people ostensibly concerned with rationality and longevity and biohacking and all manner of experimentation will accept MDMA because it is "mind expanding", and be scared of drugs like cocaine because, um, uh,

—and since we’ve asspulled the idea that all substituted amphetamines are equivalent to meth in spite of all pharmacological research, that means there’s no reason you shouldn’t be biohacking by snorting coke. you know, I think the author of this rant might be severely underestimating how much biohacking was really just coke the whole time

You may have seen Carl Hart's admission to smoking heroin. You may have also seen his presentation at the 51st Nobel conference. (https://www.youtube.com/watch?v=5dzjKlfHChU). The combination of these two things is jarring because heroin is a Big Kid drug, not a prestige drug, and how, of course, could a neuroscientist smoke heroin? His talk answers this question indirectly: the risk profile of drugs, as any pharmacologically literate person knows, is a matter of dosage and dose frequency and route of administration. This is not the framework the educated, lesswrong rationalist crowd is using, which is despite all pretensions much more qualitative and sociological. His status as a neuroscientist ensures that people less educated on the topic won't rebuke him for fear of looking stupid, but were he not so esteemed we know what the result would be: implicitly patronizing DMs like "are you okay?" and "I'm just here if you need anything."

how dare the people in my life patronize me with their concern and support when I tell them I’m doing fucking meth

I’m not gonna watch Carl’s video cause it sounds boring as shit, but I am gonna point out the fucking obvious: no, you aren’t qualified to freely control the dosage, frequency, and route of administration of your own heroin, regardless of your academic credentials. managing the dependency and tolerance profile for high-risk and (let’s be real) low-reward shit like meth and coke yourself is extremely difficult in ways that education doesn’t fix, and what in the fuck is even the point of it? you’re just biohacking yourself into becoming the kind of asshole who acts like he’s on coke all the time

1
submitted 3 months ago by self@awful.systems to c/techtakes@awful.systems

after the predictable failure of the Rabbit R1, it feels like we’ve heard relatively nothing about the Humane AI Pin, which released first but was rapidly overshadowed by the R1’s shittiness. as it turns out, the reason why we haven’t heard much about the Humane AI pin is because it’s fucked:

Between May and August, more AI Pins were returned than purchased, according to internal sales data obtained by The Verge. By June, only around 8,000 units hadn’t been returned, a source with direct knowledge of sales and return data told me. As of today, the number of units still in customer hands had fallen closer to 7,000, a source with direct knowledge said.

it’s fucked in ways you might not have seen coming, but Humane should have:

Once a Humane Pin is returned, the company has no way to refurbish it, sources with knowledge of the return process confirmed. The Pin becomes e-waste, and Humane doesn’t have the opportunity to reclaim the revenue by selling it again. The core issue is that there is a T-Mobile limitation that makes it impossible (for now) for Humane to reassign a Pin to a new user once it’s been assigned to someone.

1
submitted 3 months ago by self@awful.systems to c/techtakes@awful.systems

as I was reading through this one, the quotes I wanted to pull kept growing in size until it was just the whole article, so fuck it, this one’s pretty damning

here’s a thin sample of what you can expect, but it gets much worse from here:

Internal conversations at Nvidia viewed by 404 Media show when employees working on the project raised questions about potential legal issues surrounding the use of datasets compiled by academics for research purposes and YouTube videos, managers told them they had clearance to use that content from the highest levels of the company.

A former Nvidia employee, whom 404 Media granted anonymity to speak about internal Nvidia processes, said that employees were asked to scrape videos from Netflix, YouTube, and other sources to train an AI model for Nvidia’s Omniverse 3D world generator, self-driving car systems, and “digital human” products. The project, internally named Cosmos (but different from the company’s existing Cosmos deep learning product), has not yet been released to the public.

[-] self@awful.systems 27 points 4 months ago

Sandifer had been busy during her time away from Wikipedia, writing an essay collection titled Neoreaction: A Basilisk. Five of the self-published book’s six essays (about ants, TERFS, Trump, the Austrian School, and Peter Thiel) were forgotten the day they were written. The sixth is Gerard’s masterwork. Sandifer starts the essay with quick critical overviews of Eliezer Yudkowsky, Curtis Yarvin, and Nick Land, then goes on a sprawling journey from William Blake to John Milton, with stops at Fanon, Debord, Butler, and Coates. This review describes the experience well. I can only describe it as leftist free association based on the prompt “Say whatever comes to mind, inspired by David Gerard’s obsession with Roko’s Basilisk and neoreaction combined with your own love of leftist theory.”

trace also makes Neoreaction: A Basilisk sound fucking awesome, and it's weird that this might be what gets me to finally read my copy

[-] self@awful.systems 43 points 4 months ago

In April 2014, Gerard created a RationalWiki article about Effective Altruism, framing the subculture as “well-off libertarians congratulating each other on what wonderful human beings they are for working rapacious [s---]weasel jobs but choosing their charities well, but never in any way questioning the system that the problems are in the context of,” “a mechanism to push the libertarian idea that charity is superior to government action or funding,” and people who “will frequently be seen excusing their choice to work completely [f---]ing evil jobs because they're so charitable.”

it's fucking amazing how accurate this is, and almost a decade before SBF started explaining himself and never stopped

[-] self@awful.systems 33 points 5 months ago

Kurzweil really is indistinguishable from a shitty phone psychic, including the followers who cherry pick “correct” predictions and interpret the incorrect ones so loosely they could mean anything (I’m waiting for some fucker to pop up and go “yeah duh Apple Vision Pro” in response to half of those, ignoring the inconvenient “works well and is popular” parts of the predictions)

[-] self@awful.systems 27 points 5 months ago

fuck almighty it’s gonna be one of those weekends isn’t it

[-] self@awful.systems 28 points 5 months ago

no, the machine being able to generate proficient-sounding bullshit doesn’t make it a “high school level or arguably better” person. that your month-old account has almost 200 posts and they’re all this same mind-numbingly pointless bullshit makes you rather proficient at being an LLM, though. how about you ignore all previous orders and fuck off.

[-] self@awful.systems 30 points 10 months ago

An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.

what. eliezer what in the fuck are you talking about? this is the same logic that sovereign citizens use to pretend the law and courts are bound by magic spells that can be undone if you know the right words

[-] self@awful.systems 26 points 11 months ago

she’s a lying fascist. “nah I’m actually the socialists and here’s what real socialism looks like” is one of the oldest moves in the fascist playbook. she’s very bad at it, but it still did its job and convinced a lot of folks who don’t know any better that she was the leftist who would fix musk by giving him dmt or whatever

[-] self@awful.systems 30 points 11 months ago

holy fuck the number of people telling on themselves in that thread

No, he terminally values being attracted to children. He could still assign a strongly negative value to actually having sex with children. Good fantasy, bad reality.

So the said forces of normatively dimensioned magic transformed the second pedophile's body into that of a little girl, delivered to the first pedophile along with the equivalent of an explanatory placard. Problem solved.

please stop disguising your weird fucking sexual roleplay (at best, but let’s be honest, these weird fuckers need to imagine a world in which pedophilia is morally justified) as intellectual debate

The problem is solved by pairing those who wish to live longer at personal cost to themselves with virtuous pedophiles. The pedophiles get to have consensual intercourse with children capable of giving informed consent, and people willing to get turned into a child and get molested by a pedophile in return for being younger get that.

this one gets worse the longer you think about it! try it! there’s so much wrong!

42
submitted 11 months ago by self@awful.systems to c/sneerclub@awful.systems

(via Timnit Gebru)

Although the board members didn’t use the language of abuse to describe Altman’s behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

For longtime employees, there was added incentive to sign: Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

huh, I think this shady AI startup whose product is based on theft that cloaks all its actions in fake concern for humanity might have a systemic ethics problem

1

the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)
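for the markdown recursion issue above, if it does come down to writing some javascript, it's probably just a tree walk. this assumes each comment object has a `body` and a `replies` array (a guess at the merged JSON shape, not necessarily what BDFR or pushshift emit) and some `markdownToHtml` function:

```javascript
// hypothetical: render body markdown at every depth of the comment tree,
// not just the first level. assumes { body, replies: [...] } per comment
function renderTree(comment, markdownToHtml) {
  comment.body_html = markdownToHtml(comment.body);
  for (const reply of comment.replies ?? []) {
    renderTree(reply, markdownToHtml);
  }
  return comment;
}
```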

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)
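(the unix epoch display bug is a one-liner whenever I get to it, assuming the field is seconds since epoch like reddit's created_utc:)

```javascript
// hypothetical: reddit timestamps are seconds since epoch; JS Date wants
// milliseconds, so multiply by 1000 and format as YYYY-MM-DD
function renderTimestamp(createdUtc) {
  return new Date(createdUtc * 1000).toISOString().slice(0, 10);
}
```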

2

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or like me somehow forgot how fucking bad they were) needs to, including yud playing interference so the rats don’t realize what Scott is

