[-] scruiser@awful.systems 4 points 1 week ago

Ha, even by the standards of SCP fanfiction, the slop Geoff Lewis got it to churn out was bad and silly.

[-] scruiser@awful.systems 4 points 1 month ago

Wow, that blows past Dunning-Kruger overestimation into straight-up Time Cube tier crankery.

[-] scruiser@awful.systems 4 points 1 month ago

He hasn't missed an opportunity to ominously play up genAI capabilities (I remember him doing so as far back as AI Dungeon), so it will be a real break from habit for him to finally admit how garbage their output is.

[-] scruiser@awful.systems 3 points 2 months ago

Yep. If you're looking for a snappy summary of this situation, this reddit comment has one: an open source LLM Pokemon harness/scaffold runs 4.8k lines of Python and is still missing features essential to Gemini's harness, whereas an open source Lua script to play Pokemon is 7.2k lines, was written in 2014, and consistently speedruns the game in under two hours.

[-] scruiser@awful.systems 4 points 2 months ago

That's unfair.

Beaker deserves better than to get compared to a eugenicist ~~crypto~~fascist.

[-] scruiser@awful.systems 3 points 2 months ago

Fellas it’s almost June in the year of the “agents” and frankly I don’t see shit.

LLM agents can beat Pokemon... if you give them enough customized tools and prompting that, with the same number of lines of instruction, you could just directly code a bot that beats Pokemon without an LLM in the first place. And that's if you don't mind the LLM agent playing much, much worse than literal children.
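To make the contrast concrete, here's a rough hypothetical sketch (every name in it is invented, not from either codebase) of the shape the two approaches take:

```python
# Hypothetical sketch, not real harness code: the point is that the hard
# parts -- parsing game state, pathfinding, input handling, memory --
# are ordinary hand-written code in BOTH versions. The "agent" version
# just adds an expensive model call to pick which hand-written tool runs.

def parse_game_state(frame: bytes) -> dict:
    """Hand-written by humans either way; the LLM never sees raw pixels."""
    return {"map": "pallet_town", "pos": (3, 4)}

def pathfind(state: dict, target: str) -> list[str]:
    """Also hand-written by humans either way."""
    return ["up", "up", "a"]

def llm_pick_tool(state: dict, notes: list[str]) -> str:
    """Stand-in for the model call; a real prompt would be stuffed with
    parsed state, goals, and accumulated notes."""
    return "walk_to_next_objective"  # placeholder decision

def agent_step(frame: bytes, notes: list[str]) -> list[str]:
    state = parse_game_state(frame)
    tool = llm_pick_tool(state, notes)        # the only LLM-shaped part
    notes.append(f"used {tool}")              # external "memory" for the model
    return pathfind(state, tool)

def scripted_bot_step(frame: bytes) -> list[str]:
    state = parse_game_state(frame)
    return pathfind(state, "next_objective")  # same code, no model call
```

Strip out the prompting plumbing and the scripted bot is the same program, minus the model call.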

[-] scruiser@awful.systems 2 points 3 months ago* (last edited 3 months ago)

Oh lol, yeah, I forgot he originally used lesswrong as a pen name for HPMOR (he immediately claimed credit once it actually got popular).

So the problem is that lesswrong and Eliezer were previously obscure enough that few academic or educated sources bothered debunking them, but still prolific enough to get lots of casual readers. Sneerclub makes fun of their shit as it comes up, but effort posting is tiresome, so our effort posts are scattered among more casual mockery. There is one big essay connecting the dots, written by serious academics (Timnit Gebru and Émile Torres): https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599 . They point out the people common to lesswrong, effective altruists, transhumanists, extropians, etc., and explain how the ideologies are related and how they originated.

Also, a related irony: Timnit Gebru is interested in, and has written serious academic papers about, algorithmic bias and AI ethics. But for whatever reason (because she's an actual academic? because she wrote a paper accurately calling them out? because of the racists among them who are actually in favor of algorithmic bias?) the "AI safety" lesswrong people hate her and are absolutely not interested in working with the AI ethics field of academia. In a world where they were saner and less independent-minded cranks, lesswrong and MIRI could have tried to get into the field of AI ethics and used that to sanewash themselves and build reputation/respectability (and maybe even test their ideas in a field with immediately demonstrable applications, instead of wildly speculating about AI systems that aren't remotely close to existing). Instead, they only sort of obliquely imply that AI safety is an extension of AI ethics whenever their ideas are discussed in mainstream news sources, but they don't really maintain the facade if actually pressed on it (I'm not sure how much of that is mainstream reporters trying to sanewash them and how much is deliberate deception on their part).

For a serious but much gentler rebuttal of Effective Altruism, there is this blog: https://reflectivealtruism.com/ . Note that this blog is written by an Effective Altruist trying to persuade other EAs of the problem, so they often extend too much credit to EA and lesswrong in an effort to get their points across.

...and I realized you may not have context on the EAs... they are a movement spun off of academic thinking about how to do charity most effectively, and lesswrong was a major early contributor of thinking and members to their movement (they also currently get members from more mainstream recruiting, so it occasionally causes clashes when more mainstream people look around and notice the AI doom-hype and the pseudoscientific racism). So like half of EA's work is doing charity effectively: mosquito nets for countries with malaria problems, nutrition supplements for malnourished children, or anti-parasitic drugs to stop... and the other half of their work is funding stuff like "AI safety" research or eugenics think tanks. Oh, and EA's utilitarian "earn to give" concept was a major inspiration for Sam Bankman-Fried trying to make a bunch of money through FTX, so that's another dot connected! (And SBF got a reputation boost from his association with them, and in general there is the issue of billionaire philanthropists laundering their reputations and buying influence through philanthropy, so add that to the pile of problems with EA.)

Edit: I realized you were actually asking for books about real rationality, not resources deconstructing rationalists... so "Thinking, Fast and Slow" is the book on cognitive biases that Eliezer cribs from. Douglas Hofstadter has a lot of interesting books on philosophical thinking in computer science terms: "Gödel, Escher, Bach" and "I Am a Strange Loop". In some ways GEB is dated, but I think that adds context that makes it better (in that you can immediately see how the book is flawed, so you don't think computer science can replace all other fields). The institute Timnit Gebru is a part of looks like a good source for academic writing on real AI harms: https://www.dair-institute.org (but I haven't actually read most of her work yet, just the TESCREAL essay, and skimmed a few of her other writings).

[-] scruiser@awful.systems 4 points 3 months ago* (last edited 3 months ago)

Even without the sci-fi nonsense, the political elements of the story feel absurd: the current administration staying on top of the situation, making reasoned (if not correct) responses, and keeping things secret feels implausible given current events. It kind of shows the political biases of the authors that they can manage to imagine the Trump administration acting so normally or competently. Oh, and the hyper-competent Chinese spies (and the Chinese having no chance at catching up without them) feel like another one of the authors' biases coming through.

[-] scruiser@awful.systems 2 points 5 months ago

I normally think gatekeeping fandoms and calling people fake fans is bad, but in this case it's necessary and deserved to assume Elon Musk is only a surface-level fan, grabbing names and icons without understanding them.

[-] scruiser@awful.systems 4 points 11 months ago

It's not all the exact same! ~~Friendship is Optimal adds in pony sex~~

[-] scruiser@awful.systems 4 points 11 months ago

There’s also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

/r/rational isn't just for AI fiction; it also ~~claims~~ includes anything with decent verisimilitude, so stuff like Hatchet and The Martian shows up in its recommendation lists too! ~~letting it claim credit for better fiction than the AI stuff~~

[-] scruiser@awful.systems 4 points 2 years ago* (last edited 2 years ago)

I was on the old reddit /r/sneerclub, same username. Finally making an account here. I will probably post a few sneequence classics that haven't been ~~discussed~~ mocked properly before.

