[-] scruiser@awful.systems 7 points 2 months ago

Yeah it's really not productive to engage directly.

I'd almost categorize Penrose as a borderline case of Nobel disease himself for the stuff he's said about Quantum Consciousness and, relatedly, the halting problem and Gödel's incompleteness theorem. But he actually has a proposed mechanism (involving microtubules) that is testable and falsifiable, and the physics half of what he is talking about is within his domain of expertise.

[-] scruiser@awful.systems 8 points 2 months ago

It's worse than you're remembering! Eliezer has claimed that deep neural networks (maybe even something along the lines of LLMs) could learn to break hashes just through exposure to hash/plaintext pairs in the training data set.

The original discussion: here about a LessWrong post and here about a tweet. And the original LessWrong post if you want to go back to the source.
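For anyone wondering why that claim is absurd: cryptographic hashes are designed for the avalanche effect, so nearly identical plaintexts produce statistically unrelated digests. A minimal sketch (the inputs and the `bit_diff` helper are my own illustration, not from the original discussion):

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two plaintexts differing in a single character.
h1 = hashlib.sha256(b"password1").digest()
h2 = hashlib.sha256(b"password2").digest()

# SHA-256 outputs 256 bits; a one-character change flips roughly half of
# them. There is no smooth structure relating inputs to outputs, hence no
# gradient for a network to descend toward "inverting the hash".
print(bit_diff(h1, h2))
```

Training on hash/plaintext pairs gives the network a lookup table at best; generalizing would amount to breaking the hash function itself.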

[-] scruiser@awful.systems 7 points 3 months ago

I mean... Democrats making dishonest promises of actual leftist solutions would at least be an acknowledgement that actual leftism exists, so I would count that as net progress compared to their current bland status-quo maintenance. But yeah, your overall point is true.

[-] scruiser@awful.systems 8 points 3 months ago

That sounds like actual leftism, so no they really don't have the slightest inkling, they still think mainstream Democrats are leftist (and Democrats with some traces of leftism like Bernie or AOC are radical extremist leftists).

[-] scruiser@awful.systems 7 points 3 months ago

It's the sort of stuff that makes great material for science fiction! It's less fun when you see it in the NYT or quoted by mainstream politicians with plans that will wreck the country.

[-] scruiser@awful.systems 8 points 3 months ago

We're already behind schedule, we're supposed to have AI agents in two months (actually we were supposed to have them in 2022, but ignore the failed bits of earlier prophecy in favor of the parts you can see success for)!

[-] scruiser@awful.systems 8 points 3 months ago

I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term "0-2 paradigm shifts" so he can claim prediction success for stuff LLMs do, and "paradigm shift" is vague enough that he could still claim success if it's been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).

[-] scruiser@awful.systems 7 points 11 months ago* (last edited 11 months ago)

Some nitpicks, some of which are serious and some of which are sneers...

consternating about the policy implications of Sam Altman’s speculative fan fiction

Hey, the fanfiction is actually Eliezer's (who in turn copied it from older scifi), Sam Altman just popularized it as a way of milking the doom for hype!

So, for starters, in order to fit something as powerful as ChatGPT onto ordinary hardware you could buy in a store, you would need to see at least three more orders of magnitude in the density of RAM chips—​leaving completely aside for now the necessary vector compute.

Well actually, you can get something close to as powerful on a personal computer... because the massive size of ChatGPT and the like doesn't actually improve performance that much (the most useful gain, I think, is the longer context window?).

I actually liked one of the Lawfare AI articles recently (even though it did lean into a light fantasy scenario)... https://www.lawfaremedia.org/article/tort-law-should-be-the-centerpiece-of-ai-governance . Their main idea is that corporations should be liable for near-misses: if it can be shown that the corporation nearly caused a much bigger disaster, they get fined in accordance with the bigger disaster. Of course, US courts routinely fail to properly penalize (either in terms of incentives or in terms of compensation) corporations for harms they actually cause, so this seems like a distant fantasy to me.

AI has no initiative. It doesn’t want anything

That’s next on the roadmap though, right? AI agents?

Well... if the way corporations have tried to use ChatGPT has taught me anything, it's that they'll misapply AI in any and every way that looks like it might save or make a buck. So they'll slap an API onto an AI and wire it into a script to turn it into an "agent", despite that being entirely outside the use case of spewing words. It won't actually be agentic, but I bet it could cause a disaster all the same!

[-] scruiser@awful.systems 7 points 1 year ago* (last edited 1 year ago)

Careful, if you present the problem and solution that way, AI tech bros will try pasting an LLM and a Computer Algebra System (both of which already exist) together, invent a fancy buzzword for it, act like they invented something fundamentally new, devise some benchmarks that break typical LLMs but their Frankenstein kludge can ace, and then sell the hype (actual consumer applications are luckily not required in this cycle, but they might try some anyway).

I think there is some promise to the idea of an architecture similar to an LLM with components able to handle math like a CAS. It won't fix a lot of LLM issues, but maybe some fundamental ones (like the ability to count or to hold an internal state) will improve. And (as opposed to an actually innovative architecture) simply pasting LLM output into CAS input and then the CAS output back into LLM input (which, let's be honest, is the first thing tech bros will try, as it doesn't require much basic research improvement) will not help that much and will likely generate an entirely new breed of hilarious errors and bullshit (I like the term bullshit instead of hallucination; it captures the connotation that the errors are of a kind with the normal output).

[-] scruiser@awful.systems 7 points 1 year ago

If it was one racist dude at a conference, I could accept it was a horrible oversight on the conference organizers' part if they immediately apologized and assured it wouldn't happen again. But 8 racist dudes (or 12 if you count the more mask-on racists) is too many to be accidental or an oversight.

how is that not obvious

Well, probably some of them are deliberately racist HBD advocates, but are mask-on enough to play dumb, hand-wring, and complain about free speech. Some of them have HBD sympathies but aren't quite outright advocates, so they don't condemn the inclusion of racists because of their own sympathies. Some of them are against HBD, but know that being too direct and forceful, and not framing everything in 8 layers of charity and good-faith assumptions, isn't acceptable on the LessWrong or EA forums, so they don't just come out and say what they mean. And some of them actually buy all the rhetoric about charity and free speech and act as useful idiots or a buffer for the others.

[-] scruiser@awful.systems 8 points 1 year ago

Yudkowsky’s original rule-set

Yeah, the original no-politics rule on LessWrong baked libertarian assumptions into the discourse (because no-politics means the default political assumptions of the major writers and audience are free to take over). From there it was just a matter of time until it ended up somewhere right wing.

“object level” vs “meta level” dichotomy

I hadn't linked the tendency to go meta to the cultishness or the no-politics rule before, but I can see the connection now that you point it out. As you say, it prevents simply naming names and using direct quotes, which seems to be a pretty good tactic for countering racists.

could not but have been the eventual outcome of the same rule-set

I'm not sure that rule-set made HBD hegemony inevitable; there were a lot of other factors that helped along the way! The IQ fetishism made it ripe for HBDers. The edgy speculative futurism is also fertile ground for HBD infestation. And the initial audience and writings having a libertarian bent made the no-politics rule favor right-wing ideology; an initial audience and writing set with a strong left-wing bent might have gone in a different direction (not that a tankie internal movement would be good, but at least I don't know tankies to be HBD proponents).

just to be normal

Yeah, it seems really rare for a commenter to simply say racism is bad and you shouldn't invite racists to your events. Even the ones that seem to disagree with racism compulsively engage in hand-wringing, apologize for being offended, and carefully moderate their condemnation of racism and racists.

[-] scruiser@awful.systems 8 points 2 years ago

Right, it's a joke in the sense that the phrase "Caliph" started its usage in a non-serious fashion that got a chuckle, but the way Zack uses it, it really doesn't feel like a joke. It feels like the author genuinely wants Eliezer to act as the central source of authority and truth among the rationalists, and thus Eliezer must not endorse the heresy of inclusive language, or else it will mean their holy prophet has contradicted the holy scripture, causing a paradox.

