[-] YouKnowWhoTheFuckIAM@awful.systems 14 points 4 months ago* (last edited 4 months ago)

They’re all in on chess as a sine qua non metric for intelligence if you spend enough time staring into the pit. Better to call it g than IQ tbh. Not only does chess have a lick of rigour, it has the nerd cultural cachet and the ludicrous white masculinity complex.

[-] YouKnowWhoTheFuckIAM@awful.systems 12 points 4 months ago* (last edited 4 months ago)

The interesting thing about this is that these people never stop to think that the future they dream of might never happen. Aside from the fact that their cryo company might simply go under, they never consider that in 200 years they might wake up under a dystopia.

At one time I was going out with someone who was into Max More, without either of us being cognisant of the rationalist link back then, and she gave me the infuriating justification that it was all a probabilities game with a bizarre political economy in the background. The thinking goes that if your society becomes a dystopia, there’s no reason and/or no resources to wake you up. Looking back, it’s amazing to see it as a combination of that characteristically (neo/)lib failure of imagination and Promethean ideology.

[-] YouKnowWhoTheFuckIAM@awful.systems 13 points 5 months ago* (last edited 5 months ago)

Not to get too corny about it, but there are people in this world who think “don’t condescend” means “be nice about other people’s shortcomings” and people who think it means “you might fucking learn something if you would just stop condescending to people you perceive as having shortcomings”, and the first group is completely oblivious to the difference.

Which is fine, actually, kind of. It certainly takes genuine work to change if, for whatever reason, you grew up seeing things in a particular way. But it’s also completely not fucking fine that there are so many people going about their lives pontificating on the world without a shred of the requisite humility.

[-] YouKnowWhoTheFuckIAM@awful.systems 13 points 6 months ago* (last edited 6 months ago)

I’m saying this goes further!

Actually I feel kind of irked that this reply seems to just miss the part at the end of the paragraph that says “it is, literally, indistinguishable from who they are”

Well this is where I was going with Lakatos. Among the large-scale conceptual issues with rationalist thinking is that there isn’t any understanding of what would count as a degenerating research programme. In this sense rationalism is a perfect product of the internet era: there are far too many conjectures being thrown out and adopted at scale on grounds of intuition for any effective reality-testing to take place. Moreover, since many of these conjectures are social, or about habits of mind, and the rationalists shape their own social world and their habits of mind according to those conjectures, the research programmes they develop are constantly tested, but only according to rationalist rules. And, as when the millenarian cult has to figure out what its leader got wrong about the date of the apocalypse, when the world really gets in the way it only serves as an impetus to refine the existing body of ideas still further, according to the same set of rules.

Indeed the success of LLMs illustrates another problem with making your own world, for which I’m going to cheerfully borrow the term “hyperstition” from the sort of cultural theorists of whom I’m usually wary. “Hyperstition” is, roughly speaking, where something which otherwise belongs to imagination is manifested in the real world by culture. LLMs (like Elon Musk’s projects) are a good example of hyperstition gone awry: rationalist AI science fiction manifested an AI programme in the real world, and hence immediately supplied the rationalists with all the proof they needed that their predictions were correct in general outline if not in exact detail.

But absent the hyperstitional aspect, LLMs would have been much easier to spot as by and large a fraudulent cover for mass data-theft and the suppression of labour. Certainly they don’t work as artificial intelligence, and the stuff that does work (I’m thinking radiology, although who knows when the big news is going to come out that that isn’t all it’s been cracked up to be), i.e. transformers and unbelievable energy-spend on data-processing, doesn’t even superficially resemble “intelligence”. With a sensitive critical eye, and an open environment for thought, this should have been, from early on, easily sufficient evidence, alongside the brute mechanicality of ChatGPT’s linguistic output, to realise that the prognostic tools the rationalists were using lacked both predictive and explanatory power.

But rationalist thought had shaped the reality against which these prognoses were supposed to be tested, and we are still dealing with people committed to the thesis that Skynet is, for better or worse, getting closer every day.

Lakatos’s thesis about degenerating research programmes asks us to predict novel facts and look for corroborating evidence. The rationalist programme does exactly the opposite. It predicts corroborative evidence, and looks for novel evidence which it can feed back into its pseudo-Bayesian calculator. The novel evidence is used to refine the theory, and the predictions are used to corroborate a (foregone) interpretation of what the facts are going to tell us.

Now, I would say, more or less with Lakatos, that this isn’t an amazingly hard and fast rule, and it’s subject to different interpretations. But it’s a useful tool for analysing what’s happening when you’re trying to build a way of thinking about the world. The pseudo-Bayesian tools, insofar as they have any impact at all, almost inevitably drag the project into degeneration, because they have no mechanism for assessing whether the “hard core” of their programme can be borne out by facts.
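If you want the degenerating dynamic in miniature, here’s a toy sketch (entirely my own construction, not anybody’s actual calculator; the hypotheses and numbers are made up for illustration) of a pseudo-Bayesian updater whose hard core sits behind a Lakatosian protective belt of auxiliaries:

```python
# Toy model: Bayes' rule applied selectively, so the "hard core"
# hypothesis is never the one on trial.

def bayes(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    marginal = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / marginal

hard_core = 0.9   # e.g. "superintelligence is imminent" (hypothetical)
auxiliary = 0.5   # e.g. "scaling is the path to it" (freely revisable)

for evidence_fits in [True, False, True, False, False]:
    if evidence_fits:
        # Evidence that fits is scored as corroboration of the hard core...
        hard_core = bayes(hard_core, 0.9, 0.2)
    else:
        # ...evidence that doesn't is absorbed by "refining" an auxiliary.
        auxiliary = bayes(auxiliary, 0.2, 0.5)

print(f"hard core: {hard_core:.2f}  auxiliary: {auxiliary:.2f}")
# The hard core's credence is monotonically non-decreasing: no possible
# stream of observations can ever count against the core itself.
```

Feed it any sequence of observations you like and the core only ever climbs, which is “constantly tested, but only according to rationalist rules” in about twenty lines.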

  1. Say you’re crazy
  2. Say they’re crazy
  3. Get muscular dystrophy when you’re a kid
  4. Marry J. Edgar Hoover
  5. Take up residence in Albania
  6. Stretch yourself on a rack so that you become over 6 1/2 feet tall
  7. Marry your mother
  8. Marry your father
  9. Blow up the Statue of Liberty…

Hey I think some of these are pretty good ideas

https://archive.org/details/2917616.0001.001.umich.edu/page/3/mode/1up

Wait, let me get this straight. His solution to achieve human escape velocity, which means “outpac[ing] AI’s influence and maintain human autonomy” (his words, not mine) is to increase AI’s influence and remove human autonomy?

Well how do YOU plan on shilling for the tech industry by scaring people up about LLMs?

Rage bait? My child, I am an anthropologist

I just want to draw special attention to the reasoning here

BigTech, which critically depends on hyper-targeted ads for the lion’s share of its revenue, is incapable of offering AI model outputs that are plausible given the location / language of the request. The irony.

  • request from Ljubljana using Slovenian => white people with high probability
  • request from Nairobi using Swahili => black people with high probability
  • request from Shenzhen using Mandarin => asian people with high probability

If a specific user is unhappy with the prevailing demographics of the city where they live, give them a few settings to customize their personal output to their heart's content.

Not gonna say anything in particular about that reasoning, just gonna draw attention to it

Too stupid to debunk without resorting to bullying.

i had a moment and i wanted to share it with everybody

[-] YouKnowWhoTheFuckIAM@awful.systems 14 points 10 months ago* (last edited 10 months ago)

The Sequences are individually short; there are just massively many of them - the fact that each one is woefully inadequate to its own aims is eclipsed by the size of the overall task.

The longer stuff, Siskind included, is precisely what you get from people with short attention spans who find it takes longer than that to justify the point they themselves want to make. There’s no structure, no overarching thematic or compositional coherence to each piece, just the unfolding discovery that more points still need to be made. This makes it well-suited to limited readers who think their community’s style of longform writing is special, but don’t trust it in authors who have worked on technique (literary technique is suspicious - splurging a first draft onto the internet marks the writer out as honest: rationalism is a 21st century romantic movement, not a scholastic one).

Besides which, the number of people who actually read all of any of these pieces is significantly lower than the number who claim to have “read all of” them.
