There's an "I am no man" joke in here somewhere that I am too tired to figure out.
to placate trans ideology
Let's see who he reads. Vox Day (who is now using ChatGPT to "disprove" evolution), Christopher Rufo, Curtis Yarvin, Emil Kirkegaard, Mars Review model Bimbo Ubermensch.... It's a real Who's Who of Why The Fuck Do I Know Who These People Are?!
Seems overly generous both to Christopher Hitchens and to Julia Galef.
(putting on an N95 before I enter the grocery store) dun dun DUN DUN dun dun DUN DUN deedle dee deedle dee DUN DUN
The more expertise you have, the more you can use ChatGPT as an idea collaborator, and use your own discernment on the validity of the ideas.
Good grief. Just take drugs, people.
Don't worry; this post is not going to be cynical or demeaning to you or your AI companion.
If you're worried that your "AI companion" can be demeaned by pointing out the basic truth about it, then you deserve to be demeaned yourself.
The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.
https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory
To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.
an hackernews:
a high correlation between intelligence and IQ
motherfuckers out here acting like "intelligence" is sufficiently well-defined that a correlation between it and anything else can be computed
intelligence can be reasonably defined as "knowledge and skills to be successful in life, i.e. have higher-than-average income"
eat a bag of dicks
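For the record, since apparently it needs saying: a correlation coefficient is only defined between two variables that have already been operationalized as numbers. Here's a minimal Pearson's r in Python, with entirely made-up figures, just to show that the math only starts after someone has quietly decided what "intelligence" is as a number:

```python
# Pearson's r is only defined between two numeric variables.
# The hard part, turning "intelligence" into a number, happens
# before this function is ever called. That's the whole problem.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up numbers, purely to show the mechanics:
iq_scores = [95, 100, 110, 120, 130]
incomes   = [40, 42, 55, 48, 70]   # hypothetical, in $1000s
print(pearson_r(iq_scores, incomes))
```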
Some of Kurzweil's predictions in 1999 about 2009:
- “Unused computes on the Internet are harvested, creating … human brain hardware capacity.”
- “The online chat rooms of the late 1990s have been replaced with virtual environments…with full visual realism.”
- “Interactive brain-generated music … is another popular genre.”
- “the underclass is politically neutralized through public assistance and the generally high level of affluence”
- “Diagnosis almost always involves collaboration between a human physician and a … expert system.”
- “Humans are generally far removed from the scene of battle.”
- “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion”
- “Cables are disappearing.”
- “grammar checkers are now actually useful”
- “Intelligent roads are in use, primarily for long-distance travel.”
- “The majority of text is created using continuous speech recognition (CSR) software”
- “Autonomous nanoengineered machines … have been demonstrated and include their own computational controls.”
Carl T. Bergstrom, 13 February 2023:
Meta. OpenAI. Google.
Your AI chatbot is not hallucinating.
It's bullshitting.
It's bullshitting, because that's what you designed it to do. You designed it to generate seemingly authoritative text "with a blatant disregard for truth and logical coherence," i.e., to bullshit.
I confess myself a bit baffled by people who act like "how to interact with ChatGPT" is a useful classroom skill. It's not a word processor or a spreadsheet; it doesn't have documented, well-defined, reproducible behaviors. No, it's not remotely analogous to a calculator. Calculators are built to be right, not to sound convincing. It's a bullshit fountain. Stop acting like you're a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is not to swim in it.
River crossing puzzles are a genre of logic problems that go back to the olden days. AI slop bots can seem to solve them, because plenty of worked solutions sit in their training data. But push the bot a little harder, and funny things happen.
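For contrast, here is what actually solving one looks like: a minimal sketch (mine, not anything the bots do) that brute-forces the classic wolf/goat/cabbage puzzle with a breadth-first search. No training data, no vibes; it either returns a valid plan or exhausts the state space and reports that none exists.

```python
from collections import deque

# The classic wolf/goat/cabbage puzzle as a breadth-first search over
# states. A state is (farmer's bank, items on the left bank).
ITEMS = frozenset({"wolf", "goat", "cabbage"})
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # pairs that eat each other

def safe(state):
    farmer, left = state
    for bank, side in ((left, "L"), (ITEMS - left, "R")):
        # A forbidden pair is only a problem if the farmer isn't there.
        if farmer != side and any(pair <= bank for pair in UNSAFE):
            return False
    return True

def moves(state):
    farmer, left = state
    here = left if farmer == "L" else ITEMS - left
    other = "R" if farmer == "L" else "L"
    for cargo in [None, *here]:  # row alone, or take one item along
        new_left = set(left)
        if cargo is not None:
            (new_left.discard if farmer == "L" else new_left.add)(cargo)
        yield cargo, (other, frozenset(new_left))

def solve():
    start, goal = ("L", ITEMS), ("R", frozenset())
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for cargo, nxt in moves(state):
            if nxt not in seen and safe(nxt):
                seen.add(nxt)
                queue.append((nxt, path + [cargo]))
    return None  # state space exhausted: provably unsolvable

# Prints a shortest plan; None means the farmer rows back alone.
print(solve())  # e.g. ['goat', None, 'wolf', 'goat', 'cabbage', None, 'goat']
```

Fifty-odd lines of deterministic code, same answer every run. That's the bar the "reasoning" bots are failing to clear.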