[-] blakestacey@awful.systems 7 points 16 hours ago

River crossing puzzles are a genre of logic problems that go back to the olden days. AI slop bots can act like they can solve them, because many solutions appear in their training data. But push the bot a little harder, and funny things happen.
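(For a sense of how little "intelligence" the genre demands: the classic wolf/goat/cabbage instance has a state space so tiny that a dozen lines of breadth-first search solve it exhaustively. A minimal sketch in Python; the state encoding and names here are my own illustration, not anything the bots use:)

```python
from collections import deque

# Classic wolf-goat-cabbage river crossing, solved by breadth-first search.
# A state is (items on the left bank, which bank the farmer is on).
ITEMS = frozenset({"wolf", "goat", "cabbage"})
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # pairs that can't be left alone

def safe(bank):
    """A bank is safe if no unsafe pair is left there unattended."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    start = (ITEMS, "L")            # everything on the left bank
    goal = (frozenset(), "R")       # everything on the right bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer), path = queue.popleft()
        if (left, farmer) == goal:
            return path
        here = left if farmer == "L" else ITEMS - left
        for cargo in {None} | set(here):     # cross alone, or with one item
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if farmer == "L" else new_left.add)(cargo)
            new_farmer = "R" if farmer == "L" else "L"
            # The bank the farmer just left must stay safe.
            unattended = new_left if new_farmer == "R" else ITEMS - new_left
            if not safe(unattended):
                continue
            state = (frozenset(new_left), new_farmer)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo or "nothing", new_farmer)]))

for step, (cargo, bank) in enumerate(solve(), 1):
    print(f"{step}. farmer takes {cargo} to bank {bank}")
```

Which is the point: the puzzle itself is a brute-forceable triviality, so when a slop bot face-plants on a lightly reworded variant, you're watching pattern-matching on training data, not reasoning.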

[-] blakestacey@awful.systems 11 points 1 day ago

There's an "I am no man" joke in here somewhere that I am too tired to figure out.

[-] blakestacey@awful.systems 14 points 2 days ago

to placate trans ideology

[-] blakestacey@awful.systems 12 points 4 days ago* (last edited 4 days ago)

Let's see who he reads. Vox Day (who is now using ChatGPT to "disprove" evolution), Christopher Rufo, Curtis Yarvin, Emil Kirkegaard, Mars Review model Bimbo Ubermensch.... It's a real Who's Who of Why The Fuck Do I Know Who These People Are?!

[-] blakestacey@awful.systems 10 points 5 days ago

Seems overly generous both to Christopher Hitchens and to Julia Galef.

[-] blakestacey@awful.systems 17 points 5 days ago

(putting on an N95 before I enter the grocery store) dun dun DUN DUN dun dun DUN DUN deedle dee deedle dee DUN DUN

[-] blakestacey@awful.systems 16 points 5 days ago

The more expertise you have, the more you can use ChatGPT as an idea collaborator, and use your own discernment on the validity of the ideas.

Good grief. Just take drugs, people.

[-] blakestacey@awful.systems 31 points 5 days ago

Don't worry; this post is not going to be cynical or demeaning to you or your AI companion.

If you're worried that your "AI companion" can be demeaned by pointing out the basic truth about it, then you deserve to be demeaned yourself.

Mother Jones has a new report about Jordan Lasker:

A Reddit account named Faliceer, which posted highly specific biographical details that overlapped with Lasker’s offline life and which a childhood friend of Lasker’s believes he was behind, wrote in 2016, “I actually am a Jewish White Supremacist Nazi.” The Reddit comment, which has not been previously reported, is one of thousands of now-deleted posts from the Faliceer account obtained by Mother Jones in February. In other posts written between 2014 and 2016, Faliceer endorses Nazism, eugenics, and racism. He wishes happy birthday to Adolf Hitler, says that “I support eugenics,” and uses a racial slur when saying those who are attracted to Black people should kill themselves.

"TheFutureIsDesigned" bluechecks thusly:

You: takes 2 hours to read 1 book

Me: take 2 minutes to think of precisely the information I need, write a well-structured query, tell my agent AI to distribute it to the 17 models I've selected to help me with research, who then traverse approximately 1 million books, extract 17 different versions of the information I'm looking for, which my overseer agent then reviews, eliminates duplicate points, highlights purely conflicting ones for my review, and creates a 3-level summary.

And then I drink coffee for 58 minutes.

We are not the same.

For bonus points:

I want to live in the world of Hyperion, Ringworld, Foundation, and Dune.

You know, Dune.

(Via)

[-] blakestacey@awful.systems 58 points 1 month ago* (last edited 1 month ago)

The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory

To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.

28 points, submitted 7 months ago* (last edited 7 months ago) by blakestacey@awful.systems to c/sneerclub@awful.systems

The UCLA news office boasts, "Comparative lit class will be first in Humanities Division to use UCLA-developed AI system".

The logic the professor gives completely baffles me:

"Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically."

I'm trying to parse that. Really and truly I am. But it just sounds like this: "Normally, I would [do work]. But now, I can actually [do the same work]."

I mean, was this person somehow teaching comparative literature in a way that didn't involve reading the primary sources and, I'unno, comparing them?

The sales talk in the news release is really going all-in on selling that undercoat.

Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching — and offer students a very similar experience. And with AI-generated lesson plans and writing exercises for TAs, students in each discussion section can be assured they’re receiving comparable instruction to those in other sections.

Back in my day, we called that "having a book" and "writing a lesson plan".

Yeah, going from lecture notes and slides to something shaped like a book is hard. I know because I've fuckin' done it. And because I put in the work, I got the benefit of improving my own understanding by refining my presentation. As the old saying goes, "Want to learn a subject? Teach it." Moreover, doing the work means that I can take a little pride in the result. Serving slop is the cafeteria's job.

(Hat tip.)

[-] blakestacey@awful.systems 40 points 11 months ago* (last edited 11 months ago)

an hackernews:

a high correlation between intelligence and IQ

motherfuckers out here acting like "intelligence" is sufficiently well-defined that a correlation between it and anything else can be computed

intelligence can be reasonably defined as "knowledge and skills to be successful in life, i.e. have higher-than-average income"

eat a bag of dicks

So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?

[-] blakestacey@awful.systems 41 points 1 year ago* (last edited 1 year ago)

Some of Kurzweil's predictions in 1999 about 2009:

  • “Unused computes on the Internet are harvested, creating … human brain hardware capacity.”
  • “The online chat rooms of the late 1990s have been replaced with virtual environments…with full visual realism.”
  • “Interactive brain-generated music … is another popular genre.”
  • “the underclass is politically neutralized through public assistance and the generally high level of affluence”
  • “Diagnosis almost always involves collaboration between a human physician and a … expert system.”
  • “Humans are generally far removed from the scene of battle.”
  • “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion”
  • “Cables are disappearing.”
  • “grammar checkers are now actually useful”
  • “Intelligent roads are in use, primarily for long-distance travel.”
  • “The majority of text is created using continuous speech recognition (CSR) software”
  • “Autonomous nanoengineered machines … have been demonstrated and include their own computational controls.”

[-] blakestacey@awful.systems 38 points 1 year ago

Carl T. Bergstrom, 13 February 2023:

Meta. OpenAI. Google.

Your AI chatbot is not hallucinating.

It's bullshitting.

It's bullshitting, because that's what you designed it to do. You designed it to generate seemingly authoritative text "with a blatant disregard for truth and logical coherence," i.e., to bullshit.

Me, 2 February 2023:

I confess myself a bit baffled by people who act like "how to interact with ChatGPT" is a useful classroom skill. It's not a word processor or a spreadsheet; it doesn't have documented, well-defined, reproducible behaviors. No, it's not remotely analogous to a calculator. Calculators are built to be right, not to sound convincing. It's a bullshit fountain. Stop acting like you're a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is not to swim in it.

a lesswrong: 47-minute read extolling the ambition and insights of Christopher Langan's "CTMU"

a science blogger back in the day: not so impressed

[I]t’s sort of like saying “I’m going to fix the sink in my bathroom by replacing the leaky washer with the color blue”, or “I’m going to fly to the moon by correctly spelling my left leg.”

Langan, incidentally, is a 9/11 truther, a believer in the "white genocide" conspiracy theory and much more besides.

In which a man disappearing up his own asshole somehow fails to be interesting.

6 points, submitted 2 years ago* (last edited 2 years ago) by blakestacey@awful.systems to c/sneerclub@awful.systems

Flashback time:

One of the most important and beneficial trainings I ever underwent as a young writer was trying to script a comic. I had to cut down all of my dialogue to fit into speech bubbles. I was staring closely at each sentence and striking out any word I could.

"But then I paid for Twitter!"

AI doctors will revolutionize medicine! You'll go to a service hosted in Thailand that can't take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you to follow a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that you're following a script, so you can get the prescription the first AI told you to get.

Can't get mifepristone or puberty blockers? Just have a chatbot teach you how to cast Persuasion!

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way

Steven Pinker tweets thusly:

My friend & Harvard colleague Howard Gardner, offers a thoughtful critique of my book Rationality -- but undermines his cause, as all skeptics of rationality must do, by using rationality to make it.

"My colleague and fellow esteemed gentleman of Harvard neglects to consider the premise that I am rubber and he is glue."
