[-] diz@awful.systems 8 points 1 month ago

I think I figured it out.

He fed his post to an AI and asked it to list the fictional universes he’d want to live in, and that’s how he got Dune. Precisely the information he needed, just as his post describes.

[-] diz@awful.systems 10 points 1 month ago* (last edited 1 month ago)

I am also presuming this is about purely non-fiction technical books

He has Dune on his list of worlds to live in, though...

edit: I know. He fed his post to an AI and asked it to list the fictional universes he'd want to live in, and that's how he got Dune. Precisely the information he needed.

[-] diz@awful.systems 8 points 1 month ago* (last edited 1 month ago)

Naturally, that system broke down (via capitalists grabbing the expensive fusion power plants for their own purposes)

This is kind of what I have to give Niven. The guy is a libertarian, but he would follow his story all the way into results like that. And his series where organs are harvested for minor crimes? It completely flew over my head that he was trying to criticize taxes, and not, say, Republican tough-on-crime politics, mass incarceration, and for-profit prisons. Because he followed the logic of the story, it aligned naturally with its real-life counterpart, the for-profit prison system, even though he wanted to make some completely insane anti-tax argument where taxing rich people is like harvesting their organs or something.

On the other hand, the much better regarded Heinlein, also a libertarian, would write up a moon base that exports organic carbon and where you have to pay for the oxygen you convert to CO2, just because he wanted a story inside of which "having to pay for air to breathe" works fine.

[-] diz@awful.systems 9 points 1 month ago* (last edited 1 month ago)

I think it's gotten to the point where it's about as helpful to point out that it is just an autocomplete bot as it is to point out that "it's just the rotor blades chopping sunlight" when a helicopter pilot is impaired by flicker vertigo and is gonna crash. Or, in the world of the short story BLIT, that it's just some ink on a wall.

The human nervous system is incredibly robust compared to software, or to its counterpart in the fictional world of BLIT, or to shrimp mesmerized by cuttlefish.

And yet it has exploitable failure modes, and a corporation that is optimizing an LLM for various KPIs is a malign intelligence searching for a way to hack brains, this time with much better automated tooling and a very large budget. One may even say a superintelligence, since it is throwing the combined efforts of many at the problem.

edit: that is to say, there has certainly been something weird going on at the psychological level ever since Eliza.

Yudkowsky is a dumbass layman posing as an expert, and he's playing up his own old preconceived bullshit. But if he can get some of his audience away from the danger, even if he has to attribute a good chunk of the malevolence to a dumbass autocomplete to do so, that is not too terrible a thing.

[-] diz@awful.systems 9 points 1 month ago

I wonder what's gonna happen first, the bubble popping or Yudkowsky getting so fed up with gen AI he starts sneering.

[-] diz@awful.systems 8 points 3 months ago* (last edited 3 months ago)

He’s such a complete moron. He doesn’t want to recite “DEI shibboleths”? What does he even think that would refer to? Why shibboleths?

To spell it out, that would refer to an antisemitic theory that the reason (for example) some Black guy would get a Medal of Honor (the “DEI medal”) is because of the Jews.

I swear this guy is dumber than Trump. Trump, for all his rambling, uses actual language: Trump understands what the shit he is saying means to his followers. Scott… he really does not.

[-] diz@awful.systems 8 points 3 months ago* (last edited 3 months ago)

I just describe it as "computer Scientology, nowhere near as successful as the original".

The other thing is that he's a Thiel project, different from but not any more sane than Curtis Yarvin aka Moldbug. So if they've heard of Moldbug's political theories (which increasingly many people have, because of, well, those theories being enacted), it's easy to give a general picture of total fucking insanity funded by Thiel money. It doesn't really matter what the particular insanity is, and it matters even less now that the AGI shit has hit the mainstream entirely bypassing anything Yudkowsky had to say on the subject.

[-] diz@awful.systems 7 points 1 year ago

Frigging exactly. It's a dumbass dead end that is fundamentally incapable of doing the vast majority of things ascribed to it.

They keep imagining that it would actually learn some underlying logic from a lot of text. All it can do is store a bunch of applications of said logic, as in a giant table. Deducing underlying rules instead of simply memorizing particular instances of them is a form of compression; there wasn't much compression going on to begin with, and now that the models are so over-parametrized, there's even less.
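A toy way to see the table-versus-rule distinction (my own illustration, with addition standing in for "the logic"):

```python
# "Storing applications of the logic in a giant table" vs actually having
# the rule. The rule is a constant-size, compressed representation; the
# table only covers what it has already seen.

# "Giant table" model: memorizes 10,000 applications of addition.
table = {(a, b): a + b for a in range(100) for b in range(100)}

# "Compressed" model: the underlying rule itself.
def rule(a, b):
    return a + b

query = (123, 456)  # outside anything the table ever stored
print(table.get(query, "no idea"))  # -> no idea
print(rule(*query))                 # -> 579
```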

[-] diz@awful.systems 9 points 1 year ago

Perhaps it was nearly ready to emit a stop token after "the robot can take all 4 vegetables in one trip if it is allowed to carry all of them at once," but "However" won, and then after "However" it had to say something else, because that's how "however" works...
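The mechanics are easy to see in a toy sampler (the numbers here are invented purely for illustration, not anything the actual model outputs):

```python
import random

random.seed(0)

# Invented next-token distribution after the correct sentence: stopping
# is slightly preferred, but "However" still carries a lot of mass.
next_token = {"<eos>": 0.55, "However,": 0.40, "Also,": 0.05}

draws = random.choices(list(next_token), list(next_token.values()), k=1000)
print(draws.count("However,") / 1000)  # roughly 0.4

# Nearly half the samples commit the output to a "However," -- and every
# token after that is conditioned on it, so the model has to manufacture
# an objection to justify the word it already emitted.
```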

Agreed on the style being absolutely nauseating. It wasn't a very good style when humans were using it, but now it is just the style of absolute bottom-of-the-barrel, top-of-the-search-results garbage.

[-] diz@awful.systems 7 points 1 year ago

The counting failure in general is even clearer and lacks the excuse of unfavorable tokenization. The AI hype would have you believe just an incremental improvement in multi-modality or scaffolding will overcome this, but I think they need to make more fundamental improvements to the entire architecture they are using.

Yeah.

I think the failure could be extremely fundamental: local optimization of a highly parametrized model may simply be unable to properly learn counting (other than via memorization).

After all, there's a very large number of ways a highly parametrized model can do a good job of predicting the next token without actually counting. What makes counting special, versus memorization, is that it is a relatively compact representation, but there's no reason for a neural network to favor compact representations.

The "correct" counting may just be a very tiny local minimum, with tall hill all around it and no valley leading there. If that's the case then local optimization will never find it.

[-] diz@awful.systems 9 points 1 year ago* (last edited 1 year ago)

I think you can make a slight improvement to Wolfram Alpha: use an LLM to translate natural-language queries into queries WA can consume, then feed them into WA. WA always reports exactly what it computed, so if it "misunderstands" you, it's a lot easier to notice.
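Something like this minimal sketch, assuming the OpenAI Python client and Wolfram|Alpha's public Short Answers API; the model name and the prompt are placeholders of mine:

```python
# LLM only translates phrasing; Wolfram Alpha does the actual computing.
# Assumes OPENAI_API_KEY and WA_APPID are set in the environment.
import os
import requests
from openai import OpenAI

client = OpenAI()

def nl_to_wa_query(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any instruction-following model would do
        messages=[
            {"role": "system",
             "content": "Rewrite the user's question as a short, "
                        "unambiguous Wolfram Alpha query. Reply with "
                        "the query only."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

def ask_wolfram(query: str) -> str:
    # Wolfram|Alpha "Short Answers" API: plain-text result for a query.
    r = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": os.environ["WA_APPID"], "i": query},
        timeout=30,
    )
    r.raise_for_status()
    return r.text

question = "how heavy is a 10 cm length of 0.1 mm diameter copper wire?"
query = nl_to_wa_query(question)
print("WA query:", query)  # the "misunderstanding" is visible right here
print("WA answer:", ask_wolfram(query))
```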

The problem here is that the AI boys got themselves hyped up about it being actually intelligent, so none of them would ever settle for some modest application of LLMs. Google fired the authors of the "stochastic parrot" paper, AFAIK.

simply pasting LLM output into CAS input and then the CAS output back into LLM input (which, let’s be honest, is the first thing tech bros will try, as it doesn’t require much basic-research improvement) will not help that much and will likely generate an entirely new breed of hilarious errors and bullshit (I like the term bullshit instead of hallucination; it captures the connotation that the errors are of a kind with the normal output).

Yeah, I have examples of that as well. I asked GPT-4 at work to calculate the volume of a 10 cm long, 0.1 mm diameter wire. It seemed to be doing correct arithmetic by some mysterious means that does not use scientific notation, and then, since the LLM cannot actually count, it miscounted the zeroes and output a result 1000x larger than the correct answer.
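For reference, the calculation done the boring way, in one unit and scientific notation, where the zeroes can't get miscounted:

```python
import math

length = 10e-2     # 10 cm in meters
diameter = 0.1e-3  # 0.1 mm in meters

volume = math.pi * (diameter / 2) ** 2 * length
print(f"{volume:.3e} m^3")         # 7.854e-10 m^3
print(f"{volume * 1e9:.4f} mm^3")  # 0.7854 mm^3, under a cubic millimeter
```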

[-] diz@awful.systems 9 points 1 year ago

GPT-4, supposedly (it says that it is GPT-4). I have access to one that is cleared for somewhat sensitive data, so presumably my queries aren't getting flagged and human-reviewed by OpenAI.

