[-] locallynonlinear@awful.systems 4 points 10 months ago

Feel free to ask Michael in the comments of his blog; he frequently replies, helpfully, with references. I mean, all science is tentative, so skepticism is healthy.

[-] locallynonlinear@awful.systems 6 points 10 months ago

"priors updated" was the same desired outcome all along.

[-] locallynonlinear@awful.systems 5 points 10 months ago

In practice, alignment means "control".

And the existential panic is realizing that control doesn't scale. So rather than admit that the goal of "alignment" doesn't mean what they think it does, rather than admit that Darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in systems all around us and that we're constantly in tenuous relationships at the edge of uncertainty with all of it,

it's the end of all meaning aka the robot overlord.

[-] locallynonlinear@awful.systems 6 points 10 months ago

And as my senior dad likes to say, "Yin and Yang, baby."

[-] locallynonlinear@awful.systems 4 points 10 months ago* (last edited 10 months ago)

For what it's worth then, I don't think we're in disagreement, so I just want to clarify a couple of things.

When I say open system economics, I mean from an ecological point of view, not just the pay-dollars-for-product point of view. Strictly speaking, there is some theoretical price and a process, however gruesome, that could force a human into the embodiment of a bird. But from an ecosystems point of view, it raises the obvious question: why? Maybe there is an answer to why that would happen, but it's not a question of knowledge of a thing, or even of the process of doing it; it's the economics of the whole.

The same thing applies to human intelligence, however we plan to define it. Nature is already full of systems that have memory, that can abstract and reason, that can use tools, that are social, that are robust in the face of novel environments. We are unique, but not due to any particular capability; we're unique because of the economics and our relationship with all the other things we depend upon. I think that's awesome!

I only made my comment as a caution, though, because yes, I do think that overall people still put humanity and our intelligence on a pedestal, and I think that plays into rationalists' hands. I love being human and the human experience. I also love being alive, and part of nature, and the experience of the ecosystem as a whole. From that perspective, it would be hard for me to believe that any particular part of human intelligence can't be reproduced with technology, because to me it's already abundant in nature. The question for me, and for our ecosystem at large, is when it does occur,

what's the cost? What role will it have? What regulations does it warrant? What other behaviors will it exhibit? And also, I'm okay not being in control of those answers. I can just live with a certain degree of uncertainty.

[-] locallynonlinear@awful.systems 5 points 10 months ago

Yes, and ultimately this question of what gets built, as opposed to what is knowable, is an economics question. The energy gradients available to a bird are qualitatively different from those available to industry, or to individual humans. Of course they are!

There's no theoretical limit to how close a universal function approximator can get to a closed-system definition of something. A bird's flight isn't magic, or unknowable, or non-reproducible. If it were, we'd have no sense of awe at learning about it, studying it. Imagine if human-like intelligent behavior were completely unknowable. How would we go about teaching things? Communicating at all? Sharing our experiences?
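
To make the "universal function approximator" point concrete, here's a minimal sketch (Python/NumPy, with every name, size, and number chosen by me purely for illustration) of a tiny one-hidden-layer network being fit to sin(x). Nothing about the target matters; the point is only that arbitrarily close fits are available in principle, which says nothing about the economics of building or deploying the thing.

```python
import numpy as np

# Toy universal-approximation demo: fit a one-hidden-layer tanh network
# to sin(x) on [-pi, pi] with plain gradient descent on mean squared error.
rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

H = 32                                      # hidden units (illustrative size)
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                # forward pass
    pred = h @ W2 + b2
    err = pred - y                          # gradient of MSE w.r.t. pred (up to a constant)
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("max abs error after training:", float(np.max(np.abs(pred - y))))
```

More hidden units and more training buy a closer fit; what they don't buy is an answer to whether the fit is worth the energy, data, and upkeep in the surrounding ecosystem.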

But in the end, it's not just the knowledge of a thing that matters. It's the whole economics of that thing embedded in its environment.

I guess I violently agree with the observation, but I also take care not to put humanity, or intelligence in a broad sense, in some special, magical, untouchable place either. I feel it can be just as reductionist in the end to insist there is no solution as to say that any solution has its trade-offs and costs.

[-] locallynonlinear@awful.systems 6 points 10 months ago

It's a good interview, and I really like that it puts economics in perspective here. If I could pour cold water on AI hype in a succinct way, I'd say this: capability is, again, not the fundamental issue in nature. Open-system economics are.

There are no known problems that can't theoretically be solved, in a sort of pedantic "in a closed system, information always converges" sort of way. And there are numerous great ways of making such convergence efficient with respect to time, including, who knew, associative memory. But what does it mean? This isn't the story of LLMs or robotics or AI takeoff in general. The real story is the economics of electronics.
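
Since associative memory gets name-dropped: here's a minimal sketch of the idea (Python/NumPy, a classic Hopfield-style network; all sizes and names are mine, not anything from the interview). Store a few patterns with a Hebbian rule, then recover one of them from a corrupted cue in a handful of update steps.

```python
import numpy as np

# Minimal Hopfield-style associative memory: Hebbian storage of a few
# bipolar patterns, then recall of one pattern from a corrupted cue.
rng = np.random.default_rng(0)

def train(patterns):
    """Outer-product (Hebbian) weights with a zeroed diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    """Repeated synchronous updates; the state settles toward a stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1               # break ties toward +1
    return state

patterns = rng.choice([-1, 1], size=(3, 64))      # three random 64-unit patterns
W = train(patterns)

noisy = patterns[0].copy()
flipped = rng.choice(64, size=12, replace=False)  # corrupt ~20% of the bits
noisy[flipped] *= -1

recovered = recall(W, noisy)
print("bits matching the stored pattern:", int((recovered == patterns[0]).sum()), "/ 64")
```

The recall step is cheap because the memory falls toward a stored state rather than searching for it; whether that kind of trick scales is, again, an economics question rather than a capability one.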

Paradoxically, just as electronics is hitting its stride in terms of economics, the basic infrastructural economics of the entire system are becoming strained. For all the exponential growth in one domain, there has been exponential growth in costs in others. Such is the nature of ecosystems and open-system dynamics.

I do think that there is a future of more AI. I do think there is a world of more electronics. But I don't claim to predict any specifics beyond that. Sitting in the uncertainty of the future is the hardest thing to do, but it's the most honest.

[-] locallynonlinear@awful.systems 5 points 10 months ago

True, there's value. But I think if you try to measure that value, it disappears.

A good postmortem puts the facts on the table and leaves the team to evaluate options. I don't think any good postmortem should include apologies or ask people to settle social conflicts directly. One of the best tools a postmortem has is "we're going to work around this problem by reducing the dependency on personal relationships."

[-] locallynonlinear@awful.systems 5 points 11 months ago

Probably has something to do with the whole "We definitely know that race is a strong determinant of humanity, but we acknowledge that race isn't the only determinant if you also already have money or influence and could help us."

[-] locallynonlinear@awful.systems 5 points 11 months ago

Is this an "enemy of my enemy is my friend" situation? Pinker's naive-optimism bubble is not exactly a perspective I 100% endorse either, but hey 🤷

Because we all know Bob won't just fuxking wipe his ass in private. He needs to know we saw it all.

[-] locallynonlinear@awful.systems 5 points 1 year ago* (last edited 1 year ago)

> I want to live in space where it's safer.

Good, we feel the same way about that.

