[-] diz@awful.systems 3 points 3 weeks ago* (last edited 3 weeks ago)

I think the question of "general intelligence" is kind of a red herring. Evolution, for example, creates extremely complex organisms and behaviors, all without any "general intelligence" working towards some overarching goal.

The other issue with Yudkowsky is that he's an unimaginative fool whose only source of insights on the topic is science fiction, which he doesn't even understand. There is no fun in having Skynet start a nuclear war and then itself perish in the aftermath, as the power plants it depends on cease working.

Humanity itself doesn't possess that kind of intelligence envisioned for "AGI". When it comes to science and technology, we are an all-powerful hivemind. When it comes to deciding what to do with said science and technology, we are no more intelligent than an amoeba crawling along a gradient.

[-] diz@awful.systems 4 points 3 weeks ago* (last edited 3 weeks ago)

To argue by analogy, it’s not like getting an artificial feather exactly right was ever a bottleneck to developing air travel once we got the basics of aerodynamics down.

I suspect that "artificial intelligence" may be a bit more like making an artificial bird that self replicates, with computers and AI as it exists now being somewhere in-between thrown rocks and gliders.

We only ever "beat" biology by cheating, by removing a core requirement of self-replication. An airplane factory that had to scavenge for all the rare elements involved in making a turbine would never fly. We have never actually beaten biology on anything.

That "cheat code" shouldn't be expected to apply to skynet or ASI or whatever, because skynet is presumably capable of self replication. Would be pretty odd if "ASI" would be the first thing that we actually beat biology on.

[-] diz@awful.systems 4 points 3 weeks ago* (last edited 3 weeks ago)

The thing about the synapses-etc. argument is that the hype crowd argues that perhaps the AI could wind up doing something much more effective than whatever-it-is-that-real-brains-do.

If you look at capabilities, however, it is inarguable that "artificial neurons" seem intrinsically a lot less effective than real ones, if we consider small animals (e.g. a jumping spider, a bee, or even a roundworm).

It is a rather unusual situation. When it comes to things like e.g. converting chemical energy to mechanical energy, we did not have to fully understand and copy muscles to be able to build a steam engine that has higher mechanical power output than you could get out of an elephant. That was the case for arithmetic, too, and hence there was this expectation of imminent AI in the 1960s.

I think it boils down to intelligence being a very specific thing evolved for a specific purpose, less like "moving underwater from point A to point B" (which submarine does pretty well) and more like "fish doing what fish do". The submarine represents very little progress towards fishiness.

[-] diz@awful.systems 3 points 1 month ago* (last edited 1 month ago)

To be entirely honest I don’t even like the arguments against EDT.

Smoking lesion is hilarious. So there's a lesion that is making people smoke. It is also giving them cancer in some unrelated way which we don't know, trust me bro. Please bro don't leave this decision to the lesion, you gotta decide to smoke, it would be irrational to decide not to smoke if the lesion's gonna make you smoke. Correlation is not causation, gotta smoke, bro.

Obviously, in that dumbass hypothetical, the conditional probability you care about is conditional on the decision, not on the lesion, while the excess cancer among smokers is conditional on the lesion, not on the decision. If those two really were indistinguishable, then the right decision would be not to smoke. And more generally, adopting causal models without statistical data to back them up is called "being gullible".
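A toy simulation (with made-up numbers, just to illustrate the structure of the hypothetical) shows how the lesion alone can carry the entire smoking-cancer correlation while the decision to smoke does nothing:

```python
import random

# Toy model of the smoking-lesion hypothetical. All probabilities are
# invented for illustration: the lesion drives both the urge to smoke
# and the cancer risk, while smoking itself does nothing.

def simulate(n=100_000, seed=0):
    rng = random.Random(seed)
    cancer_given_smoker = [0, 0]      # [cancer cases, total] among smokers
    cancer_given_nonsmoker = [0, 0]   # [cancer cases, total] among non-smokers
    for _ in range(n):
        lesion = rng.random() < 0.10                        # 10% have the lesion
        smokes = rng.random() < (0.95 if lesion else 0.05)  # lesion -> smoking
        cancer = rng.random() < (0.80 if lesion else 0.01)  # lesion -> cancer
        bucket = cancer_given_smoker if smokes else cancer_given_nonsmoker
        bucket[0] += cancer
        bucket[1] += 1
    print("P(cancer | smokes)     ~", cancer_given_smoker[0] / cancer_given_smoker[1])
    print("P(cancer | not smokes) ~", cancer_given_nonsmoker[0] / cancer_given_nonsmoker[1])

simulate()
# Comes out around 0.55 vs 0.015, yet flipping the "smokes" variable by
# decision (rather than via the lesion) would not change anyone's cancer
# risk in this model: the correlation is carried entirely by the lesion.
```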

The tobacco companies actually did manufacture the data, too; that's where "type-A personality" comes from.

[-] diz@awful.systems 5 points 1 month ago* (last edited 1 month ago)

Tbh whenever I try to read anything on decision theory (even written by people other than rationalists), I end up wondering how they think a redundant autopilot (with majority vote) would ever work. In an airplane, that is.

Considering just the physical consequences of a decision doesn't work (unless there's a fault, a single channel's decision doesn't make it through the voting electronics on its own, so in the no-fault case the alternative decisions it could make never make it through at all).

Having each one simulate the two or more other autopilots is sci-fi-brained idiocy. Requiring that the autopilots be exact copies is stupid (you might want two different teams writing different implementations; I think Airbus actually sort of did that).

Nothing is going to be simulating anything, and, to make matters even worse for philosophers, amateur and academic alike, the whole reason for redundancy is that sometimes there is a glitch that makes them not compute the same values, so any attempt to be clever with "ha, we just treat copies as one thing" doesn't cut it either.
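For what it's worth, the voting part itself is trivial; here's a minimal sketch (hypothetical numbers and a bare median-select voter, nothing resembling real avionics) of why no single channel's decision reaches the actuator on its own:

```python
from statistics import median

def vote(channel_outputs):
    """Median-select across redundant channels: one divergent output,
    whether from a fault or a different implementation, cannot move
    the selected command on its own."""
    return median(channel_outputs)

# Three independently implemented channels compute, say, an elevator
# command from the same sensor inputs (values made up for illustration).
all_healthy = [2.01, 2.00, 1.99]
one_faulty = [2.01, 47.0, 1.99]   # one channel glitches badly

print(vote(all_healthy))  # 2.00
print(vote(one_faulty))   # 2.01 -- the outlier never reaches the control surface
```

Any one channel's "decision" only matters insofar as it shifts the median, which is exactly the situation where reasoning about the physical consequences of that decision in isolation falls apart.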

[-] diz@awful.systems 3 points 1 month ago* (last edited 1 month ago)

Well yeah, but the new-age ones overthink everything. Edit: I suspect you could probably find one of them spelling it out.

[-] diz@awful.systems 4 points 1 month ago* (last edited 1 month ago)

Embryo selection may just be the eugenicist's equivalent of greenwashing.

Eugenicists doing IVF is kind of funny, since it is a procedure that circumvents natural selection quite a bit, especially for the guys. It's what, something like a billion to one for the sperm?

If they're doing IVF while being into eugenics, they need someone to tell them that they aren't "worsening the species", and embryo selection provides just that.

edit: The worst part would be if people who don't need IVF start doing IVF with embryo selection, expecting some sort of benefit for the offspring. With the American tendency to sell people unnecessary treatments and procedures, I can totally see that happening.

[-] diz@awful.systems 1 point 2 months ago

So it got them so upset, presumably, because they thought it mocked the basilisk incident, I guess with Roko as Laurentius and Yudkowsky as the other guy?

[-] diz@awful.systems 3 points 3 months ago

I don't think we need to go as far as evopsych here... it may just be an artifact of modeling the environment at all: you learn to model other people as part of the environment, and you re-use models across people (some people are mean, some people are nice, etc.).

Then weather happens, and you got yourself a god of bad weather and a god of good weather, or perhaps a god of all weather who's bipolar.

As far as language goes, it also works the other way: we overused these terms in application to computers, to the point that, in relation to computers, "thinking" no longer means it is actually thinking.
