A computer can never be held accountable
(simonwillison.net)
On morality:
Shifting the goalposts from "it's pessimistic" to "it depends on criteria".
I was being simplistic to keep it less verbose. I didn't talk about consistency, for example, even though it also matters.
By "reward/punishment" I mean something closer to behaviourism than to law. I could've called them "negative stimulus" and "positive stimulus" too; the outcome is the same.
For the bigots implementing and supporting such idiotic laws? Yes, they consider being gay immoral. That's the point for them.
For gay people? It throws them into a catch-22, because following such a law is also punishing: they're forced into celibacy and prevented from expressing their sexuality and sexual identity.
So even considering your example, rooting morality in punishment and reward still works. And you can even draw a few conclusions from it:
Back to AI. Without the ability to be rewarded or punished, not even a hypothetical AGI would be a moral agent. At most it would be able to talk about the topic, but not generate a consistent set of moral rules for itself. And no: feeding a model, tweaking its parameters, etc. are clearly not reward/punishment.
That's circular reasoning, given that the "correct decisions" will be dictated by morality.
That does not address the request.
I'll rephrase it: since you disagree that moral values ultimately come from reward and punishment, I asked where you think they come from. For example, plenty of the moral philosophies that you mentioned root their moral values in superstitions, like "God" or similar crap.
Language and reasoning:
That's likely false, and also bullshit (i.e. a claim made up with no regard to its truth value).
While language and reasoning do interact with each other, "there is a substantial and growing body of evidence from across experimental fields indicating autonomy between language and reasoning" (paywall-free link).
That already dismantles your argument on its central point. But I'll still dig further into it.
They don't even capture language as a whole, let alone a different system like reasoning.
They output decent grammar and vocabulary, but they handle meaning (semantics) and utterance purpose (pragmatics) notoriously poorly. They show blatant signs of not knowing what they are outputting.
You can test this yourself: ask any LLM-powered chatbot of your choice about some topic that you know by heart, then look at the incorrect answers and ask yourself why the bot is outputting that wrong piece of info ("hallucination").
Example here.
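If you'd rather run that test programmatically, here's a minimal sketch assuming the OpenAI Python client; the model name and the prompt are just placeholders, and any chat API would work the same way:

```python
# Minimal sketch: probe a chatbot on a topic you know well and inspect its answer.
# Assumes the OpenAI Python client and an API key in the OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pick a niche question whose answer you can verify by heart.
question = "List the studio albums of <a band you know well>, with release years."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whatever model you're testing
    messages=[{"role": "user", "content": question}],
)

# Compare the output against what you actually know: the wrong items are the "hallucinations".
print(response.choices[0].message.content)
```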
Except the fact that it directly contradicts your claim.
Emphasis mine.
This is the cherry on the cake, because it shows that you're wasting my time on a subject that you're completely clueless about. I'm saying that those systems are amoral, not immoral.
Sorry to be blunt, but I'm not wasting any more of my time with you.