I find it fascinating how oblivious people pretend to be about what our natural social hierarchies are, offering fringe speculations ranging from proto-capitalism through alpha-male fantasies to proto-communism.
Maybe it's too obvious, or too boring, but it's families. Incidentally, the same happens to be true of actual, natural wolf packs.
I don't disagree with most of what you wrote; just one nitpick and a comment:
No, but rather the product of all that; all of that would be a means to the end that is its product. I elaborated on this in a reply to the comment you wrote just before this one.
That would undoubtedly be very good, but let me take this opportunity to clarify something about what AI is and isn't: LLMs are indeed just autocomplete on steroids. And humans are indeed just [replicate] on steroids. LLMs are just transistors switching, and humans are just molecular dynamics.
The real question is what succeeding at the objective (replicate, for humans; predict text, for LLMs) implies, irrespective of the underlying substrate (molecular dynamics, semiconductors), unless we want to make this debate religious, which I am not qualified to participate in. The human objective implied, clearly, everything you can see of humanity. The LLM objective implies modeling and emulating human cognition. Not perfectly, and not all of it, but enough of it that it should be a greater ethical issue than most people, on any side (commercial: deny it because of business; anti-AI: deny it because acknowledging it feels trivializing), are willing to admit.