Help.
People personifying and emotionally attaching themselves to a virtual avatar with no human voice is a slippery slope toward LLM love. At that point you're attached to something with zero apparent human characteristics, and swapping the brain behind it for an LLM is only a small step further. Specialized chat LLMs are incredibly good at seeming human.
Just using TTS by itself for random stuff is different, but when you build an entire character around it that people are supposed to see as a person, then yes, obviously.
The more human characteristics you remove, the harder it is to differentiate between something human and something non-human. Whether that's a problem is up to you to decide.
[[citation needed]]
people are not stupid. there is a difference between Zentreya and ELIZA.
Not really, no. Not everything needs a citation. This was never an attempt at writing a scientific paper in a comment, just my own observation of people and internet culture. Make your own judgments.
Also yes, I was thinking of things like Zentreya (idk what ELIZA is). There is a difference between each step in the progression I laid out in my first comment, but it is nonetheless all on the same axis of incremental removal of human characteristics.
To the average occasional viewer, the perceived jump from Zentreya to Neuro-sama isn't very big.
ELIZA is a chatbot from the 1960s that people fell in love with despite it only replying to things like "i am thinking about X" with "what do you think about X?". your order of events is backwards, which is why i need a citation that it works the way you say.
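roughly, the whole trick was a handful of keyword-reflection rules like this (a toy sketch in Python; the patterns and pronoun table are made up for illustration, not the actual DOCTOR script):

```python
import re

# a toy ELIZA-style responder: match a keyword pattern, swap the
# pronouns in the captured phrase, and echo it back as a question.
PRONOUN_SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(phrase: str) -> str:
    """swap first-person words for second-person ones."""
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in phrase.lower().split())

def respond(user_input: str) -> str:
    """return a reflected question if a rule matches, else a stock prompt."""
    match = re.match(r"i am thinking about (.+)", user_input, re.IGNORECASE)
    if match:
        return f"What do you think about {reflect(match.group(1))}?"
    match = re.match(r"i feel (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I am thinking about my future"))     # What do you think about your future?
print(respond("I feel like nobody listens to me"))  # Why do you feel like nobody listens to you?
```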
you're talking about the ELIZA effect, named after it, but conflating two things: ascribing human characteristics to a machine, and assuming that humans with machine characteristics are the same thing, which is a bit ableist.
I didn't say it was "the same thing"; I said they're two things next to each other on the same trend axis: people requiring less and less to see something as representing a human interaction.
And yes, of course a video of a virtual avatar with a TTS voice is less human-like than a video of a real person with a real voice. That's not ableist, that's just reality. Questioning the humanity of the person behind those things would be ableist. In the first place, none of the things I originally listed are actually human, because they are fucking video files on the internet. I'm not talking about the people behind them, but about the content they create and how closely it resembles the experience of sitting across the room from a real, physical human.
But back to the topic. Morally, of course, there is a meaningful jump between "has a human behind it" and "no human involved at all", but if the user doesn't care or can't tell the difference, then that difference might as well be nonexistent when it comes to how society treats both. If people start treating robots like humans, then what's the difference between humans and robots?
Well, it's the fact that one IS human and the other is NOT, and I think it's important not to blur that line too much. At the end of the day, people seem very fucking willing to blur that line, and that is actually a big sociological problem that is bound to become a legal problem sooner or later.
let's just blame my use of "conflating" on ESL and move on.
i think the two things you're saying follow from each other are actually two different ideas converging. it's worth keeping an eye on, but as i said, i don't believe it's a linear relationship. people have been falling in love with robots since before robots could express themselves, no slippery slope needed. so i don't think that's the sticking point.
Yeah you might be right. Thanks for your responses either way :)