I think there's a second, unstated issue at play here: you're experiencing a very deep cognitive bias. An exploit in the human brain.
The human brain is a fantastically complex piece of meat, but one of its many quirks is anthropomorphic bias: the tendency to ascribe human traits, especially agency and cognition, to things or animals that do not have them.
We tend to believe that if it walks like a duck and talks like a duck, it must be a duck. ChatGPT is a very complex, highly specialized algorithm that outputs text just like another online human... but 100% of it is a model processing your input and returning an output. It talks like a human, yet it's more akin to Notepad than it is to us.
To be clear: that bias exists in everyone. We all do this. Anytime I talk about my dog scheming to get my attention, I'm hitting that bias. Anytime my robot vacuum interrupts me while I'm doing the dishes, I talk at it and tell it to go away. I interact with the world around me as though most things are human.
To be fair, and to elaborate on that point: your dog is much, much closer to human than ChatGPT is; we share something like 84% of our DNA. Most of the same basic emotions, like hunger, fear, and desire, are present, as well as the ability to learn and communicate.
Your dog may not be "scheming", because it lacks the ability to plan very far into the future, but it definitely has the intention of getting your attention and tries to figure out in the moment how to do it, same as a human kid might.
It is genuinely useful to act like a dog is human, because dogs actually do share a lot of our characteristics. Not all of them, of course; it's still wrong to fully assume a dog is human, but as a quick heuristic it's valuable a lot (84%? :D) of the time.
Sure, I get that they're not exactly the same. ChatGPT is orders of magnitude more removed from humanity than a dog, but a dog is a daily example of anthropomorphic bias that is relatable and easy to understand. I was just using it as an example.
when the chat bot starts using my DNA I'm killing it
Yep, this is a very good explanation. Seeing ChatGPT "talk" is immediately associated with sentience, because for your entire life, and for millions of years of evolution, speech was in 99.9% of cases a sign of sentience. So your brain doesn't even consider it a question until you consciously stop to think about it.
An interesting way to anthropomorphize GPT that's still technically correct is to think of it as having essentially perfect memory. It doesn't know how to talk, but it has seen so much conversation (trillions of words' worth) that it can recognize the patterns that make up speech and simply "remember" what the most likely combination of words is, given the context, with zero actual "understanding" of language. (Human trainers then fine-tune those guesses to give you the ChatGPT experience.)
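To make that "remembering patterns" idea concrete, here's a deliberately tiny sketch in Python: a bigram model that has "seen" one sentence of text and, given a word, just replays whichever continuation followed it most often. Real models use transformers over tokens rather than a lookup table, so treat this purely as an illustration of prediction-without-understanding:

```python
from collections import Counter, defaultdict

# A toy "training corpus". The model will never understand any of it;
# it only counts which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    # No grammar, no meaning: just the most frequently seen continuation.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice, vs. "mat" and "fish" once each)
```

Scale the counting up by a dozen orders of magnitude and replace the table with learned weights, and you get something that produces fluent text while still doing nothing but pattern recall.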
ChatGPT also fudges "memory" by feeding all previous prompts and replies (up to a token limit) back in along with whatever you've said latest, which improves the pattern matching.
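A rough sketch of that trick, assuming a made-up client-side helper (the real tokenizer is more involved; the 4-characters-per-token estimate and the budget are just illustrative heuristics):

```python
MAX_CONTEXT_TOKENS = 4096  # assumed budget; real limits vary by model

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return len(text) // 4 + 1

def build_prompt(history: list[str], latest: str) -> str:
    """Re-send the newest turns that fit; older turns fall out of 'memory'."""
    kept = [latest]
    budget = MAX_CONTEXT_TOKENS - rough_token_count(latest)
    for turn in reversed(history):
        cost = rough_token_count(turn)
        if cost > budget:
            break  # everything older than this point is simply forgotten
        kept.insert(0, turn)
        budget -= cost
    return "\n".join(kept)
```

The model itself is stateless: its apparent memory of the conversation is nothing more than the history being pasted back in front of your latest message on every turn.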
The best way to do this is to ask it to ask you questions.
Just to make clear, because it seems to come up a lot in some responses: I absolutely don't think (and never have) that ChatGPT is intelligent, or that it 'understands' what I'm saying to it or what it's saying to me (let alone is accurate!). Older chatbots were very prone to getting stuck in weird loops or making sudden context/topic switches. ChatGPT doesn't do this very often, and I was wondering what the mechanism is for keeping its answers plausibly connected to the topic under discussion and avoiding grammatical cul-de-sacs.
I know it's just a model; I want to understand the difference between its predictions and the predictions my Android keyboard makes. Is it simply considering the entire previous text as it makes its predictions, versus just the last few words? Why doesn't it occasionally respond with a hundred-thousand-word response? Many of the texts it's trained on are longer than its usual responses. There seem to be some limits and guidance, given either through its training data or its response training, that guide it beyond "based on the texts I have seen, what is the most likely word?", and I was curious whether there's a summary of what blend of corpus-based prediction, response feedback, etc. has been used.
Software engineer here, but not an LLM expert. I want to address one of the questions you had there.
An LLM like ChatGPT does some rudimentary level of pattern matching when it analyzes training data. This is why it won't generate a giant blurb of text unless you ask it to.
Let's say, for example, that one of its training inputs is a transcription of a conversation. That will be tagged "conversation" by a person. Then it will see that tag when analyzing hundreds of input texts that are conversations. Finally, the training algorithm writes down that "conversation" texts have responses of 1-2 sentences with x% likelihood, because that's what the transcripts did. Now, if another of the training sets is "best-selling novels", it'll store that best-selling novels have responses that are very long.
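As a toy illustration of that idea (a real model learns these statistics implicitly in its weights rather than in an explicit table, and the tags and texts here are invented):

```python
from statistics import mean

# Invented, human-tagged "training data".
training_examples = [
    {"tag": "conversation", "response": "Sure, sounds good."},
    {"tag": "conversation", "response": "Yeah, I think so too."},
    {"tag": "novel", "response": "It was a dark and stormy night and " * 40},
]

# Tally response lengths per tag, mimicking the statistic described above.
lengths_by_tag: dict[str, list[int]] = {}
for example in training_examples:
    word_count = len(example["response"].split())
    lengths_by_tag.setdefault(example["tag"], []).append(word_count)

for tag, lengths in lengths_by_tag.items():
    print(f"{tag}: ~{mean(lengths):.0f} words per response")
```

Running this prints a short average for the "conversation" tag and a long one for "novel", which is the shape of the bias the commenter is describing: text that looks conversational gets conversational-length continuations.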
ChatGPT will probably insert a couple of tokens before your question to help it figure out what it's supposed to respond with, something like: "Respond to the user as if you are in a casual conversation."
This makes the model more likely to output short answers rather than giving you a giant wall of text. However, it is still possible for the model to respond with a giant wall of text if you ask something that contradicts the original instructions (hence why jailbreaking models is possible).
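For what that "inserted tokens" step can look like in practice, here's a sketch using the OpenAI Python client. The system message is an invented stand-in (ChatGPT's real hidden instructions aren't public) and the model name is an assumption:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model works here
    messages=[
        # Hidden instruction prepended by the application, not typed by the user:
        {"role": "system",
         "content": "Respond to the user as if you are in a casual conversation."},
        {"role": "user", "content": "Explain how rainbows form."},
    ],
)
print(response.choices[0].message.content)
```

The user never sees the system message, but the model conditions on it just like on any other text in the context, which is also why a sufficiently persuasive user message can sometimes override it.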