
I hear people saying things like "chatgpt is basically just a fancy predictive text". I'm certainly not in the "it's sentient!" camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it's predicting word by word within a bunch of constraints & structures inferred from the question / prompt, that's pretty interesting. Tbh, I'm more impressed by ChatGPT's ability to appear to "understand" my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I've asked / said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

[-] SorteKanin@feddit.dk 29 points 9 months ago

it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

But that is all that's going on. It has just been trained on so much text that the predictions "learn" the grammatical structure of language. Once you can form coherent sentences, you're not that far from ChatGPT.

The remarkable thing is that prediction of the next word seems to be "sufficient" for ChatGPT's level of "intelligence". But it is not thinking or conscious, it is just data and statistics on steroids.
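The "just predicting the next word" idea can be sketched with a toy bigram model. This is a hugely simplified stand-in (real LLMs use neural networks conditioned on long contexts, not lookup tables), but it shows the core loop: count what tends to follow what, then emit the most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" followed "the" twice, "mat" once -> cat
```

Scale the training text up by many orders of magnitude and replace the lookup table with a neural network, and this same "predict the next token" objective is what ChatGPT is trained on.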

[-] datavoid@lemmy.ml 12 points 9 months ago

Try to use it to solve a difficult problem and it will become extremely obvious that it has no idea what it is talking about.

Yup. I used it to try to figure out why our Java code was getting permission denied on jar files, despite the files being owned by the user running the code and having 777 permissions, while upgrading from RHEL 7 to 8.

It gave me some good places to check, but the answer was that RHEL 8 introduces fapolicyd alongside SELinux (which I found myself on some tangentially related Stack Exchange post).

[-] Dran_Arcana@lemmy.world 8 points 9 months ago

The magic sauce is context length within reasonable compute constraints. Phone predictive text has a context length of like 2-3 words; ChatGPT (and other LLMs) have figured out how to do predictions on thousands or tens of thousands of words of context at a time.
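The difference in context length is easy to picture as the size of the window of recent tokens the predictor gets to condition on (toy sketch; the window sizes below are illustrative, not real product numbers):

```python
def context_window(tokens, max_context):
    """Keep only the most recent tokens that fit in the window --
    this is all the model gets to condition its next prediction on."""
    return tokens[-max_context:]

prompt = "please summarise the long document I pasted above in two sentences".split()

# A phone keyboard conditions on only the last couple of words...
print(context_window(prompt, 2))     # ['two', 'sentences']

# ...while an LLM conditions on thousands of tokens at once,
# so the whole prompt (and much more) fits in the window.
print(context_window(prompt, 4096))
```

With a 2-word window the predictor has no idea a document or a question exists; with a multi-thousand-token window it can "see" the entire conversation when picking the next word.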

[-] doublejay1999@lemmy.world 4 points 9 months ago

Is that why it's compute heavy?

[-] Dran_Arcana@lemmy.world 7 points 9 months ago

Correct, and the sheer size of the models that learn those long-context associations is why you need tens to hundreds of gigabytes of RAM/VRAM. Disk would be too slow.
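A back-of-the-envelope check on those memory figures: the footprint is dominated by the model's weights, which must sit in RAM/VRAM for fast access. The parameter counts below are illustrative round numbers, assuming 16-bit (2-byte) weights and ignoring activations and KV cache:

```python
def weight_memory_gb(n_params, bytes_per_param=2):
    """GB needed just to hold the weights at the given precision."""
    return n_params * bytes_per_param / 1e9

# Illustrative model sizes at 16-bit precision
for n in (7e9, 70e9):
    print(f"{n/1e9:.0f}B params -> {weight_memory_gb(n):.0f} GB")
# 7B params -> 14 GB
# 70B params -> 140 GB
```

That's why even "small" open models want a beefy GPU, and the largest ones need tens to hundreds of gigabytes spread across several.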

[-] LesserAbe@lemmy.world 5 points 9 months ago

I think this explanation would be more satisfying if we had a better understanding of how the human brain produces intelligence.

[-] SorteKanin@feddit.dk 3 points 9 months ago

I agree. We don't actually know that the brain isn't just doing the same thing as ChatGPT. It probably isn't, but we don't really know.

[-] Dran_Arcana@lemmy.world 0 points 9 months ago

Considering that we can train digital statistical models to read thoughts via brain scans, I think it's more likely than not that we are more similar than different.

this post was submitted on 20 Jan 2024
79 points (92.5% liked)

No Stupid Questions
