When you look at a coffee cup from the side, you know it has a hole in it. That's because you imagine it, not because it's a reflex.
An LLM is basically a point cloud of words. The training uses neural networks, and thus pattern recognition, but the LLM itself is closer to a database. Then again, SQL is also useful for AI (data storage/retrieval according to logic).
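For what it's worth, here's a toy sketch of the "point cloud" idea in Python. The vectors are invented purely for illustration; real embeddings are learned from data and have hundreds or thousands of dimensions:

```python
# Toy "point cloud of words": each word is a point in a vector space,
# and lookup is nearest-neighbour search by cosine similarity.
# These vectors are made up for illustration, not from a real model.
import numpy as np

embeddings = {
    "coffee": np.array([0.9, 0.1, 0.3]),
    "tea":    np.array([0.8, 0.2, 0.3]),
    "sql":    np.array([0.1, 0.9, 0.7]),
}

def nearest(word: str) -> str:
    """Return the closest other word by cosine similarity."""
    q = embeddings[word]
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in embeddings if w != word),
               key=lambda w: cos(q, embeddings[w]))

print(nearest("coffee"))  # -> "tea": nearby points mean related words
```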
I'm not an LLM expert, by far. But right now they're not much more practical than a helper for finding out about things.
Edit: I do like them. They've been helpful a couple of times, and I even installed GPT4All on my computer for fun.
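If anyone wants to poke at GPT4All from Python, it ships official bindings. A minimal sketch along these lines (the model filename is just the example from their quickstart and gets downloaded on first use):

```python
# Minimal local inference with the gpt4all Python bindings.
from gpt4all import GPT4All

# Example model name from the GPT4All quickstart; downloaded on first run.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    print(model.generate("Why does a coffee cup have a hole?", max_tokens=128))
```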
You're looking at this backwards: you know those things because of previous experiences, and you predict this might happen because of them.
It's still a matter of prediction, and if that had never happened to you even once, I guarantee you wouldn't look for it.
They're also significantly smaller than our brains, and multimodality has been shown to help with reasoning, so, given that they're text-only and much smaller, their significantly reduced functionality is to be expected. Especially when you factor in that our brain has verification layers, which have only recently been shown to work for LLMs; as far as I'm aware, none of them implement this yet. A rough sketch of the idea is below.
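To make "verification layer" concrete, here's one possible shape of a generate-then-verify loop, roughly in the spirit of self-critique schemes. None of this is a real library API; the `ask` callable is a hypothetical stand-in for whatever model call you'd actually use:

```python
# Hedged sketch of a generate-then-verify loop -- one plausible form of a
# "verification layer". `ask` is any callable that sends a prompt to a
# model and returns text; the lambda at the bottom is a stand-in so the
# sketch runs, not a real model.
from typing import Callable

def answer_with_verification(question: str,
                             ask: Callable[[str], str],
                             max_retries: int = 2) -> str:
    draft = ask(question)
    for _ in range(max_retries):
        # Second pass: the model checks its own draft.
        verdict = ask(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Reply OK if the draft is correct, otherwise reply with a fix."
        )
        if verdict.strip() == "OK":
            break
        draft = verdict  # accept the correction, then re-check it
    return draft

# Stand-in model so the example executes; a real one would call an LLM.
print(answer_with_verification(
    "What is 2 + 2?",
    lambda prompt: "OK" if "Draft answer" in prompt else "4",
))
```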