LLMs fit in the "weak AI" category. If I could redefine the term, I'd be inclined not to call them "AI" at all, since there's no actual intelligence, just the illusion of it. It's possible to build genuinely intelligent AI, but probabilistic text construction isn't even close.
What does intelligent AI that we can currently build look like?
There's "can build" and "have built". The basic idea is about continuously aggregating data and performing pattern analysis and basically cognitive schema assimilation/accommodation in the same way humans do. It's absolutely doable, at least I think so.
I haven't heard of cognitive schema assimilation. That sounds interesting. It sounds like it might fall prey to challenges we've had with symbolic AI in the past though.
It's a concept from psychology. Instead of just being a model of linguistic construction, the model has to actually be a comprehensive, data-forged model of reality, at least as far as human observation goes (or as far as we care about). In poorly tuned, low-information scenarios it would fall into mostly the same traps humans do (e.g. falling for propaganda or pseudoscientific theories), but if finely tuned it should arrive at accurate theories and even predictive results, given an expansive enough domain.
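In terms of the toy sketch above (again, just an illustration of the idea, not a real system), the "tuning" is basically how much conflicting evidence the model demands before it restructures a schema:

```python
# Poorly tuned: one contrary claim is enough to overwrite an established schema.
gullible = SchemaModel(accommodation_threshold=1)
for _ in range(10):
    gullible.observe("vaccines", "safe")
gullible.observe("vaccines", "dangerous")   # a single piece of propaganda
print(gullible.predict("vaccines"))         # "dangerous" -- it fell for it

# Finely tuned: isolated contradictions are absorbed; only sustained evidence
# forces the schema to be revised.
careful = SchemaModel(accommodation_threshold=25)
for _ in range(10):
    careful.observe("vaccines", "safe")
careful.observe("vaccines", "dangerous")
print(careful.predict("vaccines"))          # still "safe"
```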