Thanks for that article, it was a very interesting read! I think we're mostly agreeing about things :) This bit from it stood out to me as an encapsulation of the conversation:
"Statistics" is probably an insufficient term for what these things are doing, but it's helpful to pull the conversation in that direction when a lay person using one of those things is likely to assume quite the opposite, that this really is a person in a computer with hopes and dreams. But I agree that it takes more than simply consulting a table to find the most likely next word to, to take an earlier example, write a haiku about Danny DeVito. That's synthesizing two ideas together that (I would guess) the model was trained on individually. That's very cool and deserving of admiration, and could lead to pretty incredible things. I'd expect that the task of predicting words, on its own, wouldn't be stringent enough to force a model to develop "true" intelligence, whatever that means, to succeed during training, but I suppose we'll find out, and probably sooner than we expect.
Well put! I think I kinda misunderstood what you were saying; I guess we sort of reached the same conclusion from different directions. And yeah, it does seem like we're hitting the limits of what can be achieved from the current underlying word-prediction mechanisms alone, given how diminishing the returns from dumping in more data have become. Maybe something big will happen soon, but it looks to me like LLMs will stagnate for a while until they're taken in a fundamentally new direction.
Either way, what they can do now is pretty incredible, and equally interesting to me is how it's making us reevaluate our ideas of consciousness and intelligence on a large scale; it's one thing to theorize about what could happen with an 'intelligent' AI, but the reality of these philosophical questions being so thoroughly challenged and dissected in mundane legal and practical matters is wild.