this post was submitted on 22 Jul 2023
167 points (85.5% liked)
Asklemmy
It's not bullshit. It routinely does stuff we thought might not happen this century. The catch is that we don't understand how. At all. We know enough to build it, and from there it's a magical black box. For that reason it's hard to be certain it will keep getting better, although there's no reason it couldn't.
That goes back to the "not knowing how it works" thing. ChatGPT predicts the next token, and has learned other things in order to do that better. There's no obvious way to force it to care whether its output is right or just right-looking, though. Until we solve that problem somehow, it's more of an assistant for someone who can read and understand what it puts out. Kind of like a calculator, but for language.
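To make the "predicts the next token" point concrete, here's a toy sketch using a hypothetical bigram table in place of learned weights (this is not how ChatGPT is implemented internally, just the shape of the decoding loop). Note that nothing in the loop ever checks whether the output is *true*, only which continuation scores highest:

```python
# Hypothetical bigram counts standing in for a trained model's weights.
BIGRAMS = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def next_token(prev: str) -> str:
    """Pick the highest-scoring continuation (greedy decoding)."""
    candidates = BIGRAMS.get(prev, {})
    if not candidates:
        return "<eos>"  # nothing learned for this context: stop
    return max(candidates, key=candidates.get)

def generate(start: str, max_len: int = 5) -> list[str]:
    """Repeatedly append the most likely next token."""
    out = [start]
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Real models score tokens with a neural network instead of a lookup table, but the loop is the same: score, pick, append, repeat.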
Honestly, crypto wasn't total bullshit either. It was a marginally useful idea that turned into a Beanie-Babies-like craze. If you want to buy or sell illegal stuff (which could be bad, or could be something like forbidden information on democracy), it's still king.
Putting some expert system in front of LLMs seems to be working pretty well. Basically modeling how a human agent would interact with it.
We'll see how that goes, I guess. I'm not involved enough to comment.
I'm guessing the expert system would be a classical algorithm?