So a few tidbits you reminded me of:
You're absolutely right: there's what's called an alignment problem, a gap between what a human thinks superficially looks like a quality answer and what would actually be a quality answer.
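To make that concrete: RLHF reward models are typically fit to pairwise human preferences, so what gets optimized is literally "which answer the rater liked better," not ground truth. Here's a minimal sketch of the usual Bradley-Terry-style preference loss in PyTorch (the names and toy data are mine, not from any particular codebase):

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the reward of the answer a human
    # preferred above the reward of the one they rejected. The model only
    # ever sees "which answer the rater liked," so it learns to score
    # superficially convincing answers highly -- hence the alignment gap.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scalar rewards for a batch of 4 preference pairs.
chosen = torch.randn(4, requires_grad=True)
rejected = torch.randn(4, requires_grad=True)
loss = preference_loss(chosen, rejected)
loss.backward()
```

A model can drive this loss down just as well by learning what *looks* convincing to raters as by learning what's actually correct, which is exactly the gap above.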
You're also right that detecting generated content will always be somewhat of an arms race, since lossy compression and metadata scrubbing can do a lot to make an image unrecognizable to detectors. A few people are trying to create some sort of integrity check for media files, but it would likely create more privacy issues than it solves.
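To show how low the bar is, here's a rough sketch of "laundering" an image with Pillow; the filenames are made up and this is only illustrative:

```python
from PIL import Image

# Re-encoding drops EXIF/XMP metadata (including any provenance tags a
# generator embedded), and JPEG's lossy quantization perturbs the subtle
# pixel statistics many detectors key on.
img = Image.open("generated.png")   # hypothetical input file
img = img.convert("RGB")            # drop alpha so JPEG can encode it
img.save("scrubbed.jpg", format="JPEG", quality=75)  # metadata is not copied over
```

Two lines of standard library use, and most watermark/provenance schemes are gone; that's why it stays an arms race.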
We've had LLMs for quite some time now. I think the most notable release in recent history, aside from ChatGPT, was GPT-2 in 2019, as it introduced a lot of people to the concept. It was one of the first language models that was truly "large," although they've gotten much bigger since the release of GPT-3 in 2020. RLHF and the focus on fine-tuning for chat and instructability weren't really a thing until the past year.
Retraining image models on generated imagery does seem to cause problems, but I've noticed fewer issues when people train FOSS LLMs on text from OpenAI models. In fact, it's a relatively popular way to build training and fine-tuning datasets. Training a model from scratch on generated text might still cause trouble, but fine-tuning an existing model on it seems to be much less of a problem.
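For reference, the popular recipe (Alpaca-style distillation) is basically just: sample completions from the API and dump them as instruction/response pairs. A hypothetical sketch using the openai Python client (1.x interface; the model name, prompts, and filenames are placeholders):

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain what a transformer is in one paragraph.",
    "Write a haiku about gradient descent.",
]

# Dump (instruction, output) pairs as JSONL, the de facto format most
# FOSS fine-tuning scripts consume.
with open("distilled_dataset.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        pair = {"instruction": prompt,
                "output": resp.choices[0].message.content}
        f.write(json.dumps(pair) + "\n")
```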
Critical reading and thinking were always requirements, as I believe you said, but they're certainly needed for interpreting the output of LLMs in a factual context. I don't really see LLMs themselves outperforming humans on reasoning at this stage, but the text they generate will certainly make those human traits more of a necessity.
Most of the text models released by OpenAI are so-called "Generative Pre-trained Transformer" (GPT) models, with the keyword being "transformer." Transformers are a separate model architecture from GANs, but the two are certainly similar in more than a few ways.
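If it helps, the core difference is easy to contrast in a few lines of PyTorch; this is just an illustrative sketch, not either architecture in full:

```python
import torch
import torch.nn as nn

# Transformer core: self-attention mixes information across tokens in one
# sequence; a single network trained with one loss (next-token prediction).
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
tokens = torch.randn(1, 10, 64)           # (batch, seq_len, embed_dim)
mixed, _ = attn(tokens, tokens, tokens)   # Q = K = V = the sequence itself

# GAN core: *two* networks trained adversarially -- a generator mapping
# noise to samples, and a discriminator scoring real vs. fake.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))
discriminator = nn.Linear(64, 1)
fake = generator(torch.randn(1, 16))
realness = discriminator(fake)            # generator tries to raise this score
```

The similarity is real (both are deep nets trained by gradient descent), but the adversarial two-player setup is what makes a GAN a GAN; transformers don't have it.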
These all align with my understanding! The only thing I'd mention is that when I said "we've not had LLMs available," I meant "LLMs this powerful ready for public usage". My b
Yeah, that's fair. The early versions of GPT-3 kinda sucked compared to what we have now; for example, they basically couldn't rhyme. RLHF or some of the more recent advances seem to have turbocharged that aspect of LLMs.