this post was submitted on 18 Jun 2023
Technology
I've heard this theory. It feels like the unrealistic, hopeful wishing of people who want AI to fail.
LLM processing will be a huge tool for pruning and labeling training sets. Humans can sample and validate the work. These better training sets will produce better LLMs.
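The prune-then-spot-check loop described above can be sketched roughly like this. The `score_quality` function is a hypothetical stand-in for an LLM call that rates or labels a document; here it's a crude word-count heuristic so the sketch runs on its own:

```python
import random

def score_quality(text):
    # Hypothetical stand-in for an LLM quality/labeling call;
    # a crude length heuristic keeps the sketch self-contained.
    return 1.0 if len(text.split()) >= 4 else 0.0

def prune(corpus, threshold=0.5):
    # Keep only documents the model scores at or above the threshold.
    return [doc for doc in corpus if score_quality(doc) >= threshold]

def human_sample(kept, k=2, seed=0):
    # Humans validate a random sample of what survived the filter.
    random.seed(seed)
    return random.sample(kept, min(k, len(kept)))

corpus = [
    "a coherent paragraph about model training data",
    "asdf qwer",
    "another well formed sentence worth keeping around",
]
kept = prune(corpus)
print(len(kept))  # 2 of the 3 documents survive the filter
```

The point is the division of labor: the model does the bulk scoring, and humans only audit a sample rather than reading everything.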
Who cares if a chunk of text was written by a human or not? Plenty of humans are shit writers who believe illogical or clearly incorrect things. The idea that human-origin text is inherently superior is a fantasy. ChatGPT is already a better writer than 80% of humans today. In 10 years, LLMs will be better than 99.9% of humans. There is no poison to be avoided.
ChatGPT has a recognizable style in its default mode, but you can already steer away from it with simple prompt tweaks. This whole thing is a non-issue.
LLM-generated text can also be detected fairly easily, provided you can figure out which model it came from and have access to its weights. For people training models, this won't be hard to do.
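The "detectable if you know the weights" idea is essentially a likelihood test: text a model generated tends to score an unusually high probability under that same model. A toy sketch of the principle, with a unigram frequency table standing in for real model weights (the corpus strings are made up):

```python
import math
from collections import Counter

def train_unigram(text):
    # Toy stand-in for "knowing the model's weights":
    # token frequencies over text the model tends to emit.
    counts = Counter(text.split())
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def avg_log_prob(model, text, floor=1e-6):
    # Mean per-token log-probability: familiar text scores high,
    # out-of-distribution text scores low.
    tokens = text.split()
    return sum(math.log(model.get(t, floor)) for t in tokens) / len(tokens)

model = train_unigram("the model writes the same phrases the model likes")
in_dist = avg_log_prob(model, "the model writes phrases")
out_dist = avg_log_prob(model, "completely unrelated human words")
print(in_dist > out_dist)  # True: familiar text scores higher
```

A real detector would use the suspected LLM's actual token probabilities instead of a frequency table, but the comparison works the same way.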
I agree with the take that getting better and better datasets for training is going to get easier over time, rather than harder. The story of AlphaZero is a good example of this too - the best chess AI quickly trounced any AI trained on human games simply by playing against itself. To me, that suggests that training on LLM output will lead to even better results, since you can generate so much more of it.
The chess engine's training is anchored by the win/lose outcome of the game. LLM training is anchored by what humans like to read and write. This means that a human needs to somehow be in the loop.
I think OpenAI's own ChatGPT detector had double-digit false negative and false positive rates. I expect that as the diversity of LLMs grows, detection will only get harder.
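For concreteness, "double-digit false negative and positive rates" comes straight out of a detector's confusion counts. The numbers below are made up purely to illustrate the arithmetic:

```python
def error_rates(tp, fp, tn, fn):
    # False positive rate: human-written text wrongly flagged as AI.
    fpr = fp / (fp + tn)
    # False negative rate: AI-written text wrongly passed as human.
    fnr = fn / (fn + tp)
    return fpr, fnr

# Hypothetical confusion counts for a detector run on 200 samples.
fpr, fnr = error_rates(tp=70, fp=12, tn=88, fn=30)
print(round(fpr, 2), round(fnr, 2))  # 0.12 0.3
```

With rates like these, the detector misses nearly a third of AI text while still accusing some human writers, which is why people find such tools hard to trust.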