submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

OpenAI just admitted it can't identify AI-generated text. That's bad for the internet and it could be really bad for AI models.::In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.

[-] cerevant@lemmy.world 0 points 1 year ago

If it could, it couldn’t claim that the content it produced was original. If AI-generated content were detectable, that would be a tacit admission that it is entirely plagiarized.

[-] howrar@lemmy.ca 6 points 1 year ago

Being detectable does not mean plagiarism. The way they did it was by using a fixed rule for choosing among high-entropy words. These are words that can be replaced with a large number of alternatives without changing the meaning of the sentence. In any original passage of text, it's very unlikely that all of those words happen to follow the rule set by the generator, but generated text follows the rule by construction, so the two can be distinguished. Likewise, you can take any original passage and replace words in this fashion to increase the odds of it being flagged as AI-generated, and the resulting text will still be original text.
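
The rule-based scheme described above can be sketched as a toy "green list" watermark (the vocabulary, seeding scheme, and 50% split here are illustrative assumptions, not any vendor's actual system): the previous word deterministically selects a subset of allowed replacement words, the generator always picks from that subset, and the detector checks what fraction of a text obeys the rule.

```python
# Toy sketch of a green-list style watermark. VOCAB, the hash-based seeding,
# and the 50% green fraction are all illustrative assumptions.
import hashlib
import random

VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta"]

def green_list(prev_word, fraction=0.5):
    """Deterministically partition the vocabulary, seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(words):
    """Detection statistic: fraction of words that obey the generator's rule."""
    hits = sum(1 for prev, cur in zip(words, words[1:]) if cur in green_list(prev))
    return hits / max(len(words) - 1, 1)

# A generator that always picks from the green list scores 1.0; human text,
# choosing words independently of the hidden rule, hovers around 0.5.
def generate_watermarked(length, seed=0):
    rng = random.Random(seed)
    words = ["alpha"]
    for _ in range(length - 1):
        words.append(rng.choice(sorted(green_list(words[-1]))))
    return words
```

Note that this also illustrates the point about originality: a human could rewrite their own passage using only green-list substitutions and it would score as "generated" while still being original writing.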

[-] cerevant@lemmy.world 1 points 1 year ago

Here's the thing though - the probabilities for word choice come from the data the model was trained on. While someone who uses a substantially different writing style / word choice than the LLM could easily be identified as not being the LLM, someone with a similar writing style might be indistinguishable from it.

Or, to oversimplify: given that Reddit was a large portion of the input data for ChatGPT, all you need to do is write like a Redditor to sound like ChatGPT.

[-] gedhrel@lemmy.ml 1 points 1 year ago

I think you're trying to handwave at someone who knows more about the steganographic watermarking approach than you do.

[-] cerevant@lemmy.world 2 points 1 year ago

AI content isn’t watermarked, or detection would be trivial. What he’s talking about is that certain words have a certain probability of appearing after certain other words in a certain context. While there is some randomness to the output, certain words or phrases are unlikely to appear because the data the model was based on didn’t use them.

All I’m saying is that the more a writer’s writing style and word choice are similar to the data set, the more likely their original content would be flagged as AI generated.
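
That false-positive risk can be seen in a toy probability-based detector (the bigram table and default probability below are made-up stand-ins for an LLM's learned distribution): the detector scores text by how likely the model finds it, so a human whose word choices match the training data scores just as "AI-like" as the model's own output.

```python
# Toy probability-based detector. BIGRAM_PROB and DEFAULT_PROB are
# hypothetical stand-ins for an LLM's learned next-word distribution.
import math

BIGRAM_PROB = {
    ("the", "cat"): 0.4, ("cat", "sat"): 0.5, ("sat", "down"): 0.6,
    ("the", "dog"): 0.3, ("dog", "ran"): 0.4,
}
DEFAULT_PROB = 0.01  # probability assigned to word pairs the model rarely saw

def avg_log_prob(words):
    """Average log-probability under the model; higher reads as more 'AI-like'."""
    pairs = list(zip(words, words[1:]))
    return sum(math.log(BIGRAM_PROB.get(p, DEFAULT_PROB)) for p in pairs) / len(pairs)

# Text built from high-probability continuations scores high, whether it was
# generated by the model or written by a human with a similar style.
typical = ["the", "cat", "sat", "down"]
atypical = ["the", "cat", "ran"]
```

Under this scoring, `typical` outscores `atypical` even if both were written by humans, which is exactly why "write like the training data" defeats this kind of detector.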

this post was submitted on 28 Jul 2023
463 points (93.6% liked)
