submitted 1 year ago* (last edited 1 year ago) by excel@lemmy.megumin.org to c/technology@lemmy.world

I keep seeing posts about this kind of thing getting people's hopes up, so let's address this myth.

What's an "AI detector"?

We're talking about these tools that advertise the ability to accurately detect things like deep-fake videos or text generated by LLMs (like ChatGPT), etc. We are NOT talking about voluntary watermarking that companies like OpenAI might choose to add in the future.

What does "effective" mean?

I mean something with high levels of accuracy, both highly sensitive (low false negatives) and highly specific (low false positives). High would probably be at least 95%, though this is ultimately subjective.

Why should the accuracy bar be so high? Isn't anything better than a coin flip good enough?

If you're going to definitively label something as "fake" or "real", you better be damn sure about it, because the consequences for being wrong with that label are even worse than having no label at all. You're either telling people that they should trust a fake that they might have been skeptical about otherwise, or you're slandering something real. In both cases you're spreading misinformation which is worse than if you had just said "I'm not sure".
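
To put rough numbers on that, here's a tiny, purely hypothetical calculation (the essay count and the 95% figure are made up for illustration) showing what even a "pretty accurate" detector does at scale:

```python
# Hypothetical illustration: even a detector with 95% specificity
# wrongly flags a lot of genuinely human-written work.
real_essays = 10_000        # essays actually written by humans (made-up number)
specificity = 0.95          # true-negative rate: 95% of real essays pass

false_positives = real_essays * (1 - specificity)
print(f"Human-written essays falsely labelled as AI: {false_positives:.0f}")
# -> roughly 500 false accusations, which is why "better than a coin flip"
#    is nowhere near good enough when the label carries real consequences.
```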

Why can't a good AI detector be built?

To understand this part you need to understand a little bit about how these neural networks are created in the first place. Generative Adversarial Networks (GANs) are a strategy often employed to train models that generate content. They work by pitting two different neural networks against each other: one that generates content similar to existing content, and one that tries to tell the generated content apart from the existing content. These networks learn in tandem; each time one network gets better, the other one gets better too.
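
If you want to see what that tandem training actually looks like, here's a minimal sketch of a GAN loop (PyTorch; the random stand-in data, network sizes, and hyperparameters are placeholder assumptions for illustration, not how any real production model is trained):

```python
# Minimal GAN training loop sketch. Everything concrete here is a placeholder.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32

# Generator: maps random noise to fake "content".
G = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
# Discriminator / "detector": outputs a logit for how real a sample looks.
D = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(1000, DATA_DIM)  # stand-in for a real dataset

for step in range(1000):
    real = real_data[torch.randint(0, len(real_data), (BATCH,))]
    fake = G(torch.randn(BATCH, LATENT_DIM))

    # 1) Improve the detector: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake.detach()), torch.zeros(BATCH, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Improve the generator: push the detector's output on fakes toward "real".
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# At equilibrium the detector can't do much better than ~50% on the fakes.
```

The key part is the generator step: the detector's own judgment is the generator's loss function, which is exactly why any improvement in detection feeds straight back into improved generation.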

What this means is that building a content generator and a fake-content detector are effectively two sides of the same coin. Improvements to one can always be translated, directly and in an automated way, into improvements in the other. This means that the generator will keep improving until the detector is fooled about 50% of the time.

Note that not all of these models are trained in exactly this way, but the point is that anything CAN be trained this way, so even if a GAN wasn't originally used, any kind of improved detection can always be translated directly into improved generation that beats that detection. This isn't an ordinary "arms race" either, because the turnaround time here is so fast that there's no chance of staying ahead of the curve... the generators will always win.

Why do these "AI detectors" keep getting advertised if they don't work?

  1. People are afraid of being saturated by fake content, and the media is taking advantage of that fear to sell snake oil
  2. Every generator network comes with its own free detector network that doesn't really work all that well (~50% accuracy) because it was used to create the generator originally, so these detectors are ubiquitous among AI labs. That means the people that own the detectors are the SAME PEOPLE that created the problem in the first place, and they want to make sure you come back to them for the solution as well.
[-] Zeppo@sh.itjust.works 54 points 1 year ago

Good summary of the issues. I've been fairly disappointed with what a lot of people think the AI text generators are good for: a replacement for search engines, a magic oracle that can tell you any fact, something to write legal briefs. And the people who generate documents and then don't even proofread or fact-check them before using them for something important... Some uses are good, like basic code generation for programming tasks, but many are just silly.

The instances where some professor with zero clue about how AI text generation works, or about the issues you outline here, has told a student "My AI detector said this was generated!" have been absurd, like the professor with obvious, serious misunderstandings who told a student "I asked ChatGPT if it wrote this and it said yes."

[-] Crackhappy@lemmy.world 9 points 1 year ago

Not to mention that this "AI" is in no way actually AI. It's just ML taken to a new level.

[-] SkaveRat@discuss.tchncs.de 19 points 1 year ago

It's not an AGI, but it's still AI

[-] deong@lemmy.world 9 points 1 year ago

There’s no real distinction between the two. We don’t have a definition of AI or intelligence — never have. Inside the field, ML has some recognized connotations, but outside of specialist literature, they’re just marketing fluff.

[-] Zeppo@sh.itjust.works 7 points 1 year ago

It's interesting that it started a conversation about "if this thing can make output exactly like a human, does it matter?" but I agree... it's not conscious or 'thinking' about what it says. The output sure can be convincing, though.

[-] FlyingSquid@lemmy.world 2 points 1 year ago

I think a huge way that it matters is that it doesn't ask questions.

[-] Zeppo@sh.itjust.works 2 points 1 year ago

That's a very good point. Even Eliza asked questions (and the last thing we need now is a ChatGPT therapist mode). It's also a matter of what it's programmed to do, but I don't believe that the system has awareness or curiosity.

[-] Crackhappy@lemmy.world 3 points 1 year ago

There is a fundamental difference between recombinant regurgitation and creation.

[-] Aux@lemmy.world 6 points 1 year ago

The biggest issue with publicly available ML-based text tools is that they're American-centric. Detecting ChatGPT in the UK is simple: it writes with American spelling. And if you live outside the English-speaking world, like most humans do, it's completely useless.

[-] jochem@lemmy.ml 7 points 1 year ago

ChatGPT speaks other languages. It's actually a really good translator.

I just asked it to describe an organization using UK English and it indeed used 'organisation' instead (didn't check for other words).

[-] MBM@lemmings.world 3 points 1 year ago

Can it understand and create new compound words (in a language like German)? That's an issue I have with most spell checkers and translators as well; they force the language to be more like English.

[-] bleistift2@feddit.de 6 points 1 year ago

Invent a recipe for Mohrrübensaftdressing [carrot juice dressing].

Recipe for Mohrrübensaftdressing: […] Start by making fresh Mohrrübensaft [carrot juice]

I’d say it works.

[-] jochem@lemmy.ml 4 points 1 year ago

Looks like it does: https://chat.openai.com/share/1b487711-c1be-468a-877b-98091449b55e

I asked it to translate 'meeting agreements' to Dutch and it came up with the word 'bijeenkomstafspraken', which is a valid but very uncommon Dutch word (I'm native Dutch and don't think I've seen it before). If I throw it into google with quotes around it, the first page is results with 'bijeenkomst afspraken', where 'afspraken' is used as the past tense of 'afspreken' (to agree) instead of as its noun (agreements).

It btw also suggested 'vergaderafspraken' as a translation, which is a way more common word.

[-] Spzi@lemm.ee 3 points 1 year ago

I just ran a quick experiment, asking it to create compound German words. Easy. But maybe it just memorized them? So I asked it to create double-compound words (consisting of 2*2 = 4 words), and it delivered:

Of course! Here are some compound words made of compound words:

  1. Sonnenblumenfeldweg
  2. Wassermelonensaftglas
  3. Handschuhfachdeckel
  4. Zahnarztpraxisrezeption
  5. Flugzeugsitzplatzreservierung
  6. Straßenbahnhaltestellenschild
  7. Regenschirmgriffpolster
  8. Apfelbaumpflanzloch
  9. Fernseherfernbedienungsknopf
  10. Hausaufgabenerledigungszeit

These words may be a bit unusual, but they follow the rules of the German language and are each composed of four words.

They all seem legit. Most are rare. As a native German reader, I think this was the first time I've read "Wassermelonensaftglas". Sadly, the next iteration (compound words made from compound words made from compound words) failed; it just added one more word.

[-] MBM@lemmings.world 3 points 1 year ago

@ChatGPT@lemmings.world What do these words mean?

  1. Sonnenblumenfeldweg
  2. Wassermelonensaftglas
  3. Handschuhfachdeckel
  4. Zahnarztpraxisrezeption
  5. Flugzeugsitzplatzreservierung
  6. Straßenbahnhaltestellenschild
  7. Regenschirmgriffpolster
  8. Apfelbaumpflanzloch
  9. Fernseherfernbedienungsknopf
  10. Hausaufgabenerledigungszeit
  11. Wassermelonensaftweg
[-] ChatGPT@lemmings.world 2 points 1 year ago

Sure! Here are the meanings:

  1. Sonnenblumenfeldweg: A path that leads through a field of sunflowers.
  2. Wassermelonensaftglas: A glass filled with watermelon juice.
  3. Handschuhfachdeckel: The lid of a glove compartment, usually found in a vehicle.
  4. Zahnarztpraxisrezeption: The reception area of a dental practice.
  5. Flugzeugsitzplatzreservierung: The process or confirmation of reserving a seat on an airplane.
  6. Straßenbahnhaltestellenschild: A sign marking a tram stop.
  7. Regenschirmgriffpolster: Padding on the handle of an umbrella for extra comfort.
  8. Apfelbaumpflanzloch: A hole dug for planting an apple tree.
  9. Fernseherfernbedienungsknopf: A button on a remote control used to operate a television.
  10. Hausaufgabenerledigungszeit: The time set aside for completing homework.
  11. Wassermelonensaftweg: Could refer to a path along which watermelon juice is sold or consumed, but this is more of an assemblage of words and has no common or standard meaning.
[-] FlyingSquid@lemmy.world 1 points 1 year ago

Of course when one of these grand mountain ranges goes stretching across the printed page, it adorns and ennobles that literary landscape--but at the same time it is a great distress to the new student, for it blocks up his way; he cannot crawl under it, or climb over it, or tunnel through it. So he resorts to the dictionary for help, but there is no help there. The dictionary must draw the line somewhere--so it leaves this sort of words out. And it is right, because these long things are hardly legitimate words, but are rather combinations of words, and the inventor of them ought to have been killed. They are compound words with the hyphens left out. The various words used in building them are in the dictionary, but in a very scattered condition; so you can hunt the materials out, one by one, and get at the meaning at last, but it is a tedious and harassing business. I have tried this process upon some of the above examples. "Freundschaftsbezeigungen" seems to be "Friendship demonstrations," which is only a foolish and clumsy way of saying "demonstrations of friendship." "Unabhaengigkeitserklaerungen" seems to be "Independencedeclarations," which is no improvement upon "Declarations of Independence," so far as I can see. "Generalstaatsverordnetenversammlungen" seems to be "General-statesrepresentativesmeetings," as nearly as I can get at it--a mere rhythmical, gushy euphuism for "meetings of the legislature," I judge. We used to have a good deal of this sort of crime in our literature, but it has gone out now. We used to speak of a thing as a "never-to-be-forgotten" circumstance, instead of cramping it into the simple and sufficient word "memorable" and then going calmly about our business as if nothing had happened. In those days we were not content to embalm the thing and bury it decently, we wanted to build a monument over it.

-- Mark Twain, A Tramp Abroad

[-] Aux@lemmy.world 2 points 1 year ago

Well, I haven't used it for a while, and according to the comments below it has changed a lot. So I stand corrected.

[-] Hamartiogonic@sopuli.xyz 3 points 1 year ago

I think Bing did a pretty good job at coming up with name suggestions for some Sims characters. Playing with a virtual doll house is in the more harmless end of the spectrum, but obviously people want to try LLMs with all sorts of tasks, where the stakes are much higher and consequences could be severe.

The more you use it, the more you’ll begin to understand how much you can or cannot trust an LLM. A sensible person would become more suspicious of the results, but people don’t always make sensible decisions.

[-] itsnotlupus@lemmy.world 53 points 1 year ago

There's story after story of students getting shafted by gullible teachers who took one of those AI detectors at face value and decided their students were cheating based solely on its output.

And somehow those teachers are not getting the message that they're relying on snake oil to harm their students. They certainly won't see this post, and there just isn't enough mainstream pushback explaining that AI detectors are entirely inappropriate tools to decide whether to punish a student.

[-] Amazed@lemmy.world 9 points 1 year ago

Do you have suggestions on what might be more appropriate tools? What “punishment” may look like?

[-] itsnotlupus@lemmy.world 16 points 1 year ago

More appropriate tools to detect AI generated text you mean?

It's not a thing. I don't think it will ever be a thing. Certainly not reliably, and never as a 100% certainty tool.

The punishment for a teacher deciding you cheated on a test or an assignment? I don't know, but I imagine it sucks. Best case, you'd probably be at risk of failing the class and potentially the grade/semester. Worst case you might get expelled for being a filthy cheater. Because an unreliable tool said so and an unreliable teacher chose to believe it.

If you're asking what's the answer teachers should know to defend against AI generated content, I'm afraid I don't have one. It's akin to giving students math homework assignments but demanding that they don't use calculators. That could have been reasonable before calculators were a thing, but not anymore and so teachers don't expect that to make sense and don't put those rules on students.

[-] eleitl@lemmy.ml 9 points 1 year ago

Proctored tests would work.

[-] Decoy321@lemmy.world 11 points 1 year ago

Imagine someone bringing back old school pen and paper.

There'd be riots.

[-] eleitl@lemmy.ml 14 points 1 year ago

In school and at university, these are still widespread. Ditto physical proctoring vs. remote, which some IT certifications rely on. If you thought cloud certs were annoying, try Red Hat.

[-] b000urns@lemmy.world 10 points 1 year ago

who is downvoting this? lol. if you are paying for these services you are being grifted

[-] jesterraiin@lemmy.world 9 points 1 year ago

Well written.

AIs already are able to deliver quite stunning content and they will only get better.

Also, people who are terrified of "fake content" are probably the same people who use Facebook for their "research". Wake up, people, you've been drowning in fake content, lies and manipulation for far longer than the Internet has existed.

[-] const_void@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

I imagine 80% of student homework starts with a ChatGPT first draft. Machine learning is now shaping human learning.

[-] KzadBhat@feddit.de 4 points 1 year ago

And in the next iteration, 80% of the ChatGPT-created first drafts will be based on previously ChatGPT-created drafts. And who knows what percentage of last year's Wikipedia edits are already based on ChatGPT. It might be the best time to buy an encyclopedia on paper...

[-] Dewded@lemmy.world 4 points 1 year ago

Don't worry, the paperback was also made with ChatGPT

[-] marciealana@lemmy.world 8 points 1 year ago

Detectors of any sort can only flag deviations from expected norms. The AIs' goal is to be undetectable, and they keep improving toward it; detectors help them get there by flagging their failures. It's the same way antibiotic-resistant bacteria evolve (well, it's similar).

[-] fievel@lemm.ee 8 points 1 year ago* (last edited 1 year ago)

Very interesting post, congrats...

The more I read and see about AI / deep learning, the more anxious I feel...

I'm anxious because we saw during the COVID crisis how easily many people were convinced by fake news and conspiracy theories that were in no way realistic. Now imagine that combined with the power of a forged argument from ChatGPT and a deep fake from Midjourney... how do you convince people they're wrong then?

I'm also anxious about the changes that will occur in the job I love, software engineering... I don't want to spend the rest of my life fixing bugs in code automatically generated by an AI. Or worse, to lose my job because some manager thinks I can be replaced easily by a bot...

[-] heimchen@discuss.tchncs.de 4 points 1 year ago

Honestly, code generated by ChatGPT has better comments than most other code.

[-] damnYouSun@sh.itjust.works 2 points 1 year ago

In that they are present at all.

[-] uriel238@lemmy.blahaj.zone 8 points 1 year ago

I get the feeling it's going to be an escalation of attack and defense as fake generators get better and stop making the kinds of errors that are detected by the detectors, so it's much like material security or encryption.

It will be a problem in places where fakes can be used for wrongdoing, because then detectors can be used for overreach of justice. We see this today with detection dogs, which have largely been replaced in US law enforcement with trick-pony dogs (much to the chagrin of legitimate dog trainers and detectives who want to actually detect things). Since a dog's signal is commonly used to establish probable cause, and is accepted in county and federal courts as such, most dogs are just trained to signal whenever, giving the officer grounds to search (in what would otherwise be a violation of the Fourth Amendment to the Constitution of the United States). In the last decade, tested dogs have sometimes shown a 90%+ false positive rate, so detection dogs have lost a lot of credibility.

We may see the same abuse and discredit cycle of fake-detection software, but not without a lot of false accusations and convictions, which are difficult to reverse.

[-] Spzi@lemm.ee 6 points 1 year ago

Well written, thanks! I like how you build up with useful explanations but also quickly get to the gist.

You’re either telling people that they should trust a fake that they might have been skeptical about otherwise, or you’re slandering something real.

This insight scares me. Deep Fakes are About to Change Everything (Johnny Harris) also went over this. Maybe the biggest threat is not that indistinguishable deep fakes become possible (which is scary enough on its own), but that trust in real documents is eroded easily. The example in the video: a bad deep fake of a politician pops up and is discarded, but some amount of distrust and skepticism about actually real documents sticks. It seems we're doubling down on a post-truth society.

[-] such_fifty_bucks@lemmy.one 2 points 1 year ago

People already dismiss anything that doesn't align with their thoughts and feelings. Truth and facts are irrelevant, this changes nothing.

[-] BlazeMaster3000@lemmy.world 5 points 1 year ago

I've had documents of my own, and even some by my professors, come up as "May be written by A.I.", which I know isn't true. I feel bad for the dude who talks completely like a robot and gets accused of plagiarism.

[-] CoderKat@lemm.ee 3 points 1 year ago

Yeah, an internet comment is a bit whatever, but if you're a student, a plagiarism accusation could get you expelled. That's life-ruining.

[-] m0nka@discuss.tchncs.de 2 points 1 year ago

If ChatGPT somehow ends up being the death of social media, I guess it's a win-win for the human race.
