
An in-depth police report obtained by 404 Media shows how a school, and then the police, investigated a wave of AI-powered “nudify” apps in a high school.

[-] Int_not_found@feddit.de 21 points 8 months ago* (last edited 8 months ago)

1.) Germany has civil laws giving the person depicted in an image rights similar to those of its creator. It is also a criminal offense to publish images designed to damage a person's public image. Those laws aren't perfect, mainly because their wording is outdated, but the general legal sentiment is there.

2.) The police trace the origin through detective work. Social circles in schools aren't that large, so p2p distribution is fairly traceable, and publishing sites usually keep IP logs.

A criminal court decides the severity of the perpetrator's punishment. A civil court decides the amount of monetary damages that were caused and have to be compensated by the perp or their legal guardian.

People simply forwarding such material can also be liable (since they are distributing copyrighted material), so the distribution can be slowed or stopped.

3.) It gives the police a reason to investigate, gives victims a tool to stop distribution, and is a way to compensate the damages caused to victims.

[-] iAvicenna@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

All they have to do is make public examples of some kids and parents and that will be an end to this in a couple years.

[-] General_Effort@lemmy.world 0 points 8 months ago* (last edited 8 months ago)

1.) Germany has civil laws giving the person depicted in an image rights similar to those of its creator. It is also a criminal offense to publish images designed to damage a person's public image. Those laws aren't perfect, mainly because their wording is outdated, but the general legal sentiment is there.

Germany also has laws criminalizing insults. You can actually be prosecuted for calling someone an asshole, say. Americans tend to be horrified when they learn that. I wonder if feelings in that regard may be changing.

AFAIK, it is unusual, internationally, that the English legal tradition does not have defamation (damaging someone's reputation/public image) as a criminal offense, but only as a civil wrong. I think Germany may be unusual in the other direction. Not sure.

2.) The police trace the origin through detective work. Social circles in schools aren't that large, so p2p distribution is fairly traceable, and publishing sites usually keep IP logs.

Ok, the police would interrogate the high-schoolers and demand to know who had the pictures, who made them, who shared them, etc. That would certainly be an important life lesson.

The police would also seize the records of internet services. I'd think some people would have concerns about the level of government surveillance here; perhaps that should be addressed.

How does that relate to encryption, for example? Some services may feel that they avoid a lot of bother and attract customers by not storing the relevant data. Should they be forced?

3.) It gives the police a reason to investigate, gives victims a tool to stop distribution, and is a way to compensate the damages caused to victims.

That's what you want to happen. It does not consider what one would expect to actually happen. It's fairly common for people of high school age to insult and defame each other. Does the German police commonly investigate this?

[-] Int_not_found@feddit.de 5 points 8 months ago* (last edited 8 months ago)

Germany also has laws criminalizing insults. You can actually be prosecuted for calling someone an asshole, say. Americans tend to be horrified when they learn that. I wonder if feelings in that regard may be changing.

I don't care about the feelings of Americans reading this, tbh.

Germany is a western liberal democracy, same as the US.

On the other hand I'm horrified, that you seem to equate a quick insult with Deepfake-Porn of Minors.

The police would also seize the records of internet services. I'd think some people would have concerns about the level of government surveillance here; perhaps that should be addressed.

Arguably, government entities' unrestricted access to this kind of data is greater in the US than in the EU.

How does that relate to encryption, for example? Some services may feel that they avoid a lot of bother and attract customers by not storing the relevant data. Should they be forced?

There are many entities that store data about you. Maybe the specific service doesn't cooperate, but what about the server host, maybe the ad network, maybe the app store, and certainly the payment processor?

If the police can lay out how that data can help solve the case, providers can and should be forced by judges to hand over that data to a certain extent, both in the US and the EU.

Does the German police commonly investigate this?

Insults? No, those are mostly a civil matter, not a criminal one.

(Deepfake) porn of minors? Yes, certainly.

[-] General_Effort@lemmy.world 1 points 8 months ago

What's with the downvotes? Lemmy is usually pretty negative on the whole data gathering thing, I thought. Shouldn't I have brought this up? I don't get it.

[-] RememberTheApollo@lemmy.world -4 points 8 months ago

Digitally watermark the image for identification purposes. Hash the hardware ID, MAC address, and IP address of the creator and have that inserted via steganography or similar means. Just like printers use a MIC (machine identification code) for documents they print, AI-generated imagery should do the same at this point. It's not perfect and probably has some undesirable consequences, but it's better than nothing when trying to track down deepfake creators.
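The embedding side of such a scheme is technically simple. Here's a minimal stdlib-only Python sketch of the idea (the identifier string and LSB layout are hypothetical, not taken from any real product): hash a creator identifier, then hide the hash in the least significant bits of raw pixel bytes.

```python
import hashlib

def embed_id(pixels: bytearray, creator_id: str) -> bytearray:
    """Hide sha256(creator_id) in the LSBs of raw pixel bytes."""
    payload = hashlib.sha256(creator_id.encode()).digest()  # 32 bytes
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return out

def extract_id_hash(pixels: bytes, nbytes: int = 32) -> bytes:
    """Read the LSBs back out and reassemble the hash."""
    bits = [pixels[i] & 1 for i in range(nbytes * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(nbytes)
    )

img = bytearray(range(256))  # stand-in for raw pixel data
tagged = embed_id(img, "MAC:aa:bb:cc|IP:203.0.113.7")
assert extract_id_hash(tagged) == hashlib.sha256(
    b"MAC:aa:bb:cc|IP:203.0.113.7").digest()
```

The catch, as the replies below note, is that anything this simple survives neither re-encoding nor a fork of the software with the tagging removed.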

[-] abhibeckert@lemmy.world 4 points 8 months ago* (last edited 8 months ago)

The difference is it costs billions of dollars to run a company manufacturing printers and it’s easy for law enforcement to pressure them into not printing money.

It costs nothing to produce an AI image, you can run this stuff on a cheap gaming PC or laptop.

And you can do it with open source software. If the software has restrictions on creating abusive material, you can find a fork with that feature disabled. If it has steganography, you can find a fork with that disabled too.

You can tag an image to prove a certain person (or camera) took a photo. You can’t stop people from removing that.

[-] General_Effort@lemmy.world 2 points 8 months ago

Honestly, I can't tell if this is sarcasm or not.

[-] yamanii@lemmy.world 3 points 8 months ago

Samsung's AI does watermark its images in the EXIF metadata. Yes, it is trivial for us to remove the EXIF data, but it's enough to catch these low-effort bad actors.
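To show just how trivial the removal is: EXIF data sits in a JPEG APP1 segment, so a few lines of stdlib Python can drop it without touching the image data at all. A minimal sketch (the byte stream below is synthetic, not a real photo):

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1/Exif segments from a JPEG byte stream."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded data follows, copy the rest
            out += jpeg[i:]
            return bytes(out)
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        # APP1 (0xE1) carries Exif metadata: drop it, keep everything else
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    return bytes(out)

# Synthetic JPEG: SOI + Exif APP1 + a quantization-table-like segment + scan
exif_payload = b"Exif\x00\x00" + b"serial=XYZ"
exif_seg = b"\xff\xe1" + (len(exif_payload) + 2).to_bytes(2, "big") + exif_payload
other_seg = b"\xff\xdb\x00\x04\x01\x02"
fake_jpeg = b"\xff\xd8" + exif_seg + other_seg + b"\xff\xdaSCANDATA"

cleaned = strip_exif(fake_jpeg)
assert b"Exif" not in cleaned and other_seg in cleaned
```

Which is why metadata-only tags only catch people who don't bother; robust schemes embed the watermark in the pixels themselves.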

[-] General_Effort@lemmy.world 1 points 8 months ago

I think all the main AI services watermark their images (invisibly, not in the metadata). A nudify service might not, I imagine.

I was rather wondering about the support for extensive surveillance.

this post was submitted on 16 Feb 2024
151 points (94.2% liked)

Technology
