363
submitted 1 year ago* (last edited 1 year ago) by btp@kbin.social to c/privacy@lemmy.ml

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI's large language models. They also showed that, on a public version of ChatGPT, the chatbot spat out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that's referenced in the article can be found here

[-] mindbleach@sh.itjust.works 11 points 1 year ago

Text engine trained on publicly-available text may contain snippets of that text. Which is publicly-available. Which is how the engine was trained on it, in the first place.

Oh no.

[-] PoliticalAgitator@lemm.ee 12 points 1 year ago

Now delete your posts from ChatGPT's memory.

[-] JonEFive@midwest.social 3 points 1 year ago

Delete that comment you just posted from every Lemmy instance it was federated to.

[-] PoliticalAgitator@lemm.ee 5 points 1 year ago

I consented to my post being federated and displayed on Lemmy.

Did writers and artists consent to having their work fed into a privately controlled system that didn't exist when they made their post, so that it could make other people millions of dollars by ripping off their work?

The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.

Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both irrevocable and valid for any act for the rest of time.

[-] JonEFive@midwest.social 1 points 11 months ago* (last edited 11 months ago)

While you make a valid point here, mine was simply that once something is out there, it's nearly impossible to remove. At a certain point, the nature of the internet is that you no longer control the data that you put out there. Not that you no longer own it and not that you shouldn't have a say. Even though you initially consented, you can't guarantee that any site will fulfill a request to delete.

Should authors and artists be fairly compensated for their work? Yes, absolutely. And yes, these AI generators should be built upon properly licensed works. But there's something really tricky about these AI systems. The training data isn't discrete once the model is built. You can't just remove bits and pieces; the data is abstracted. The company would have to (and probably should have to) build a whole new model with only properly licensed works, and they'd have to rebuild it every time a license agreement changed.

That technological design makes it all the more difficult, both in terms of proving that unlicensed data was used and in terms of responding to requests to remove said data. You might be able to get a language model to reveal something solid that indicates where it got its information, but it isn't simple or easy. And it's even more difficult with visual works.

There's an opportunity for the industry to legitimize itself here by creating a method to manage data within a model, but they won't do it without an incentive like millions of dollars in copyright lawsuits.
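[Editor's note: the "data is abstracted" point above is easy to see even in a toy model. Every training example nudges the same shared parameters, so the only exact way to honor a deletion request is to retrain without that example. A minimal sketch with a one-parameter regression (obviously not a language model, and `train`, `data`, and the learning rate are illustrative choices, not anything from the paper):]

```python
# Toy sketch of why "just remove my data" is hard: every training example
# nudges the same shared weight, so deleting one example exactly means
# retraining the model from scratch without it.
def train(points, lr=0.01, epochs=500):
    """Fit y = w * x by stochastic gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in points:
            w -= lr * 2 * (w * x - y) * x  # each example moves the shared weight
    return w

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.3)]
w_full = train(data)

# Honoring a deletion request for the last example: retrain without it.
w_after_removal = train(data[:-1])

# There is no operation on w_full alone that recovers w_after_removal;
# the removed example's influence is smeared into the shared parameter.
```

In a real model with billions of parameters the same mixing happens at vastly larger scale, which is why "machine unlearning" is an open research problem rather than a delete button.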

[-] mindbleach@sh.itjust.works 1 points 1 year ago

Deleting this comment won't erase it from your memory.

Deleting this comment won't mean there's no copies elsewhere.

[-] archomrade@midwest.social 2 points 1 year ago

Deleting a file from your computer doesn't even mean the data isn't still sitting on the disk.

Deleting isn't really a thing in computer science; at best there's "destroy" or "encrypt".
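[Editor's note: the distinction above can be sketched in a few lines. An ordinary delete is just an unlink that drops the directory entry, while "destroy" has to overwrite the contents first. This assumes POSIX-style filesystem semantics; on SSDs and journaling filesystems, even overwriting in place doesn't guarantee the old blocks are gone.]

```python
import os
import tempfile

# "Deleting" a file normally just removes the directory entry (an unlink);
# the underlying blocks stay on disk until the filesystem reuses them.
# A best-effort "destroy" overwrites the contents before unlinking.
def destroy(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # overwrite the data in place
        f.flush()
        os.fsync(f.fileno())      # push the zeros to the device
    os.remove(path)               # only then drop the directory entry

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"secret data")

destroy(path)
# os.path.exists(path) is now False, but only because we overwrote first;
# os.remove alone would have left the original bytes on disk.
```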

[-] mindbleach@sh.itjust.works 1 points 1 year ago

Yes, that's the point.

You can't delete public training data. Obviously. It is far too late. It's an absurd thing to ask, and cannot possibly be relevant.

[-] PoliticalAgitator@lemm.ee 0 points 1 year ago

And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent or leaked personal details from the internet?

[-] DontMakeMoreBabies@kbin.social -3 points 1 year ago

Or maybe folks should think before putting something into the world they can't control?

[-] PoliticalAgitator@lemm.ee 12 points 1 year ago

Yeah, it's their fault for daring to communicate online without first considering a technology that didn't exist.

[-] DarkDarkHouse@lemmy.sdf.org 8 points 1 year ago

Sooner or later these models will be trained with breached data, accidentally or otherwise.

[-] JonEFive@midwest.social 1 points 1 year ago

This whole internet thing was a mistake because it can't be controlled.

[-] joshcodes@programming.dev 0 points 1 year ago

User name checks out

this post was submitted on 29 Nov 2023
