submitted 3 months ago by ArcticDagger@feddit.dk to c/science@lemmy.world
[-] uriel238@lemmy.blahaj.zone 4 points 3 months ago

This seems to follow logically: it's the copy-of-a-copy-of-a-copy paradigm. We train AI on what humans like, and by running the output back through the training data, we're adding noise back in.

To be fair, we already add noise: human art has its own errors, which we try to filter out by curating additional data with more of what we want and less of what we don't.
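A rough way to see the compounding is a toy sketch (my own, not from the linked paper): repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Each round's finite-sample estimation error feeds the next round, so the fitted distribution drifts away from the original "human" data.

```python
# Toy illustration of "copy of a copy" degradation, assuming a simple
# Gaussian-fitting "model" stands in for the generative system.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the original "human" data.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for gen in range(15):
    # "Train" the model: estimate mean and std from the current data.
    mu, sigma = data.mean(), data.std()
    print(f"gen {gen:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")
    # Next generation trains only on the model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=200)
```

The printed mean and std wander further from (0, 1) as the generations go on, since each refit compounds the sampling noise of the one before it.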
