I would like to take a crack at this. There is a recent trend of "ghiblifying" one's picture, i.e. converting a photo into a Ghibli-style image. If the model had been trained only on freely licensed sources, this would not be possible.
Internally, a model like this works by having networks of units that activate based on certain signals. When you ask it a question, it activates the parts of the network associated with similar-looking words and gives the result back to you. When you convert an image, something similar happens. You cannot form these networks, or the thresholds at which they activate, without the model having seen copyrighted images from Studio Ghibli during training. There is no way in hell or heaven for that to happen otherwise.
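To make the weights-and-activations point concrete, here is a minimal sketch in PyTorch. The layer sizes and the "style score" framing are made up for illustration, not anything OpenAI actually ships; the point is only that everything the network can do lives in weights that were fitted to training images.

```python
import torch
import torch.nn as nn

# Hypothetical toy network: its behaviour is entirely determined by its
# learned weights. The sizes and the "style score" idea are illustrative;
# real image models are vastly larger and structured differently.
style_scorer = nn.Sequential(
    nn.Linear(512, 128),  # these weights are fitted to training images
    nn.ReLU(),            # units "activate" only when inputs cross a threshold
    nn.Linear(128, 1),
    nn.Sigmoid(),         # outputs how strongly the input matches the learned style
)

# Before training, the weights are random noise, so this score is meaningless.
untrained_score = style_scorer(torch.randn(512))

# Only after the weights are fitted to example images (the training step,
# omitted here) does "matches this style" become a signal the network can
# actually produce -- which is why what it was trained on matters.
```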
OpenAI trained their models on pirated material, just like Meta did. So when an AI produces an image in the style of something, it should attribute the artist it actually took that style from. That's not what's happening. Instead it just makes more money for the thief.