
Greg Rutkowski, a digital artist known for his fantasy style, opposes AI art, but his name and style have frequently been used by AI art generators without his consent. In response, Stability AI removed his work from Stable Diffusion's training dataset in version 2.0. The community, however, has now created a LoRA model that emulates Rutkowski's style against his wishes. While some argue this is unethical, others justify it on the grounds that Rutkowski's art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

[-] RygelTheDom@midwest.social 48 points 1 year ago

What blurry line? An artist doesn't want his art stolen from him. Seems pretty cut and dry to me.

[-] KoboldCoterie@pawb.social 22 points 1 year ago

I don't fully understand how this works, but if they've created a way to replicate his style that doesn't involve using his art in the model, how is it problematic? I understand not wanting models to be trained using his art, but he doesn't have exclusive rights to the art style, and if someone else can replicate it, what's the problem?

This is an honest question, I don't know enough about this topic to make a case for either side.

[-] delollipop@beehaw.org 11 points 1 year ago* (last edited 1 year ago)

Do you know how they recreated his style? I couldn't find that information, and frankly I don't have enough understanding to know how.

But if they either used his works directly, or used works created by another generative AI with his name/style in the prompt, my personal feeling is that it would still be unethical, especially if they charge money to generate art in his style without compensating him.

Plus, I find the opt-out mentality really creepy and disrespectful:

“If he contacts me asking for removal, I'll remove this,” Lykon said. “At the moment I believe that having an accurate immortal depiction of his style is in everyone's best interest.”

[-] SweetAIBelle@kbin.social 8 points 1 year ago

Generally speaking, the way training works is this:
You put together a folder of pictures, all the same size; it would've been 1024x1024 in this case (other models have used 768x768 or 512x512). For every picture, you also have a text file with a description.
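
If it helps to picture it, here's a rough Python sketch of that layout (the folder and file names are just made up for illustration):

```python
from pathlib import Path
from PIL import Image

# Each training image sits next to a .txt caption file with the same name,
# e.g. dataset/0001.png + dataset/0001.txt
dataset_dir = Path("dataset")  # hypothetical folder name

pairs = []
for img_path in sorted(dataset_dir.glob("*.png")):
    caption = img_path.with_suffix(".txt").read_text().strip()
    image = Image.open(img_path).convert("RGB").resize((1024, 1024))
    pairs.append((image, caption))
```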

The training software takes a picture, slices it into squares, generates a square of random noise the same size, then trains on how to turn that noise back into the original square. It associates that training with tokens from the description that went with the picture, and it keeps doing this across the whole dataset.
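
A stripped-down sketch of that training step might look like this in PyTorch (this is just the general idea, not the exact trainer anyone used; `unet`, `text_encoder`, and `scheduler` stand in for the real Stable Diffusion pieces):

```python
import torch
import torch.nn.functional as F

def training_step(unet, text_encoder, scheduler, image_latents, caption_tokens):
    # Sample random noise and a random "how noisy" timestep for each image.
    noise = torch.randn_like(image_latents)
    timesteps = torch.randint(
        0, scheduler.config.num_train_timesteps,
        (image_latents.shape[0],), device=image_latents.device,
    )

    # Mix the noise into the image according to the timestep.
    noisy_latents = scheduler.add_noise(image_latents, noise, timesteps)

    # Condition on the tokenized caption and ask the model to predict the noise.
    text_embeddings = text_encoder(caption_tokens)[0]
    noise_pred = unet(
        noisy_latents, timesteps, encoder_hidden_states=text_embeddings
    ).sample

    # The loss is just "how far off was the noise prediction?"
    return F.mse_loss(noise_pred, noise)
```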

Then later, when someone types a prompt into the software, it tokenizes it, generates more random noise, and applies the denoising associated with the tokens you typed in. The pictures from the training folder aren't actually stored anywhere in the model.
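
Generation then looks roughly like this (a sketch using the diffusers library; the base model ID is real, but the LoRA filename is a placeholder):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# A LoRA is just a small set of weight adjustments layered on top of the
# base model; neither file contains the training images themselves.
pipe.load_lora_weights("some_style_lora.safetensors")  # placeholder filename

image = pipe("a castle at sunset, dramatic lighting").images[0]
image.save("output.png")
```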

From the side of the person doing the training, though, it's just a matter of putting together the pictures and descriptions, choosing some settings, and letting the training software do its work.

(No money involved in this one. One person trained it and plopped it on a website where people can download LoRAs for free...)
