this post was submitted on 13 Jun 2024
288 points (100.0% liked)
Technology
Not OP but familiar enough with open source diffusion image generators to be able to chime in.
Now, I'd argue that being an artist comes down to being able to envision something in your mind's eye and then reproduce it in the real world using some medium, whether that's a graphite pencil, oil paint, a block of marble, a Wacom tablet on a PC, or even a negotiation with an AI model. Your definition might be different, but for the sake of conversation this is how I'm thinking about it.
The workflow for an AI-generated image can take a few steps before the result feels like it sufficiently aligns with your vision. Prompting for specific details can be tricky, so usually step 1 is to generate the basic outline of the image you're after. Depending on your GPU or cloud service, this could take several minutes or hours before you get a base you can work with. Once you have that base image, you can use inpainting tools to mask specific areas and change particular details, colors, etc. This again can take many, many generations before you land on something that sufficiently matches your vision.
This is all after you've gone through the process of reviewing and selecting one of the hundreds of models that have been trained for different types of output. Want to generate anime-style art? There's a model for that. Want something great at landscapes? There's a different one for that. Sure, you can use an all-purpose model for everything, but some models simply don't have the training to align with your vision, so you either choose to live with 'close enough' or you start downloading new options, comparing them against your existing workflow, and so on.
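If it helps, the loop I'm describing can be sketched in Python. To be clear, `generate` and `inpaint` below are hypothetical stand-ins, not any real library's API; an actual workflow would call into something like Hugging Face diffusers and run on a GPU:

```python
# Illustrative sketch only. generate() and inpaint() are hypothetical
# stand-ins for calls into a real diffusion library; here they just
# return strings describing what each step would produce.

def generate(prompt: str, seed: int) -> str:
    # A real call would run a text-to-image diffusion model.
    return f"image(prompt={prompt!r}, seed={seed})"

def inpaint(base: str, region: str, prompt: str, seed: int) -> str:
    # A real call would regenerate only the masked region of the base image.
    return f"inpaint(base={base!r}, region={region!r}, prompt={prompt!r}, seed={seed})"

# Step 1: generate several base compositions and pick the closest match.
candidates = [generate("castle on a cliff at sunset", seed=s) for s in range(4)]
base = candidates[2]  # the one that best matches the mental image

# Step 2: mask a region and regenerate just that detail, possibly many times.
final = inpaint(base, region="sky", prompt="dramatic storm clouds", seed=7)
print(final)
```

The point isn't the code itself but the shape of the process: repeated generation, selection, then targeted regeneration, with each step steered by the person's intent.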
There's certainly skill associated with the current state of image generation. Perhaps not the same level of practice you need to perfectly render a transparent veil in graphite, but as with other media, I have a hard time saying that when someone represents their vision in the real world, the result is automatically "not art".
So if I walked into a restaurant that specialized in a certain cuisine (choosing the right one out of hundreds is a skill, right?) and wrote down a list of ingredients, and the restaurant made me a meal with those ingredients according to however the restaurant functions (nobody can see into the kitchen, after all), does this make me a chef?
Is there any chance you're at a KBBQ or hotpot restaurant? Because then you get to cook the meal yourself, which is arguably chef-like.
Jokes aside, I see the comparison you're making, and it's not a bad one. I'd counter with the example of a menu: when you get to a restaurant, you're handed a menu with text descriptions of the food you can receive from the kitchen. Since this is an analogy and not an exact comparison, let's say that a meal on the menu is like the starting point of the workflow I described.
Based on that, you have an idea of what the output will be when you order. But let's say you don't like mushrooms and you prefer your sauce on the side. When you place your order, you specify those modifications - this is like inpainting.
Certainly you're not a 'chef', but if the dish you design is both bespoke and previously unimaginable, I'd argue that at the very least you contributed to the creative process and participated in creating something new that matches your internal vision.
Not exactly the same but I don't think it's entirely different.
You keep using the word "vision", but I have a hard time understanding how an AI artist has a vision equivalent to that of a traditional artist based on the explanation you've provided. It still sounds like they are just cycling through AI-generated options until they find something they like/that looks good. That is not the same as seeing something in your mind and then manually recreating it to the best of your ability.
Is a photographer an artist? They need some technical skill to capture sharp photos with good lighting, but a lot of the process is designing a scene and later selecting which of the photos from a shoot had the right look.
Or to step even further from the actual act of creation, is a creative director an artist? There's certainly some skill involved in designing and recognizing a compelling image, even if you were not the one who actually produced it.
You're sort of stepping around the issue here. Are you confirming that AI art is about cycling through options blind until you stumble across something you like?
No, both of those examples involve both design and selection, which is reminiscent of the AI art process. They're not just typing in "make me a pretty image" and then hitting refresh a lot.
The only explanation I've received so far sounded exactly like this, just with more steps to disguise the underlying process.
It isn't. People design a scene and then change and refine the prompt to add elements. Some part of it could be refreshing the same prompt, but that's just like a photographer taking multiple photos of a scene they've directed to catch the right flutter of hair or a dress or a creative director saying "give me three versions of X".
Ready to get back to my original questions?