Sandfall Interactive used genAI to generate "placeholder" assets that were still present in the release version of the game and were only removed within a week of release. Sandfall denied using AI when applying for the Indie Game Awards, which has since rescinded the award and is giving it to Blue Prince, a game many players are saying also contains genAI.
With that out of the way: fuck genAI, and NO, Blue Prince does not use genAI. I've completed the game and I do think they deserve GotY for the quality of story and gameplay they delivered, although my hatred for the fanbase just keeps on increasing.
The Indie Game Awards explicitly state that they hate genAI and require participants to disclose beforehand whether any genAI was used in the development of a game, which Sandfall Interactive, the developers of CO:E33, lied about/omitted. Ever since the GotY nominations, the playerbase has had this victim mentality that everyone hates their game, everyone is out to get them, and people are making false accusations against it.
The subreddit has become an echo chamber (like most subreddits), and when the Indie Game Awards announced they were rescinding the award, many people crawled out from under their rocks to offer justifications: "well, every game uses AI, they just don't disclose it", "it was just for internal development and they removed it before launch, so it doesn't have AI anymore, so it's totally fine" (they didn't remove it before launch), "well, autocomplete is also AI, so disqualify others too", "who cares about some no-name awards, they only want clickbait", "it wasn't AI slop in the end, so it's fine", "they gave the award to Blue Prince, which uses 90% AI", and plenty of other bullshit.
First of all, Blue Prince doesn't use AI. This whole rumour was propagated by a single fkin article from TheEscapist, which cited nonsense without any actual sources and, after receiving backlash, has now issued an apology. But you will not hear about this from the E33 fanbase, who'll continue saying Blue Prince uses AI.
My issue is not only with Sandfall using genAI, but also with them not being transparent about it; they only explained it further once they were found out. Not only that, they gave a whole speech at TGA about creativity, and using genAI is literally spitting in the face of creativity.
They said they replaced the AI "placeholders", so we don't really know the true extent of how much genAI was used in development, which is also pretty bad in my opinion. Many people also try to defend the use with "AI is fine because the final product isn't AI slop", but these people are missing the entire point: it was never about the final product. Same as with AI "art": we don't hate AI "art" because it looks weird, we hate it because it's unethical and trained on stolen work.
It's just hypocrisy. If it had been some studio like Ubisoft or EA that did this, these people would be marching with pitchforks in hand, but since it's their beloved game, "it's alright because the game is actually good". And then there are other statements like "E33 is so far above other games that they had to disqualify it so others can have a chance at winning".
Programmer art, formulaic procgen, free assets with permissive licenses, or just going without assets for a given thing also replace that labor. I genuinely do not believe there can be a standard where small artists working on a small and precarious budget must hire additional artists as some kind of make-work program. Once you reach a corporate level, where there are executives and shareholders making bank, teams dedicated to supporting this or that, and middle managers, then the whole outfit should have to be unionized at the very least, and then there is an obligation to hire artists as a make-work program: the outfit is already bleeding money on useless things like shareholders and marketers and executives and middle managers, so it could at least cut some more productive workers in on the deal too.
Like, a lot of indie games are basically art projects some programmer is doing on the side while working a day job; that's not even at the small-business scale of things. If they actually have revenue and funding, then they probably have money for artists and should have one who can do the important assets, but there's still no obligation to keep scaling up to provide the sort of space-filling noise that generative AI can make sort-of-ok-ish.
And not for nothing, but you know how the Luddites were like "damn, these mechanical looms would be really cool if we, the skilled workers, owned them ourselves and could use them to benefit us, but instead it's some rich bastard owning them and fucking us all over", and then did industrial sabotage against the capitalist-owned machinery instead of against machinery in general? Labor-saving and labor-replacing capital is bad in the hands of capitalists, because everything is bad in the hands of capitalists. "This machine replaces additional workers" is perfectly fine in the hands of the workers themselves, and it can't be considered tainted simply because capitalists use similar machines for harmful purposes.
I think there are plenty of indie devs who wouldn't be caught dead using AI assets.
Honestly, if you're doing the artisan thing, you're the last one who can afford to be caught cutting corners like that. That's why we're having this genAI drama at the Indie Game Awards rather than at other award shows.
Which is ridiculous, and that's my point. Like, people are looking at the shitty chatbot bullshit that huge corporations are doing and conflating any and every use of generative AI with that, and because they can't lash out at the people who are actually doing bad things (huge corporations liquidating workers and enshittifying as hard as they can), they lash out at hobbyists and small devs instead.
And that's fucked up. It's turned into this weird moral-purity frenzy where it's just become accepted that evil corporations will do as they will, and so it's the isolated individual who must be chastised and punished for the impurity in their place.
Which is why I've got to stress that open-source, locally run software and models really are fine. I mean, their output is still completely useless for almost all purposes because it's too random and uncontrollable, but there are ways to make it more controlled, to the point that it might be suitable for cleaning up a sketch or making what is basically just visual noise to fill space. Because that's fundamentally what it is: a tool that can either transform existing information in a sort-of-ok fashion or just make vaguely controllable empty noise.
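To make the "cleaning up a sketch" case concrete: here's a minimal sketch of the kind of controlled, local use I mean, assuming the Hugging Face diffusers library and an SDXL-class checkpoint running entirely on your own machine. The checkpoint ID, file names, prompt, and strength value are purely illustrative, not anything any particular dev actually used.

```python
# Minimal local img2img example: feed in your own rough drawing and let a
# locally run SDXL-class model tighten it up. Nothing leaves your machine.
# Assumes: pip install torch diffusers transformers accelerate pillow
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Illustrative checkpoint; any locally downloaded SDXL-class weights work.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Your own rough sketch is the input; the model only transforms it.
init_image = load_image("my_rough_sketch.png").resize((1024, 1024))

result = pipe(
    prompt="clean ink line art of the same scene, flat colours",
    image=init_image,
    strength=0.35,        # low strength = stay close to the original sketch
    guidance_scale=6.0,
    num_inference_steps=30,
).images[0]

result.save("cleaned_sketch.png")
```

The `strength` knob is the whole point here: it's what makes this a bounded transformation of work you already did rather than a gacha pull, which is exactly the kind of control you don't get from a cloud prompt box.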
The real problem is the chatbots and the absolute trainwreck that is anything they touch. Like "vibe code", which I've had the misfortune of seeing in practice, and it's fucking awful: just pure nightmare nonsense spinning out something that somehow executes but is just as inscrutable as the abomination that made it.
Drawing the moral line between local and cloud is nonsensical. There might be practical reasons for it if you think about genAI as the tool for industrial espionage that it is, but as a moral line it makes no sense.
With a proprietary corporate model you're paying the slop factory for gacha pulls based on vibes. With a local open-source model you're not paying a huge corporation for the privilege of playing their shitty slot machine, and you at least have access to a toolkit that could be used in a more elaborate and targeted manner as part of a creative process that might yield an end product that's sort of ok, sometimes, and for some purposes.
It's like the difference between using a 3D printer and going out and buying capsules from a literal gacha machine: both eschew the traditional process of creating such an object, but one is much more involved and controlled, and the other is literally paying money to gamble on buying something from a corporation.
This assumes a level of buy-in to the nonsense that I'm not prepared to entertain.
Depends on the goals: if you're looking for cheap figurines, you're probably better served by the gacha machine. I don't think the analogy works.
How do you not see a distinction between paying Sam Altman so his world-eating data centers can spew random bullshit at you, and operating a local toolkit that can do the same but can also be more tightly controlled and refined as, you know, an actual tool?
Genuinely, do you know even the first thing about these machines? Like, I hate the hobbyist AI community and think a good 90% of them should be redacted for one thing or another, but the current lineup of tools is both fascinating and horrifying in terms of what it can churn out (like Z Image, the latest open-source image model out of China, which is roughly on par with the heavier models that corporations are selling gens on, except it runs smoother than SDXL and fits into a sub-16 GB VRAM footprint even on AMD cards; it's also significantly more controllable from prompting alone and doesn't make the internal consistency errors that other models do, at least not as egregiously, and that's horrifying, for all that it's still deeply flawed). It's still mostly useless when it comes to producing things from random noise, but not every single asset actually needs to be hundreds or thousands of dollars worth of labor when it's just there to fill space or to convey the sorts of things you can actually control into it.
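And on the VRAM-footprint point, for what it's worth: fitting a big local model into a fixed memory budget is mostly just half-precision weights plus offloading idle submodules to system RAM, nothing exotic. I can't speak to Z Image's tooling specifically, so the sketch below uses a generic diffusers pipeline with an SDXL checkpoint as a stand-in; the model ID and prompt are purely illustrative.

```python
# Rough sketch of running a local text-to-image model under a tight VRAM
# budget: fp16 weights plus CPU offload of whatever isn't currently working.
# Assumes: pip install torch diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# Stand-in checkpoint; swap in whatever weights you actually run locally.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)

# Only the submodule currently doing work sits on the GPU; everything else
# is parked in system RAM, which is what keeps peak VRAM under the budget.
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="mossy stone corridor, soft overcast light, background filler",
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]

image.save("filler_background.png")
```

On AMD cards the same code path should work under a ROCm build of PyTorch, though I haven't verified that particular setup myself.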