I really hope their next game smashes it too after this statement. It has the potential to pave the way for the rest of the industry.
If there was an AI that licensed every bit of art/code/etc. that it trained on, then I think I would be fine if they used it. BUT, I'd never think their final product was ever more than a mad-libs of other people's work, cobbled together for cheap commercial consumption. My time is worth something, and I'm not spending a minute of it on AI-generated crap when I could be spending it on the product of a true author, artist, coder, craftsman. They deserve my dollar, not the AI company and their AI-using middleman who produced shit with it.
I mean they mostly used it for textures, right? Those are often generated from a combination of noise and photography, it's not like they were building the game out of lego bricks of other people's art.
I don't see how it's significantly different than sampling in music, it's just some background detail to enhance the message of the art.
Obviously modern AI is a nightmare machine that cannot be justified, but I could imagine valid artistic uses for the hypothetical AI you described.
When you sample in music, you get the original artist's permission or you get fucking sued. If the AI used were trained on a licensed library catalogue, then sure. Media companies historically would buy sample licenses to use for their sound effects in movies, video games, etc., so AI could essentially just do that, but put the encyclopedia of samples in a blender of training to modulate that shit and make something somewhat "new". Original artists get royalties, users get something customized without having to hire sound engineers to make those adjustments, and consumers get good products.
Yeah, that's basically what I was trying to imagine. It's absolutely not what contemporary AI is, but it's closer to how I think the technology should be used.
I'm not surprised they ultimately felt like GenAI isn't useful to what they're trying to do. Game dev has known about this type of generation for a while now (see https://en.wikipedia.org/wiki/Model_synthesis and the sources linked at the bottom) and it takes a lot of human effort to curate both the training data and the model weights to end up with anything that feels new and meaningful.
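To make that concrete, the core loop of that kind of generator is small: pick the most-constrained cell, collapse it to one of its remaining options, propagate the adjacency constraints, repeat. Here's a toy 1D sketch; the tile names and adjacency table are made up for illustration, and real model synthesis runs over 2D/3D grids with rules mined from curated sample content:

```typescript
// Toy 1D sketch of the constraint propagation behind model synthesis / wave
// function collapse. Tile names and the adjacency table are invented for
// illustration; real pipelines derive them from curated sample content.

type Tile = "sea" | "shore" | "land";

// Which tiles may sit next to a given tile. The relation is symmetric here,
// so the same table works when propagating in either direction.
const adjacent: Record<Tile, Tile[]> = {
  sea: ["sea", "shore"],
  shore: ["sea", "shore", "land"],
  land: ["shore", "land"],
};

// Narrow every cell's options until neighbouring cells stop contradicting
// each other (arc consistency along the 1D chain).
function propagate(options: Tile[][]): void {
  let changed = true;
  while (changed) {
    changed = false;
    for (let i = 0; i < options.length - 1; i++) {
      for (const [a, b] of [[i, i + 1], [i + 1, i]]) {
        const allowed = new Set(options[a].flatMap((t) => adjacent[t]));
        const narrowed = options[b].filter((t) => allowed.has(t));
        if (narrowed.length !== options[b].length) {
          options[b] = narrowed;
          changed = true;
        }
      }
    }
  }
}

function generateRow(length: number): Tile[] {
  const tiles = Object.keys(adjacent) as Tile[];
  // Every cell starts uncollapsed: all tiles are still possible.
  const options: Tile[][] = Array.from({ length }, () => [...tiles]);

  while (true) {
    // Pick the uncollapsed cell with the fewest remaining options ("lowest entropy").
    let target = -1;
    for (let i = 0; i < length; i++) {
      if (options[i].length > 1 &&
          (target === -1 || options[i].length < options[target].length)) {
        target = i;
      }
    }
    if (target === -1) break; // every cell is collapsed

    // Collapse it to one random surviving option, then re-propagate constraints.
    const pick = options[target][Math.floor(Math.random() * options[target].length)];
    options[target] = [pick];
    propagate(options);
  }
  return options.map((o) => o[0]);
}

console.log(generateRow(12).join(" "));
```

All of the curation effort lives in that adjacency table and the sample content it's derived from; the loop itself is just bookkeeping.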
If I shuffle a deck of 52 cards, there is a high chance of obtaining a deck order that has never occurred before in human history. Big whoop. GenAI is closer to sexy dice anyways - the "intelligent work" was making sure the dice faces always make sense when put together and you don't end up rolling "blow suck" or "lips thigh".
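Quick back-of-the-envelope on that, since the number is fun: 52! is a 68-digit number, which dwarfs any plausible count of shuffles ever performed.

```typescript
// Count the orderings of a 52-card deck with BigInt arithmetic.
let orderings = 1n;
for (let k = 2n; k <= 52n; k++) orderings *= k;
console.log(orderings.toString().length); // 68 -- so 52! is a 68-digit number (~8.07e67)
```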
It's very impressive that we're able to scale this type of apparatus up to plausibly generate meaningful paragraphs, conversations, and programs. It's ridiculous what it cost us to get it this far, and just like sexy dice and card shuffling I fail to see it as capable of replacing human thought or ingenuity, let alone expressing what's "in my head". Not until we can bolt it onto a body that can feel pain and hunger and joy, and we can already make human babies much more efficiently than what it takes to train an LLM from scratch (and they have personhood and thus rights in most societies around the world).
Even the people worried about "AI self-improving" to the point it "escapes our control" don't seem to be able to demonstrate that today's AI can do much more than slow us down in the long run; this study was published over 5 months ago, and they don't seem to have found much since then.
Just to point out, LLMs are genAI. Lots of code editors provide code suggestions, similar to autocorrect/text suggestions, using AI. Strictly speaking, I doubt any game is made without AI. Not to say it can't be deliberately avoided, but given the lack of opposition to GPT and LLMs, I don't see it being considered for avoidance in the same way as art.
So awards with constraints on "any AI usage in development" probably disqualify most modern games.
Code analysis and suggestion tools in many professional IDEs are not powered by LLMs. In the IDEs I use there's an available LLM plugin that I've disabled (and never paid for, so it did nothing anyway). LLMs are simply too slow for the kind of code completion and recommendation algorithms IDEs use, so using those isn't "using genAI".
Uh... sorry, but no, LLMs are definitely fast enough. It works just like autocomplete, except sometimes it's a genius that pulls the next few lines you were about to write out of the ether, and sometimes it makes up a library to do something you never asked for.
Mostly it works about as well as code completion software, but it'll name variables much better.
I believe you that genAI is being used for code suggestions, but I wouldn’t call it a genius.
This is anecdotal, but over the last couple of years I've noticed Visual Studio's autocomplete went from suggesting exactly what I wanted more often than not to just giving me hot garbage today. Like even when I'm 3-4 lines into a very obvious repeating pattern, it'll suggest some nonsense variation on it that's completely useless. Or it'll just straight up make up variable and function names that don't exist, or suggest inputs to function calls that don't match the signature. Really basic stuff that any kind of rules-based system should have no problem with.
I wouldn't call it a genius either, it's just all over the place. Sometimes it's scary good and predicts your next move, most of the time it's just okay, sometimes it's annoyingly bad
My last job was on a fairly large TypeScript codebase (a few hundred kLOC) which we started some time before LLMs were a thing. While we weren't into academic engineering patterns and buzzwords, we were very particular about maintaining consistent patterns across the codebase. The output of Copilot, even with the early models which were far from today's standards, was often scarily accurate. It was far from genius, but I'm still chasing that high to this day; to me it really indicated that we had made this codebase readable and actionable even by a new hire.

"Ai" does not exist.
There is no such thing. It does not exist at present. Full stop.
Genetic algorithms and complex algorithms are neat. They're useful tools. They're not "ai", they're algorithms. It's fine.
Large X models and diffusion models are bullshit machines. They were invented as bullshit machines. They are scams. Their evangelists are scammers. They do not 'generate' except in the way a spark-gap jammer 'generates'. They're noise machines.
You do not know what the fuck you're talking about. You are a cultist. Go back to sexting with ELIZA.
Autocomplete in IDEs uses LSP. It's a rite of passage to manually set up your own if you use vim.
They're mostly just a local server that indexes your code plus a client in the editor, talking JSON-RPC, often written in C++ or in the language they serve.
Highly recommend coding your own, ngl, it's a really fun project.
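If you do want to poke at it, the wire format is just JSON-RPC with a Content-Length header. Here's a rough sketch (not a working client) of what a completion request looks like; it skips the initialize/didOpen handshake a real client has to do first, and the server command and file path are placeholders:

```typescript
// Rough sketch of the editor side of LSP completion. Not a working client:
// a real one first does the initialize/initialized handshake and sends
// textDocument/didOpen before asking for completions. The server command
// and file path below are placeholders.
import { spawn } from "node:child_process";

// LSP messages are JSON-RPC 2.0 bodies preceded by a Content-Length header.
function frame(message: object): string {
  const body = JSON.stringify(message);
  return `Content-Length: ${Buffer.byteLength(body, "utf8")}\r\n\r\n${body}`;
}

const server = spawn("typescript-language-server", ["--stdio"]); // placeholder server

const completionRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/completion",
  params: {
    textDocument: { uri: "file:///tmp/example.ts" }, // hypothetical file
    position: { line: 10, character: 4 },            // zero-based line/column
  },
};

server.stdout.on("data", (chunk) => process.stdout.write(chunk));
server.stdin.write(frame(completionRequest));
```

The reply is just a list of completion items (label, kind, detail, optional text edits), which is all the editor needs to draw the popup.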
VSCode autocomplete is literal AI now, they don’t have intellisense
Others will follow, making “no AI” software really difficult to prove
You can literally customize the vscode intellisense right now...
Right now, but we can all see where it’s heading
Then you need to make sure 200 devs disable all AI features or get disqualified from awards with “no AI” clauses