54 points | submitted 1 week ago by tonytins@pawb.social to c/games@lemmy.world

Two weeks ago, a user asked on the official Lutris GitHub "is lutris slop now", noting an increasing amount of "LLM generated commits". The Lutris creator replied:

It's only slop if you don't know what you're doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn't able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it's deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.

I'm not a big fan of having to pay a monthly sub to Anthropic, I don't like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I'm not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this "issue" might come up so I've removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
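For context on what "co-authorship" means here: Git supports trailer lines at the end of a commit message, and GitHub renders a Co-authored-by trailer as a co-author credit on the commit. A commit created with Claude Code typically ends with a trailer along these lines (the change description is invented for illustration, and the exact default wording may vary by version):

```text
Fix runner detection on Wayland sessions

Co-Authored-By: Claude <noreply@anthropic.com>
```

Stripping that line from new commits is what makes it impossible to tell after the fact which commits the tool touched.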

top 50 comments
[-] nialv7@lemmy.world 12 points 1 week ago

You can criticise them, but ultimately they are an unpaid developer making their work freely available to the benefit of us all. At least don't harass the developer.

[-] TrickDacy@lemmy.world 3 points 1 week ago

You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

[-] Zos_Kia@jlai.lu 8 points 1 week ago

It's typical of dev burnout, though. Communication becomes more impulsive and less constructive, especially in the face of conflicting opinions.

I've seen it play out a few times already. A toxic community will take a dev who's already struggling, troll them, screenshot their problematic responses, and use those in a campaign across relevant places such as GitHub, Reddit, Lemmy... Maybe add a little light harassment on the side, as a treat. It's a fun activity! The dev spirals, posts increasingly unhinged responses, and often quits as a result.

The fact that the thread is titled "is lutris slop now" is a clear indication that the poster's intention wasn't to contribute anything constructive, but to attack the dev and put them on the back foot.

[-] TrickDacy@lemmy.world 6 points 1 week ago

I see your point. I might also have responded poorly to that, on some level at least.

[-] Zos_Kia@jlai.lu 5 points 1 week ago

Yeah, same. I'd like to think I'd answer: "I'll use AI; if you don't like it, you can fork the project, and I wish you good luck. Go share your opinion on AI in an appropriate place." But realistically there's a high chance it catches me on a bad day and I get stupid.

[-] aksdb@lemmy.world 1 point 1 week ago

Trolling? They gave a pretty good answer explaining their reasoning.

[-] TrickDacy@lemmy.world 1 point 1 week ago

I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

Seems pretty obvious to me that they knew this wouldn't go over well. It was inflammatory by design.

[-] aksdb@lemmy.world 2 points 1 week ago

Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.

[-] southsamurai@sh.itjust.works 12 points 1 week ago

Yeah, this is actually one of the good things a technology like this can do.

He's dead right about slop: if it's someone with training and experience using a tool, it doesn't matter whether that tool is vim or Claude. It ain't slop if it's built right.

[-] echodot@feddit.uk 5 points 1 week ago* (last edited 1 week ago)

It ain't slop if it's built right.

Yeah, but the problem is: is it? They absolutely insist that we use AI at work, which is not only an insane concept in and of itself, but if I have to nanny it to make sure it doesn't make a mistake, then how is it a useful product?

He says it helps him get work done he wouldn't otherwise do, but how is that possible? How is it possible that he's giving every line of code the same scrutiny he would if he wrote it himself, when he himself admits he would never have gotten around to writing that code had the AI not done it? The math ain't matching on this one.

[-] p03locke@lemmy.dbzer0.com 5 points 1 week ago* (last edited 1 week ago)

the problem is that if I have to nanny it to make sure it doesn’t make a mistake then how is it a useful product?

When was the last time you coded something perfectly? "If I have to nanny you to make sure you don't make a mistake, then how are you a useful employee?" See how that doesn't make sense? There's a reason why good development shops live on the backs of their code reviews and review practices.

The math ain’t matching on this one.

The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.

There's also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, and come back, and there's code there. I still have to review it, point out some mistakes, and then go back and refill my drink.

And there's so much you can customize with personal rules. Don't like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.

All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains on how to write code for that project, and the more experience you gain in communicating with an entity that can understand your ideas. You wouldn't believe how many people can't rubberduck and explain proper concepts to people, much less LLMs.

LLMs are patient. They don't give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would get tired of your demands after a while, and tell you to fuck off.

[-] southsamurai@sh.itjust.works 3 points 1 week ago

Well, I'm not a code monkey, between dyslexia and an aging brain. But if it's anything like the tiny bit of coding I used to be able to do (back in the days of BASIC and Pascal), you don't really have to pore over every single line. The only time that's needed is when something is broken. Otherwise, you're scanning to keep oversight, which is no different than reviewing a human's code that you didn't write.

Look at it like this: we automated assembly of machines a long time ago. It had flaws early on that required intense supervision. The only difference here, on a practical level, is how the damn things learned in the first place. Automating code generation is way more similar to that than to LLMs generating text or images, which aren't logical by nature.

If the code used to train the models was good, what it outputs will be no worse in scale than some high school kid in an AP class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it'll get that review even if the project maintainers slip up.

And being real, Lutris has been very smooth across the board while using the generated code so far. So if he gets lazy, it could go downhill; but that could happen if he got lazy with his own code, too.

Another concept that I'm more familiar with relates here: writing fiction can take months, while editing it usually takes days, and you can still miss stuff (my first book has typos and errors to this day because of the aforementioned dyslexia and my not having a copy editor).

My first project back in the eighties, in BASIC, took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.

Maybe I'm too far behind the various languages, but I really can't see it being a massively harder proposition to scan and edit the output of an LLM.

[-] Cyv_@lemmy.blahaj.zone 11 points 1 week ago* (last edited 1 week ago)

I mean, I get if you wanna use AI for that, it's your project, it's free, you're a volunteer, etc. I'm just not sure I like the idea that they're obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I'd still prefer transparency.

[-] stsquad@lemmy.ml 4 points 1 week ago

I expect that's because it wasn't a user, just a random passer-by throwing stones on their own personal crusade. The project only has two major contributors, who are now being harassed in the issues over the choices they make about how to run their project.

Someone might fork it and continue with purely artisanal, human-crafted code, but such forks tend to die off in the long run.

[-] tonytins@pawb.social 3 points 1 week ago

I tried fitting AI into my workloads just as an experiment and failed. It'll frequently reference APIs that don't even exist, or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.

[-] aloofPenguin@piefed.world 2 points 1 week ago

I had the same experience. I asked a local LLM about using some Qt Wayland stuff for keyboard input; the only documentation was the official docs (which weren't much for a noob), there were no examples of it being used online, and all my attempts at making it work failed. It hallucinated some functions that didn't exist, even when I let it do web search (NOT via my browser). This was a few years ago.

[-] p03locke@lemmy.dbzer0.com 2 points 1 week ago

This was a few years ago.

That's 50 years in LLM terms. You might as well have been banging two rocks together.

[-] Vlyn@lemmy.zip 2 points 1 week ago

You might genuinely be using it wrong.

At work we have a big push to use Claude, but as a tool, not a developer replacement. And it's working pretty damn well when properly set up.

Mostly using Claude Sonnet 4.6 with Claude Code. It's important to run /init and check the output; that will produce a CLAUDE.md file that describes your project (and always gets added to your context).

Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it's extremely rare for it to hallucinate something that doesn't exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (but just adding that info to the project context file is enough).
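For anyone who hasn't tried it: the CLAUDE.md file mentioned above is plain Markdown that gets prepended to the model's context on every session. A trimmed, hypothetical sketch — the project details here are invented for illustration, not taken from any real CLAUDE.md:

```markdown
# Project context

## Stack
- Python 3.12, PyGObject/GTK UI, SQLite for local state

## Conventions
- Prefer extending an existing helper over adding a parallel implementation
- Run the linter and test suite before proposing a commit

## Known pitfalls
- Library docs in the training data may be stale; check the pinned versions
  in pyproject.toml before suggesting API calls
```

The point of checking /init's output is that anything wrong in this file gets repeated into every future session.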

[-] p03locke@lemmy.dbzer0.com 3 points 1 week ago* (last edited 1 week ago)

Agreed, I don't understand people not even giving it a chance. They try it for five minutes, it doesn't do exactly what they want, they give up on it, and shout how shit it is.

Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.

It's like handing your 90-year-old grandpa the Internet, and they don't know what the fuck to do with it. It's so infuriating.

Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just "me needs problem solvey, go do fix thing!"

[-] Vlyn@lemmy.zip 2 points 1 week ago

It's not really that simple. Yes, it's a great tool when it works, but in the end it boils down to being a text prediction machine.

So a nice helper to throw shit at, but I trust the output as much as a random Stackoverflow reply with no votes :)

[-] Scrollone@feddit.it 2 points 1 week ago

Yeah, I mean, it's not like AI can think. It's just a glorified text predictor, the same as you have on your phone keyboard.

[-] XLE@piefed.social 2 points 1 week ago

Considering the amount of damage AI has done to well-funded projects like Windows and Amazon's services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

[-] Fizz@lemmy.nz 2 points 1 week ago

I'm the opposite. It's weird to me for someone to add an AI as a co-author. Submit it as normal.

[-] HotsauceHurricane@lemmy.world 11 points 1 week ago

Somehow hiding the code feels worse than using the code. This whole thing is yuck.

[-] Holytimes@sh.itjust.works 7 points 1 week ago

Well, when you have a massive problem of harassment, death threats, and fucking retarded shit stains screaming at every single dev that is even theorized to use AI, regardless of whether it's true or not.

I blame fucking no one for hiding the fact.

This is on the users, not the dev. The users are fucking animals and created this very problem.

Blaming the wrong people and attacking them is the yuck.

Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.

[-] Ephera@lemmy.ml 3 points 1 week ago

Yeah, management wants us to use AI at $DAYJOB, and one of the strategies we've considered for lessening its negative impact on productivity is to always put generated code into an entirely separate commit.

Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a design decision was made by the senior (then you should think twice about overriding it) or by the intern who knows none of the project context.

We haven't actually started doing these separate commits, because it's cumbersome in other ways. But deliberately obfuscating whether the randomizer was involved robs you of that information even more.

[-] darkangelazuarl@lemmy.world 7 points 1 week ago

If he's using it like an IDE and not vibe coding, then I don't have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brain when they use AI and couldn't even write this comment I just wrote without asking AI for assistance.

[-] Ephera@lemmy.ml 2 points 1 week ago

Yeah, that's my biggest worry. I always have to hold colleagues to the basics of programming standards as soon as they start using AI for a task, since it is easier to generate a second implementation of something we already have in the codebase than to extend the existing one.

But that was pretty much always true. We still didn't slap another implementation onto the side, because it's horrible for maintenance: you then need to adjust two (or more) implementations whenever requirements change. And it's horrible for debugging, because parts of the codebase will behave subtly differently from other parts. This also means usability is worse, as users expect consistency.

And the worst part is that they don't even have an answer to those concerns. They know it's going to bite us in the ass in the near future. They're on a sugar high, because adding features is quick, while looking away from the codebase getting incredibly fat just as quickly.

And when it comes to actually maintaining that generated code, they'll be the hardest to motivate, because it isn't as fun as slapping a feature onto the side, nor do they feel responsible for the code, since they don't really know how it works. Never mind that they're also less sharp in general, because they've outsourced thinking.

[-] SuspciousCarrot78@lemmy.world 6 points 1 week ago

If he'd just forgone that last paragraph...

[-] magikmw@piefed.social 6 points 1 week ago

Worth mentioning that the user who started the issue jumps around projects and creates inflammatory issues to the same effect. I'm not surprised Lutris' maintainer went off like they did; the issue was not made in good faith.

[-] Zos_Kia@jlai.lu 2 points 1 week ago

Yes, both threads are led by two accounts with probably fewer than 50 commits to their names over the last year, none of which are of any relevance to the subject they are discussing.

In a world where you could contribute your time to make things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it appears on the radar of these kinds of people. It's dangerous to catch their attention: once they have you, they'll coordinate over Reddit, Lemmy, GitHub, and Discord to ruin your reputation. The reputation of some guy who never did them any harm, apart from bringing them something they needed, for free, but in a way that doesn't 100% satisfy them. Pure vicious entitlement.

I'd sooner have a drink with a salesman from OpenAI than with one of them.

[-] QuandaleDingle@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Just, what kind of pleasure can one derive from harming these projects? It's so frigging weird, man.

[-] adeoxymus@lemmy.world 4 points 1 week ago

Tbh I agree. If the code is appropriate, why care if it's generated by an LLM?

[-] deadcade@lemmy.deadca.de 3 points 1 week ago

It's still made by the slop machine, the same one that could only be created by stealing every human-made artwork that's ever been published. (And this is not "just one company"; every LLM has this issue.)

Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

If the developer isn't able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

[-] bookmeat@fedinsfw.app 2 points 1 week ago

A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

[-] wirelesswire@lemmy.zip 2 points 1 week ago

Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

[-] Omega_Jimes@lemmy.ca 2 points 1 week ago

I don't support the use of AI tools in general, but I have a soft spot for long-term maintainers. These people generally don't have enough support for this to be a full-time hobby, and when a project becomes popular, the pressure is massive.

If the community won't step up to take the burden off the maintainer, but still wants active development, what can you do? As long as the program continues to be high quality, I can't complain about a free thing.

[-] super_user_do@feddit.it 2 points 1 week ago

I'm not against the usage of AI in general. The problem only comes up if the human literally relies on it; if you're using it for learning, quickly skimming documentation, or writing code critically, with years of normal programming experience behind you, that's fine. Bro has 30 years of development experience, so I guess he knows what good code looks like.

[-] Armok_the_bunny@lemmy.world 4 points 1 week ago

Even then, it feels dishonest to hide when such a historically unreliable tool is being used.

[-] Katana314@lemmy.world 2 points 1 week ago

To admit some context: my company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually I tell them it turns out a lot of garbage and once in a while makes a lengthy task easier.

I can believe him about there being a sweet spot, where it's not used for everything, only for processes that might have taken a night of manual checks. The very real, very reasonable backlash to it is over how easily a poor management team or overconfident engineer will fall away from that sweet spot and merge stuff that hasn't had enough scrutiny.

Even Bernie Sanders acknowledged on the Senate floor that in a perfect world, where AI is owned by people invested in the world's benefit, moderate AI use could improve many people's lives. It's just sad that in 99.9% of cases, we're nowhere near that perfect world.

I don't totally blame the dev for defending his use of AI backed by industry experience, if he's still careful about it. But I also don't blame people who don't trust it. It's kind of his call, and if the avoidance of AI is important enough to you, I'd say fork it. I think it's a small red flag, but not nearly enough of one for me to condemn the project.

[-] bold_omi@lemmy.today 2 points 1 week ago

AI is immeasurably shitty, both in terms of code quality and of morality. The fact that this developer is hiding his use of it from his community is despicable. I will never use Lutris again, nor will I allow PRs from this developer on any repos of mine. Fuck AI, and fuck strycore (deceitful bastard and Lutris "developer").

this post was submitted on 12 Mar 2026
54 points (98.2% liked)
