this post was submitted on 22 Aug 2025
I really don't see what new points your comment adds that we haven't already discussed. I've repeatedly pointed out that what I find LLMs effective at is surface-level things, or building out isolated components that can be reasoned about and tested in isolation. I don't see how the points you raise apply here.
I think I was pretty clear what I meant. I was talking about coupling via shared mutable state. This is the aspect that I find makes code difficult to reason about.
You're talking about implementation details here. I'm talking about the semantics of how you use and reason about these components. Once again, what you really care about is scope: what data is passed to a component, and who owns that data.
Hooks are obviously not pure, but they don't inherently prevent reasoning about your components or testing them in isolation.
Frankly, I'm not even sure what your criticisms are specifically. You've made some assertions about dependency injection, memoization, and so on, and claimed that this is somehow problematic when you have LLMs generate code. I've personally not run into these issues, so this argument isn't really making a lot of sense to me.
The type of side effects we're talking about matters. The problematic kind are the ones that result in coupling via implicit shared mutable state as a result of passing and modifying references to shared data.
What I'm saying is that avoiding shared mutable state is a prerequisite for modularity. You certainly can create coupling in other ways, but if you have shared state then you're inherently coupled from the start.
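To make that concrete, here's a toy sketch (plain JS, names invented for illustration) of how a mutated shared reference couples two otherwise independent pieces:

```javascript
// Two factories handed the same mutable object become coupled:
// a change made through one is observable through the other.
const shared = { retries: 3 };

function makeUploader(cfg) {
  return { bump: () => { cfg.retries += 1; }, retries: () => cfg.retries };
}

function makeDownloader(cfg) {
  return { retries: () => cfg.retries };
}

const up = makeUploader(shared);
const down = makeDownloader(shared);
up.bump();
console.log(down.retries()); // 4 — changed without anything touching it directly

// Decoupled: hand out a snapshot, so ownership stays with the caller.
const downIsolated = makeDownloader({ ...shared });
up.bump();
console.log(downIsolated.retries()); // still 4, unaffected by the later mutation
```

The second version is what I mean by components you can reason about in isolation: each one owns the data it's given.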
What you said was
You didn't say which tools you used, when you used them, or how much time you've actually invested in learning to become productive with them. Did you use qwen-code, Continue, Roo-Code, etc.? Which models were you using with them? How did you decide which parts of the project to apply them to, how did you limit scope, and what structure did you write up front to provide them with?
Of course I do, and I don't find the process anything like working with a junior dev, which I have done extensively, I might add. In my experience it's closer to using a very clever autocomplete.
Incidentally, writing tests is an excellent use case for LLMs because tests tend to consist of isolated functions that each check a specific thing. It's easy to see whether a test is doing what's intended, and an LLM can crap out a lot of boilerplate quickly to cover edge cases that are tedious to write by hand.
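As a sketch of the kind of thing I mean (the function is an arbitrary stand-in), the tedious part is enumerating the edge-case table, which an LLM does quickly and which is trivial to verify by reading:

```javascript
// A small pure function — an isolated unit whose tests are easy to audit.
function clamp(n, lo, hi) {
  return Math.min(Math.max(n, lo), hi);
}

// Table-driven edge cases: cheap for an LLM to generate, easy to eyeball.
const cases = [
  { args: [5, 0, 10], want: 5 },   // in range
  { args: [-1, 0, 10], want: 0 },  // below
  { args: [11, 0, 10], want: 10 }, // above
  { args: [0, 0, 0], want: 0 },    // degenerate range
];
for (const { args, want } of cases) {
  console.assert(clamp(...args) === want, `clamp(${args}) !== ${want}`);
}
```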
What I'm telling you is that I use LLMs on a daily basis to implement functions, UI components, API endpoints, HTTP client calls, and so on. I'm not experiencing the problems you insist I should be experiencing in terms of code maintainability, testing, or documentation.
It took me a few months to develop intuition for where LLMs are likely to produce code that's useful, and where they're likely to fail. It also took me a bit of time to figure out how to limit scope and provide enough scaffolding to ensure that I get good results. Having invested the time to learn to use the tool effectively, I very much see the benefits, as I'm able to work with JS effectively and ship stuff despite being very new to the ecosystem.
I also find LLMs are great for digging through existing codebases, and finding parts that you want to change when you add features. This has always been a huge pain when starting on large projects, and these things drastically reduce ramp up time for me.
You're telling me that your experience is wildly different from mine and you don't find these tools save any time, hence my impression that you might not have spent the time to actually learn to use them effectively. If you come in with a mindset that the tool is not useful, fiddle around with it, and get bad results, that simply confirms your existing bias, and you move on.
I think you should read it again and rethink this response.
You've said quite a few things, many of which make no technical sense, mostly in response to my critiques of statements like this. I reminded you that you still need to do the harder parts: maintenance, design, reviewing those modules, etc. I repeatedly noted security as a topic where care must be taken, where you must be hands-on, and where you should not rely on black-box thinking for anything important. I noted that common design patterns regarding state inevitably mean these components will not be things you can treat like black boxes; you will need to maintain them. I noted that others reading your code will need semantically named content to understand it easily, and that writing simple components is much less important (and less of a time sink) than making a coherent and intentional design. I critiqued the idea of these LLMs producing idiomatic code, a claim you introduced, and you got confused about the topic, treating my challenge about newer idioms as a suggestion to follow fads, as if it were silly for these LLMs to... produce idiomatic code. I explicitly noted inconsistencies like this.
You largely ignored these responses or seemingly misunderstood them, responding in ways that made no technical sense. I attempted to clarify, giving you many opportunities to recognize where we agree or reframe your responses.
Then you went meta and suggested this discussion was pointless. Now we are here, with you insisting you've responded to all of my germane points (you absolutely have not) and then repeating your original position for no reason.
If you'd like we can revisit all of them. Every point misunderstood, every technical error, every point ignored.
No, you were not clear in what you meant, you were actually incorrect in your statements re: purity regarding functional programming. It is difficult to be less clear than stating something clearly technically false on a technical matter. Your claim simply does not apply to your described codebase. I have offered a few interpretations of what you might be meaning to say. Are any of my interpretations correct?
If the nature of calls and design in a React application are just implementation details then why are you talking about them so much? OOP vs FP? Purity of functions? Definitions vs. instantiations? As a reminder this all stems from me simply pointing out that LLMs will produce fairly boilerplate outputs that won't account for important design questions - like dependency injection. And you can't just ignore and separate this if you actually use those design decisions.
You're saying semantically incorrect things. I am attempting to present correction in a way that can be received, as you seem to not know what these terms mean or how React works, otherwise you wouldn't make such frequent mistakes. These mistakes should not be a big deal but the response of trying to justify or ignore them is not easy to work with.
JS has no concept of data ownership or when variables go in and out of scope (except for the GC). I have no idea what you're talking about. These are not correct or meaningful statements, though I'm sure they mean something to you.
They prevent your components from being pure... You are objectively wrong in how you described this strategy. Was my attempt to fix it and re-describe what you might be talking about correct? I have no idea; you presented only something in the form of a counterargument that didn't directly address what I said.
This is absurd.
Yes, what did I say about those things? What was my meaning? I was fairly explicit when I introduced those terms. That you have forgotten or did not try to understand in the first place is not a counterpoint.
If you will recall, I brought up dependency injection as something that LLMs will not produce by default, as it is a design question (aside from existing patterns in boilerplate). This was just an example of the kind of design question that often touches fairly deeply on the application and how it has been modularized and generalized. There are many abstractions like this that would not jibe with black-box codegen.
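A minimal sketch of what I mean, with invented names: codegen's default is the first shape, and nothing in a prompt like "fetch a user" tells it to choose the second.

```javascript
// Stand-in for a concrete HTTP client.
const realHttpGet = (url) => ({ url, source: 'network' });

// Default codegen shape: the dependency is baked in.
function fetchUserHardcoded(id) {
  return realHttpGet(`/users/${id}`); // concrete client, hard to swap or test
}

// Injected shape: the caller owns the dependency, so a fake slots in.
function makeUserFetcher(httpGet) {
  return (id) => httpGet(`/users/${id}`);
}

const fakeGet = (url) => ({ url, source: 'stub' });
const fetchUser = makeUserFetcher(fakeGet);
console.log(fetchUser(42).url); // "/users/42", without touching any network
```

Choosing the injected shape is an application-level decision about modularization, which is exactly the part that isn't visible from the component in isolation.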
If you will recall, I brought up memoization with regard to your claim that LLMs produce idiomatic code. Choosing when to memoize and which variables to memoize is a reasoning problem, and not one that LLMs do a very good job of, as they primarily reproduce patterns. I'm sure they do introduce memoization sometimes, but you'll still need to think about this yourself.
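A toy sketch of why it's a reasoning problem and not a pattern (names invented): the mechanics are trivial, but whether memoizing is correct depends on purity, and whether it's worthwhile depends on the workload, neither of which is visible in the surrounding code.

```javascript
// The memoization mechanics themselves are boilerplate...
function memoize(fn) {
  const cache = new Map();
  return (x) => {
    if (!cache.has(x)) cache.set(x, fn(x));
    return cache.get(x);
  };
}

let calls = 0;
const slowSquare = (n) => { calls += 1; return n * n; };
const fastSquare = memoize(slowSquare);
fastSquare(4);
fastSquare(4);
console.log(calls); // 1 — the second call hit the cache

// ...but applying it to an impure function silently freezes stale results.
let rate = 2;
const price = memoize((n) => n * rate);
price(10);              // 20, now cached
rate = 3;
console.log(price(10)); // still 20 — wrong if the new rate should apply
```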
Notice that these are two fairly separate critiques. I presented them separately and with distinct rationales.
You not running into them, to me, just suggests that you don't use these kinds of things at all because your projects are very simple and not much design goes into them. They may not need more design than they have, so I am not criticizing that. But "I haven't run into these problems" isn't meaningful without acknowledging what those problems are, how they might cause issues, and asking whether they would have presented themselves in any of your projects. And both are just examples of the type of challenge they represent for codegen.
As I explained, they are all problematic, just some are often outside our control. If your API calls are to data with internal consistency or references, for example, you may run into conflicting state information about the same server-side state because separate components are responsible for fetching that state independently, not in sync. Library developers for fetching web data often use complicated caching strategies for this reason. In the case of web data you might not even have control over when that state mutates!
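A toy sketch of that failure mode (the "server" is just a mutable variable standing in for remote state):

```javascript
// Remote state that can mutate between fetches.
let serverVersion = 1;
const fetchStatus = () => ({ version: serverVersion }); // stand-in for a network call

const a = fetchStatus();   // component A fetches
serverVersion = 2;         // server-side state changes underneath us
const b = fetchStatus();   // component B fetches the "same" state independently
console.log(a.version === b.version); // false — two conflicting views of one state

// Minimal shared cache in the spirit of data-fetching libraries:
const cache = new Map();
function cachedFetch(key, fetcher) {
  if (!cache.has(key)) cache.set(key, fetcher());
  return cache.get(key);
}
const c = cachedFetch('/status', fetchStatus);
const d = cachedFetch('/status', fetchStatus);
console.log(c === d); // true — both callers see one consistent snapshot
```

Real libraries then have to decide when to invalidate that cache, which is exactly the part you can't control for web data.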
And you'd be wrong about that. Every React app that drills down into components with callbacks or uses contexts will use and potentially modify shared state when instantiated but can be developed as a modular component. They are essentially using dependency injection, a lucky coincidence.
The programmatic nature of these components as having side effects is the same as for components fetching state from the web. The difference is in what you have control over.
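A plain-JS analogue of the callback-drilling case (invented names; no React needed to show the semantics): the child is modular and testable on its own, yet calling its injected callback mutates state the child never sees.

```javascript
// Parent-owned shared state, exposed only through callbacks.
function makeCounterStore() {
  let count = 0;
  return {
    increment: () => { count += 1; },
    read: () => count,
  };
}

// "Child component": receives the callback like an injected dependency.
function clickButton(onClick) {
  onClick();
}

const store = makeCounterStore();
clickButton(store.increment); // drilled-down callback
clickButton(store.increment);
console.log(store.read()); // 2 — the child modified state it never held directly
```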
You didn't ask for any of that and for infosec reasons I wouldn't describe my dev setup anyways. You seem to be inventing reasons to be dismissive ad hoc, as if I should have read your mind before you deigned to listen to my clearly uneducated opinions, right?
Either you're doing a full review of the code or you aren't. If you are, it is like reading anyone else's code, and with the propensity for being anywhere from a bit to very wrong for one's stated needs, and for matching documentation/Stack Overflow a little too closely, quite similar to a junior dev. If you aren't, this isn't a serious production system and you're black-boxing it.
Tests are maybe the worst application of LLMs. Tests are where you, the designer, get to specify how your application is supposed to behave. If your tests are that tedious, I suspect your tooling is wrong or your architecture is wrong.
Not a single one of these 4 paragraphs points out an error in my thinking re: LLMs.