Chana@hexbear.net 1 points 11 hours ago

I think we're going in circles because you've simply reiterated the points I've already addressed, and I don't see the point of me restating them so that you can restate yours again.

Have my points been addressed? I don't think so, on average. This is why I explained and clarified: to allow you to respond in a different way, as some of the responses miss the mark by a mile. I assumed some of that was just a communication issue.

But we can keep going if you feel this is productive.

So far I don't think it's been particularly productive, but I always give the benefit of the doubt.

I've done the equivalent of dependency injection with FP, which consists of passing a function to another.

Technically dependency injection just means providing control of internal behaviors to the caller, so a generous interpretation could even include non-callable parameters or other things that aren't directly functions. I just used this example because I think it is the most intuitive.
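
As a minimal sketch of the function-passing flavor (the names and endpoint here are made up for illustration):

```ts
// A report builder that takes its data source as a parameter instead of
// hard-coding it - the FP equivalent of dependency injection.
type User = { id: number; name: string };

async function buildReport(fetchUsers: () => Promise<User[]>): Promise<string> {
  const users = await fetchUsers();
  return users.map((u) => `${u.id}: ${u.name}`).join("\n");
}

// The caller injects a real HTTP call in production...
buildReport(() => fetch("/api/users").then((r) => r.json()));

// ...and a plain stub in tests, no mocking framework required.
buildReport(async () => [{ id: 1, name: "Ada" }]);
```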

The point I was making there is that you can keep the vast majority of your code pure with FP. This means you can test functions individually and you only have to consider their signature going forward. Any context the function needs is passed in explicitly. In this way you're able to push state to the edges of the application. This is far more difficult to do with OO because each object is effectively a state machine, and your application is a graph of these interdependent state machines.

To be clear, every single one of the React components you let an LLM write for you is actually a class for which an instance is later generated, complete with opportunities for side effects. The JSX/TSX is translated into this by your build system. Reducing or removing side effects does indeed have many advantages and this is easier to do with FP (though purity in a browser context is often thwarted), but I'm not sure why you are making this point. Is this in response to something I said?

The reason I brought up FP in the context of LLMs is due to the fact that this model fits very well with creating small components that can be reasoned about in isolation and composed to build bigger things. This way the developer can focus on the high level logic and composition, while letting the LLM fill in the details.

These things are not actually related. You can have tons of side effects even when you make separate components, it all depends on how they are implemented. Modularity and side effects are orthogonal things.

For example, you mention that the React components fetch and display JSON. This behavior is actually a stateful side effect. To tie back to semantic naming, the React hook libraries that people usually use for this task all use the useEffect hook under the hood - "Effect" as in side effect. This is why if you test those components you need to deeply mock those side effects, e.g. using a testing-friendly library for web API tasks.
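
Roughly the kind of component I mean, as a sketch (the endpoint and data shape are invented):

```tsx
import { useEffect, useState } from "react";

// Not a pure function: rendering it kicks off a network request (a side
// effect), and what it shows depends on when and whether that request
// resolves. Same input does not give same output.
function TodoList() {
  const [todos, setTodos] = useState<string[]>([]);

  useEffect(() => {
    fetch("/api/todos")
      .then((r) => r.json())
      .then(setTodos);
  }, []);

  return (
    <ul>
      {todos.map((t) => (
        <li key={t}>{t}</li>
      ))}
    </ul>
  );
}
```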

Leaving components up to LLMs does not obviate the need to work around the issues I've previously mentioned, including maintenance work.

I've asked you earlier if you've recently used these tools seriously for any period of time, if you have not then I don't see much point having an argument regarding your assumptions of how these tools work.

I don't need to assume how they work, I understand how LLMs function from the ground up and, as I said before, have used and rejected them as a waste of time re: coding. They automate the easy stuff, doing a worse job than repeatable codegen tools, and I still need to do the harder things while checking their work.

It is like starting a fresh project with complete newbie junior devs. Until they are trained and understand the domain, they will be a drain on the project. It takes months to years for them to be more productive than not. This is fine, it's how training works in general, but shackling myself to this on purpose without any downstream training benefit is a waste of my time.

yogthos@lemmygrad.ml 1 points 11 hours ago

Have my points been addressed? I don’t think so, on average. This is why I explained and clarified: to allow you to respond in a different way, as some of the responses miss the mark by a mile. I assumed some of that was just a communication issue.

Alright, which specific points do you want me to address here that you feel I haven't?

To be clear, every single one of the React components you let an LLM write for you is actually a class for which an instance is later generated, complete with opportunities for side effects.

It doesn't matter one bit in the context of what I'm saying. My point, once again, is that if I can pass the context to the component when I call it, then I can reason about it in isolation. If I make a React widget and I test that it's doing what I need it to, then I can use it anywhere I want from that point on by passing it the data it will render.

The reason I'm making this point, as I've repeatedly explained already, is that I can have the LLM make a component, test that it does what I need, and use it. If I need something new, I can make a new component. The part that matters to me is that it does the correct thing with the data provided.

These things are not actually related. You can have tons of side effects even when you make separate components, it all depends on how they are implemented. Modularity and side effects are orthogonal things.

Modularity and side effects are absolutely not orthogonal. If you have side effects, as in functions that take references to shared state as input and manipulate them, then you can't have modularity because you can't reason about individual pieces of code in isolation. You have to consider all the code that has references to the state you're modifying. If we're talking about side effects such as rendering data to screen or doing IO, these aren't problematic in the same way. Even in Js land you can mitigate the problem easily by using one of many immutable data structure libraries.
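
A rough sketch of the distinction I'm drawing (contrived names):

```ts
// Coupled via shared mutable state: anyone holding a reference to `cart`
// has to be considered when reasoning about this function.
const cart: { items: string[] } = { items: [] };
function addItemMutating(item: string): void {
  cart.items.push(item);
}

// Pure alternative: the caller passes the state in and gets a new value
// back, so the function can be understood and tested in isolation.
function addItem(items: readonly string[], item: string): string[] {
  return [...items, item];
}

const next = addItem(["apple"], "pear"); // ["apple", "pear"], input untouched
```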

Leaving components up to LLMs does not obviate the need to work around the issues I’ve previously mentioned, including maintenance work.

I never said that leaving components to LLMs obviates the need to do maintenance work. However, my experience is that it's very cheap now to just make a new component when you need one.

I don’t need to assume how they work, I understand how LLMs function from the ground up and, as I said before, have used and rejected them as a waste of time re: coding. They automate the easy stuff, doing a worse job than repeatable codegen tools, and I still need to do the harder things while checking their work.

So, in other words, you haven't actually used these tools for any period of time, nor have you used them recently. You are substituting your obvious biases for actual experience here.

It is like starting a fresh project with complete newbie junior devs.

It's absolutely nothing like that, but it's pretty clear that you've already made up your mind and I'm not going to try to convince you otherwise here. All I can tell you is that my experience having used these tools extensively for the past half a year or so is completely at odds with the way you think they work.

Chana@hexbear.net 1 points 10 hours ago

Alright, which specific points do you want me to address here that you feel I haven't?

I think almost everything in my comment a few steps back before you suggested hitting pause would count. I'm not saying you need to reply to it, just pointing to an example. If I had to make them specific I would end up restating most of that comment, which seems unnecessary.

It doesn't matter one bit in the context of what I'm saying. My point, once again, is that if I can pass the context to the component when I call it, then I can reason about it in isolation. If I make a React widget and I test that it's doing what I need it to, then I can use it anywhere I want from that point on by passing it the data it will render.

It matters in that you're saying things that don't make sense re: pure functions. Those components are not pure at all and in fact are not functions in reality. To be clear, you're describing your logic in these terms. I'm not being pedantic, I am just trying to deal with responses that don't make sense.

Regarding your clarification, I am still somewhat confused. For the most part, in React, you are not calling components. This is done for you by the App class deep behind the scenes. When using (as opposed to defining) your components you are structuring them as if they were markup with dynamic arguments, all of which gets translated into something of a different nature when it runs in a browser.

I believe you are trying to describe presentational components? Those are the "pure-ish" equivalent for React components. Same input, same output. Any React component using useEffect (and some other hooks) will not have the properties you describe - such as one fetching JSON from a web API. Hence what I said about mocking for testing.
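
Something along these lines is what I'd call presentational (names are purely illustrative):

```tsx
// "Pure-ish": output is determined entirely by props - no hooks with
// effects, no fetching. Rendering it twice with the same props gives the
// same markup, which is why it is trivial to test without mocks.
type Props = { title: string; items: string[] };

function ItemList({ title, items }: Props) {
  return (
    <section>
      <h2>{title}</h2>
      <ul>
        {items.map((i) => (
          <li key={i}>{i}</li>
        ))}
      </ul>
    </section>
  );
}
```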

The reason I'm making this point, as I've repeatedly explained already, is that I can have the LLM make a component, test that it does what I need, and use it. If I need something new, I can make a new component. The part that matters to me is that it does the correct thing with the data provided.

And it will have every downside I have described. This does not contradict my criticisms. My criticisms have already included characterizing your stated approach in these terms - and then noted their problems.

Modularity and side effects are absolutely not orthogonal.

Yes they are. You can have non-modular code with no side effects, modular code with no side effects, non-modular code with side effects, and modular code with side effects. You can introduce side effects at any level of an application. A codebase that is not modular is likelier to colocate more state and have harder to reason about side effects (if present), but their presence is orthogonal.

If you have side effects, as in functions that take references to shared state as input and manipulate them, then you can't have modularity because you can't reason about individual pieces of code in isolation.

Side effects are anything outside the immediate scope of something being called that can change its state. This applies to any kind of thing that can be run/called and any way of having state change due to external factors. The state doesn't need to be shared with anything inside your application. Like the JSON being provided to your components.

Modularity is a different thing. It is about breaking down your code into reusable pieces, often isolating their state. I can see your reasoning, which is that good modularity means minimizing shared state, e.g. a global variable or shared reference, but this is not the only way to have side effects, and you can have a very modular application that nevertheless has many and a non-modular application with none.
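
To make the combinations concrete, a contrived sketch:

```ts
// Modular and side-effect free: reusable, pure.
export function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Modular but with a side effect: still a nicely isolated, reusable unit,
// yet calling it touches state outside itself (browser storage).
export function rememberTheme(theme: string): void {
  localStorage.setItem("theme", theme);
}

// The inverse cases also exist: one giant non-modular script that is a
// pure calculation, and one giant non-modular script full of side effects.
```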

You have to consider all the code that has references to the state you're modifying. If we're talking about side effects such as rendering data to screen or doing IO, these aren't problematic in the same way. Even in Js land you can mitigate the problem easily by using one of many immutable data structure libraries.

They are problematic in the same way; they're just usually outside your control or necessary for needed behaviors (e.g. getting screen size). As they contain important state information, you should think of them as part of the full application context. And this is why you have to do a lot of mocking when testing front-end JS code.

Immutable data structures are a way to limit side effects, but they don't impact modularity directly; they just make shared mutable state easier to avoid because your application won't work as expected if you try to use them that way.

I never said that leaving components to LLMs obviates the need to do maintenance work.

And I didn't say you did. What did I say about maintainability of code generated by LLMs?

However, my experience is that it's very cheap now to just make a new component when you need one.

I also critiqued that.

So, in other words, you haven't actually used these tools for any period of time

This is literally the opposite of what I've said about this twice now.

nor have you used them recently.

I haven't said anything about that, actually. You may want to take a step back from guessing and conflating this with knowledge.

You are substituting your obvious biases for actual experience here.

My biases are based on actual experience as well as knowledge of the limitations of LLMs and the domain of software architecture. Please do your best to engage with what I'm saying instead of fishing for reasons to be personally dismissive, comrade.

It's absolutely nothing like that, but it's pretty clear that you've already made up your mind and I'm not going to try to convince you otherwise here.

Oh? So do you not review the generated code? Look for bugs? Add or change documentation? Question the levels of abstraction? Write tests to ensure specific behaviors not covered? Go through multiple rounds of this until a more correct and maintainable version exists? If not then this should not be a production system nor a project any other person ever needs to work on.

Please do your best to specifically address what I'm saying instead of reasons to be personally dismissive.

All I can tell you is that my experience having used these tools extensively for the past half a year or so is completely at odds with the way you think they work.

How so? So far I don't believe you've named a single error in my thinking regarding how these LLMs work.

yogthos@lemmygrad.ml 1 points 9 hours ago

I think almost everything in my comment a few steps back before you suggested hitting pause would count. I’m not saying you need to reply to it, just pointing to an example. If I had to make them specific I would end up restating most of that comment, which seems unnecessary.

I really don't see what new points your comment adds that we haven't already discussed. I've repeatedly pointed out that what I find LLMs to be effective at is doing surface level things, or building out isolated components that can be reasoned about and tested in isolation. I really don't see how the points you raise apply here.

It matters in that you’re saying things that don’t make sense re: pure functions. Those components are not pure at all and in fact are not functions in reality. To be clear, you’re describing your logic in these terms. I’m not being pedantic, I am just trying to deal with responses that don’t make sense.

I think I was pretty clear what I meant. I was talking about coupling via shared mutable state. This is the aspect that I find makes code difficult to reason about.

Regarding your clarification, I am still somewhat confused. For the most part, in React, you are not calling components. This is done for you by the App class deep behind the scenes. When using (as opposed to defining) your components you are structuring them as if they were markup with dynamic arguments, all of which gets translated into something of a different nature when it runs in a browser.

You're talking about implementation details here. I'm talking about the semantics of how you use and reason about these components. Once again, what you really care about is scope. What data is passed to a component and what the ownership of this data is.

I believe you are trying to describe presentational components? Those are the “pure-ish” equivalent for React components. Same input, same output. Any React component using useEffect (and some other hooks) will not have the properties you describe - such as one fetching JSON from a web API. Hence what I said about mocking for testing.

Hooks are obviously not pure, but they don't inherently prevent reasoning about your components or testing them in isolation.
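
As a sketch of what testing such a component in isolation can look like (assuming Jest and React Testing Library; the component and endpoint are hypothetical):

```tsx
import { useEffect, useState } from "react";
import { render, screen, waitFor } from "@testing-library/react";

// A hypothetical component that fetches on mount.
function TodoList() {
  const [todos, setTodos] = useState<string[]>([]);
  useEffect(() => {
    fetch("/api/todos")
      .then((r) => r.json())
      .then(setTodos);
  }, []);
  return <ul>{todos.map((t) => <li key={t}>{t}</li>)}</ul>;
}

test("renders todos returned by the API", async () => {
  // Swap the real network call for a canned response.
  global.fetch = jest.fn().mockResolvedValue({
    json: async () => ["buy milk", "write tests"],
  }) as unknown as typeof fetch;

  render(<TodoList />);

  // getByText throws if the text never appears, so it doubles as the assertion.
  await waitFor(() => {
    expect(screen.getByText("buy milk")).toBeTruthy();
  });
});
```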

And it will have every downside I have described. This does not contradict my criticisms. My criticisms have already included characterizing your stated approach in these terms - and then noted their problems.

Frankly, I'm not even sure what your criticisms are specifically. You've made some assertions about dependency injection, memoization, and so on, and claimed that this is somehow problematic when you have LLMs generate code. I've personally not run into these issues, so this argument isn't really making a lot of sense to me.

A codebase that is not modular is likelier to colocate more state and have harder to reason about side effects (if present), but their presence is orthogonal.

The type of side effects we're talking about matters. The problematic kind are the ones that result in coupling via implicit shared mutable state as a result of passing and modifying references to shared data.

I can see your reasoning, which is that good modularity means minimizing shared state, e.g. a global variable or shared reference, but this is not the only way to have side effects, and you can have a very modular application that nevertheless has many and a non-modular application with none.

What I'm saying is that avoiding shared mutable state is a prerequisite for modularity. You certainly can create coupling in other ways, but if you have shared state then you're inherently coupled from the start.

This is literally the opposite of what I’ve said about this twice now.

What you said was

have used and rejected them as a waste of time re: coding. They automate the easy stuff, doing a worse job than repeatable codegen tools, and I still need to do the harder things while checking their work.

You didn't say which tools you used, you didn't say when you used them, or how much time you've actually invested in learning and becoming productive with them. Did you use qwen-code, Continue, Roo-Code, etc.? Which models were you using with them? How did you decide which parts of the project to apply them to, how did you limit scope, and what structure did you provide them with up front?

Oh? So do you not review the generated code? Look for bugs?

Of course I do, and I don't find the process is anything like working with a junior dev, which I have done extensively I might add. It's a process closer to using a very clever auto complete in my experience.

Incidentally, writing tests is an excellent use case for LLMs because tests tend to consist of isolated functions that test a specific thing. It's easy to see if the test is doing what's intended, and an LLM can crap out a lot of test boilerplate quickly to cover a lot of edge cases that are tedious to do by hand.
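
The kind of boilerplate I mean, sketched with Jest's table syntax (the function under test is just a stand-in):

```ts
// A tiny utility as a stand-in for whatever the LLM is writing tests for.
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

// Each row is an isolated, easy-to-eyeball case; an LLM can churn out
// dozens of these far faster than typing them by hand.
test.each([
  [5, 0, 10, 5],
  [-1, 0, 10, 0],
  [11, 0, 10, 10],
  [0, 0, 0, 0],
  [7.5, 0, 10, 7.5],
])("clamp(%p, %p, %p) === %p", (value, min, max, expected) => {
  expect(clamp(value, min, max)).toBe(expected);
});
```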

How so? So far I don’t believe you’ve named a single error in my thinking regarding how these LLMs work.

What I'm telling you is that I use LLMs on a daily basis to implement functions, UI components, API endpoints, HTTP client calls, and so on. I'm not experiencing the problems which you insist I should be experiencing in terms of code maintainability, testing, or documentation.

It took me a few months to develop intuition for where LLMs are likely to produce code that's useful, and where they're likely to fail. It also took me a bit of time to figure out how to limit scope and provide enough scaffolding to ensure that I get good results. Having invested the time to learn to use the tool effectively, I very much see the benefits, as I'm able to work productively with JS and ship stuff despite being very new to the ecosystem.

I also find LLMs are great for digging through existing codebases, and finding parts that you want to change when you add features. This has always been a huge pain when starting on large projects, and these things drastically reduce ramp up time for me.

You're telling me that your experience is wildly different from mine and you don't find these tools save any time, which is why my impression is that you might not have spent the time to actually learn to use them effectively. If you come in with a mindset that the tool is not useful, you fiddle around with it, get bad results that confirm your existing bias, and move on.

Chana@hexbear.net 1 points 7 hours ago

I really don't see what new points your comment adds that we haven't already discussed.

I think you should read it again and rethink this response.

I've repeatedly pointed out that what I find LLMs to be effective at is doing surface level things, or building out isolated components that can be reasoned about and tested in isolation. I really don't see how the points you raise apply here.

You've said quite a few things, much of which makes no technical sense. Mostly in response to my critiques of statements like this, such as reminding you that you still need to do the harder parts of maintenance, design, reviewing those modules, etc. I have repeatedly noted security as a topic where care must be taken, it must be hands-on, and you should not rely on black box thinking for anything important. I noted that common design patterns regarding state will inevitably mean these components will not be things you can treat like black boxes; you will need to maintain them. I noted that others needing to read your code will need semantically named content to more easily understand it, and that the writing portion of simple components is much less important (and time-sucking) than making a coherent and intentional design. I critiqued the idea of these LLMs producing idiomatic code, a claim you introduced, and you got confused about the topic, treating my challenge about newer idioms as a suggestion to follow fads, as if it were silly for these LLMs to... produce idiomatic code. I explicitly noted inconsistencies like this.

You largely ignored these responses or seemingly misunderstood them, responding in ways that made no technical sense. I attempted to clarify, giving you many opportunities to recognize where we agree or reframe your responses.

Then you went meta and suggested this discussion was pointless. Now we are here, with you insisting you've responded to all of my germane points (you absolutely have not) and then repeating your original position for no reason.

If you'd like we can revisit all of them. Every point misunderstood, every technical error, every point ignored.

I think I was pretty clear what I meant. I was talking about coupling via shared mutable state. This is the aspect that I find makes code difficult to reason about.

No, you were not clear in what you meant, you were actually incorrect in your statements re: purity regarding functional programming. It is difficult to be less clear than stating something clearly technically false on a technical matter. Your claim simply does not apply to your described codebase. I have offered a few interpretations of what you might be meaning to say. Are any of my interpretations correct?

You're talking about implementation details here.

If the nature of calls and design in a React application are just implementation details then why are you talking about them so much? OOP vs FP? Purity of functions? Definitions vs. instantiations? As a reminder this all stems from me simply pointing out that LLMs will produce fairly boilerplate outputs that won't account for important design questions - like dependency injection. And you can't just ignore and separate this if you actually use those design decisions.

I'm talking about the semantics of how you use and reason about these components.

You're saying semantically incorrect things. I am attempting to present correction in a way that can be received, as you seem to not know what these terms mean or how React works, otherwise you wouldn't make such frequent mistakes. These mistakes should not be a big deal but the response of trying to justify or ignore them is not easy to work with.

Once again, what you really care about is scope. What data is passed to a component and what the ownership of this data is.

JS has no concept of data ownership or when variables go in and out of scope (except for the GC). I have no idea what you're talking about. These are not correct or meaningful statements, though I'm sure they mean something to you.

Hooks are obviously not pure, but they don't inherently prevent reasoning about your components or testing them in isolation.

They prevent your components from being pure... You are objectively wrong in how you described this strategy. Was my attempt to fix it and redescribe what you might be talking about correct? I have no idea, you presented only something in the form of counterargument that didn't directly address what I said.

Frankly, I'm not even sure what your criticisms are specifically.

This is absurd.

You've made some assertions about dependency injection, memoization, and so on, and claimed that this is somehow problematic when you have LLMs generate code.

Yes, what did I say about those things? What was my meaning? I was fairly explicit when I introduced those terms. That you have forgotten or did not try to understand in the first place is not a counterpoint.

I've personally not run into these issues, so this argument isn't really making a lot of sense to me.

If you will recall, I brought up dependency injection as something that LLMs will not produce by default, as it is a design question (aside from existing patterns from boilerplate). This was just an example of the kind of design question that often touches fairly deeply on the application and how it has been modularized and generalized. There are many abstractions like this that would not jibe with black box codegen.

If you will recall, I brought up memoization with regard to you saying the LLMs produce idiomatic code. Choosing when to memoize and which variables to memoize is a reasoning problem, and not one that LLMs do a very good job of, as they primarily reproduce patterns. I'm sure they do introduce memoization sometimes, but you'll still need to think about this yourself.
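
For instance (a contrived component, but the shape of the decision is the point):

```tsx
import { useMemo } from "react";

type Props = { rows: number[]; highlight: number };

function Stats({ rows, highlight }: Props) {
  // Whether this is worth memoizing depends on how big `rows` gets, how
  // often it changes identity, and how often `highlight` changes on its
  // own - a judgment about the surrounding app, not a pattern to copy blindly.
  const sorted = useMemo(() => [...rows].sort((a, b) => a - b), [rows]);

  return <div>{sorted.filter((n) => n > highlight).length} above threshold</div>;
}
```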

Notice that these are two fairly separate critiques. I presented them separately and with distinct rationales.

You not running into them, to me, just suggests that you don't use these kinds of things at all because your projects are very simple and not much design goes into them. They may not need more design than they have, so I am not criticizing that. But "I haven't run into these problems" isn't meaningful without acknowledging what those problems are, how they might cause issues, and asking whether they would have presented themselves in any of your projects. And both are just examples of the type of challenge they represent for codegen.

The type of side effects we're talking about matters. The problematic kind are the ones that result in coupling via implicit shared mutable state as a result of passing and modifying references to shared data.

As I explained, they are all problematic; some are just often outside our control. If your API calls are to data with internal consistency or references, for example, you may run into conflicting information about the same server-side state because separate components are responsible for fetching that state independently - not in sync. Library developers for fetching web data often use complicated caching strategies for this reason. In the case of web data you might not even have control of when that state mutates!
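
A sketch of that failure mode (endpoint invented):

```tsx
import { useEffect, useState } from "react";

// Two widgets each fetch the same server-side resource on their own
// schedule. If the data changes between requests, the header and the
// sidebar can happily render two different "truths" at once - the problem
// that caching layers in data-fetching libraries exist to paper over.
function useUnreadCount() {
  const [count, setCount] = useState<number | null>(null);
  useEffect(() => {
    fetch("/api/notifications/unread")
      .then((r) => r.json())
      .then((data) => setCount(data.count));
  }, []);
  return count;
}

function HeaderBadge() {
  return <span>{useUnreadCount() ?? "…"}</span>;
}

function SidebarBadge() {
  return <span>{useUnreadCount() ?? "…"}</span>;
}
```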

What I'm saying is that avoiding shared mutable state is a prerequisite for modularity. You certainly can create coupling in other ways, but if you have shared state then you're inherently coupled from the start.

And you'd be wrong about that. Every React app that drills down into components with callbacks or uses contexts will use and potentially modify shared state when instantiated but can be developed as a modular component. They are essentially using dependency injection, a lucky coincidence.
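
As a sketch (a bare-bones context; names invented):

```tsx
import { createContext, useContext, useState } from "react";

// Shared state lives at the top; children receive it (and a callback that
// mutates it) via context. Each child is still a perfectly modular,
// reusable component - yet once instantiated, they all touch the same state.
const ThemeContext = createContext<{ theme: string; setTheme: (t: string) => void }>({
  theme: "light",
  setTheme: () => {},
});

function ThemeToggle() {
  const { theme, setTheme } = useContext(ThemeContext);
  return (
    <button onClick={() => setTheme(theme === "light" ? "dark" : "light")}>
      Switch to {theme === "light" ? "dark" : "light"}
    </button>
  );
}

function App() {
  const [theme, setTheme] = useState("light");
  return (
    <ThemeContext.Provider value={{ theme, setTheme }}>
      <ThemeToggle />
    </ThemeContext.Provider>
  );
}
```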

The programmatic nature of these components as having side effects is the same as for components fetching state from the web. The difference is in what you have control over.

You didn't say which tools you used, you didn't say when you used them, or how much time you've actually invested in learning and becoming productive with them. Did you use qwen-code, Continue, Roo-Code, etc.? Which models were you using with them? How did you decide which parts of the project to apply them to, how did you limit scope, and what structure did you provide them with up front?

You didn't ask for any of that and for infosec reasons I wouldn't describe my dev setup anyways. You seem to be inventing reasons to be dismissive ad hoc, as if I should have read your mind before you deigned to listen to my clearly uneducated opinions, right?

Of course I do, and I don't find the process is anything like working with a junior dev, which I have done extensively I might add. It's a process closer to using a very clever auto complete in my experience.

Either you're doing a full review of the code or you aren't. If you are, it is like reading anyone else's code, and with the propensity for being anywhere from a bit to very wrong for one's stated needs and for matching documentation/Stack Overflow a little too closely, quite similar to a junior dev. If you aren't, this isn't a serious production system and you're black boxing it.

Incidentally, writing tests is an excellent use case for LLMs because tests tend to consist of isolated functions that test a specific thing. It's easy to see if the test is doing what's intended, and an LLM can crap out a lot of test boilerplate quickly to cover a lot of edge cases that are tedious to do by hand.

Tests are maybe the worst application of LLMs. Tests are where you, the designer, get to specify how your application is supposed to behave. If your tests are that tedious, I suspect your tooling is wrong or your architecture is wrong.

What I'm telling you is that I use LLMs on a daily basis to implement functions, UI components, API endpoints, HTTP client calls, and so on. (...)

Not a single one of these 4 paragraphs points out an error in my thinking re: LLMs.
