I find they have practical uses once you spend the time to figure out what they can do well. For example, for coding, they can do a pretty good job of making a UI from a JSON payload, crafting SQL queries, making endpoints, and so on. For any fairly common task that involves boilerplate code, you'll likely get something decent to work with. I also find that sketching out the structure of the code you want by writing the signatures for the functions and then having the LLM fill them in works pretty reliably. Where things go off the rails is when you give them too broad a task, or ask them to do something domain specific. And as a rule, if they don't get the task done in one shot, then there's very little chance they can fix the problem by iterating.
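As a rough sketch of what that scaffolding approach looks like (the function names and shapes here are invented for illustration, not from any real project):

```js
// Hypothetical sketch: stub out the signatures and docs yourself,
// then let the LLM fill in the bodies.

/** Fetch invoices for a customer from the API. */
async function fetchInvoices(customerId) {
  // TODO: LLM fills this in (fetch call, error handling, JSON parsing)
}

/** Group invoices by month, returning e.g. { "2025-08": [...] } */
function groupInvoicesByMonth(invoices) {
  // TODO: LLM fills this in
}

/** Render a simple summary table from the grouped invoices. */
function renderInvoiceSummary(grouped) {
  // TODO: LLM fills this in
}
```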
They're also great for working with languages you're not terribly familiar with. For example, I had to work on a Js project using React, and I haven't touched either in years. I know exactly what I want to do, and how I want the code structured, but I don't know the nitty gritty of the language. LLMs are a perfect bridge here because they'll give you idiomatic code without you having to constantly look stuff up.
Overall, they can definitely save you time, but they're not a replacement for a human developer, and the time saving is mostly a quality of life improvement for the developer as opposed to some transformational benefit in how you work. And here's the rub in terms of a business model. Having what's effectively a really fancy autocomplete isn't really the transformative technology companies like OpenAI were promising.
With React I would be surprised if it was really idiomatic. The idioms change every couple of years, and React has its own state management quirks.
It uses hooks and functional components which are the way most people are doing it from what I know. I also find the code DeepSeek and Qwen produce is generally pretty clear and to the point. At the end of the day what really matters is that you have clean code that you're going to be able to maintain.
I also find that you can treat components as black boxes. As long as it's behaving the way that's intended it doesn't really matter how it's implemented internally. And now with LLMs it matters even less because the cost of creating a new component from scratch is pretty low.
I hadn't heard of Qwen. I have only used DeepSeek, and not much. What are Qwen's advantages over DeepSeek? And is there any other model from BRICS countries I should look for? Preferably open source.
And do you recommend a local solution? For which use-case? I have a mid-range gamer laptop. IIRC it has 6GiB VRAM (NVIDIA).
I've found Qwen is overall similar; their smaller model that you can run locally tends to produce somewhat better output in my experience. Another recent open source model that's good at coding is GLM: https://z.ai/blog/glm-4.5
6GB of VRAM is unfortunately somewhat low; you can run smaller models, but the quality of the output is not amazing.
Does it memoize with the right selection of stateful variables by default? I can't imagine it does without a very specific prompt or unless it is very simple boilerplate TODO app stuff. How about nested state using contexts? I'm sure it can do this but will it know how best to do so and use it by default?
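To be concrete about what I mean by memoizing with the right dependencies and nesting state in a context, here's a hand-written sketch (names and shapes invented, not LLM output):

```jsx
import { createContext, useContext, useMemo, useState } from "react";

const FilterContext = createContext(null);

function ProductList({ products }) {
  const { query } = useContext(FilterContext);

  // Memoize the derived list so it's only recomputed when its actual
  // inputs change, not on every render.
  const visible = useMemo(
    () => products.filter((p) => p.name.includes(query)),
    [products, query]
  );

  return (
    <ul>
      {visible.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}

function App({ products }) {
  const [query, setQuery] = useState("");
  // Memoize the context value so consumers don't re-render needlessly.
  const ctx = useMemo(() => ({ query, setQuery }), [query]);

  return (
    <FilterContext.Provider value={ctx}>
      <ProductList products={products} />
    </FilterContext.Provider>
  );
}
```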
In my experience, LLMs produce a less repeatable and correct version of what codegen tools do, more or less. You get a lot of repetition and inappropriate abstractions.
Also just for context, hooks and functional components are about 6-7 years old.
I tend to use it to generate general boilerplate. Like say I have to talk to some endpoint and I get a JSON payload back. It can figure out how to call the endpoint, look at the payload, and then generate a component that will render the data in a sensible way. From there, I can pick it up and add whatever specific features I need. I generally find letting these things do design isn't terribly productive, so you are better off deciding on how to manage state, what to memoize, etc. on your own.
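As a sketch of the kind of boilerplate I mean (the endpoint and field names here are hypothetical), the generated starting point usually looks something like this:

```jsx
import { useEffect, useState } from "react";

// Hypothetical endpoint and payload shape, purely for illustration.
const ORDERS_URL = "/api/orders";

export function OrdersTable() {
  const [orders, setOrders] = useState([]);
  const [error, setError] = useState(null);

  useEffect(() => {
    fetch(ORDERS_URL)
      .then((res) => {
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return res.json();
      })
      .then(setOrders)
      .catch(setError);
  }, []);

  if (error) return <p>Failed to load orders.</p>;

  return (
    <table>
      <thead>
        <tr><th>ID</th><th>Customer</th><th>Total</th></tr>
      </thead>
      <tbody>
        {orders.map((o) => (
          <tr key={o.id}>
            <td>{o.id}</td>
            <td>{o.customer}</td>
            <td>{o.total}</td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```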
I also find the quality of the tools is improving very quickly. If you haven't used them in half a year or so, your experience is already dated. You get by far the biggest bang for your buck with editor-integrated tools that can run MCP, where they can run code and look at the output.
Finally, I personally don't see anything wrong with hooks/functional components even if there's already a new fad in Js land. The churn is absolutely insane to me, and I frankly don't understand how people keep up with this. You can start a project in Js, and by the time you finish it the Js world has already moved on to some new bullshit.
I used to work with ClojureScript when I needed frontend functionality before. There's a React wrapper called Reagent. It's basically a better version of hooks/functional components, and it has worked this way for over a decade. In that time, React itself went through a dozen different ways of doing things. The value gained has been rather unclear to me.
Yes, I'm sure it can do a lot of boilerplate. I'm just saying I doubt it is very idiomatic. It is essentially a souped-up regurgitation machine drawing from a large collection of open source code that spans a long period of time and varies in quality and documentation.
This can be fine for many purposes but if it is for a substantial project that other people will need to maintain I would suspect it is a technical debt generator. As the saying goes, code is read more than it is written. Writing the code is usually the easy part. Producing a maintainable design and structure is the hard part. That and naming things.
I mean all code is technical debt in the end, and given how quickly things move in Js land, it doesn't matter whether you're using LLMs or writing code by hand. By the time you finish your substantial project, it's pretty much guaranteed that it's legacy code. In fact, you'll be lucky if the libraries you used are still maintained. So, I don't really see this as a serious argument against using LLMs.
Meanwhile, as you note, what makes code maintainable isn't chasing the latest fads. There's nothing that makes code written using hooks and functional components inherently less maintainable than whatever the latest React trend happens to be.
And as I pointed out earlier, LLMs change the dynamic here somewhat because they significantly lower the time needed to produce certain types of code. As such, you don't have to be attached to the code, since you can simply generate a new version to fit new requirements.
Where having good design and structure really matters is at the high level of the project. I find the key part is structuring things in a way where you can reason about individual parts in isolation, which means avoiding coupling as much as possible.
I just explained why design and maintainability are the hard part and something LLMs don't do. LLMs encourage the bad habit of skipping these things, which junior devs do all the time, wasting a lot of resources. Just like a junior dev writing spaghetti can make a middle manager very happy because it's "delivered on time", the team will eventually pay far more in maintenance than if better practices had been used.
Writing boilerplate React components that fetch JSON from APIs is the easy stuff that takes very little time. If you throw in intermediate things (like basic security) you will likely need to spend more time reviewing its slop than just doing it yourself. And it will likely be incapable of finding reasonable domain abstractions.
If it's for a throwaway project none of this really matters, of course.
If it is a production system with any prioritization of security it will need to be regularly maintained, including with library updates. If a library becomes unmaintained then one either needs to use a different one or start maintaining it themselves.
There are different ways to make code unmaintainable. It seems like you're saying writing code in JavaScript means you always do a rewrite when it comes time to do maintenance work (it moves fast!). This is just not true and is something easily mitigated by good design practices. And in terms of any org structure, you are much less likely to get a green light on a rewrite than on maintaining the production system that "works".
I'm not sure what you mean by this. When I said hooks and functional components were 6 years old it was in the context of doubting whether LLMs are up on modern idioms. You said it wrote idiomatic code, citing 6-7 year old idioms. That's not great evidence because they are long-established over several major version releases and would be a major input to these LLMs. I mentioned a few newer ones and asked whether they were generated for your code.
React written with hooks and functional components is more maintainable than legacy implementations because it will match the official documentation and is a better semantic match to what devs want to do.
I don't get attached to code...
LLMs do the easy part and then immediately require you to do the harder parts (review and maintenance) or scrap what they generate for the hardest parts (proper design and abstractions). Being satisfied with this kind of output really just means having no maintenance plans.
It matters at all levels, right down to the nouns and adjectives used to describe variables, objects, database tables, etc. Avoiding coupling will mean knowing when to use something like dependency injection, which I guarantee LLMs will not do reliably, maybe not at all unless it is the default pattern for an existing framework. Knowing to use dependency injection will depend on things like your knowledge of what will need to be variable going forward and whether it is easier to reason about behavior using that pattern in your specific context. If you're using domain model classes, are they implementing an abstract method, or are they passed the implementation and just know how to call it? Etc etc.
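For concreteness, here's the kind of decision I'm talking about, sketched by hand (the domain names are invented):

```js
// Option A: the service is handed its collaborator (dependency injection).
class InvoiceService {
  constructor(paymentGateway) {
    // The caller decides which gateway implementation is used, so tests
    // can pass a fake and production code can pass the real one.
    this.paymentGateway = paymentGateway;
  }

  settle(invoice) {
    return this.paymentGateway.charge(invoice.customerId, invoice.total);
  }
}

// Option B: the base class defines an abstract hook that subclasses implement.
class AbstractInvoiceService {
  settle(invoice) {
    return this.charge(invoice.customerId, invoice.total);
  }

  charge() {
    throw new Error("Subclasses must implement charge()");
  }
}

// Which option fits depends on what you expect to vary going forward,
// which is exactly the kind of judgment call at issue here.
```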
Ok, but I've repeatedly stated in this very thread that design is something the developer should do. Are you even reading what I'm writing here?
I have to ask whether you actually worked with these tools seriously for any period of time, because I have and what you're claiming is directly at odds with my experience.
Not sure what this has to do with code written by LLMs. If I have a React component it has fuck all to do with me updating libraries in the project. Furthermore, LLMs are actually quite decent at doing mechanical tasks like updating code to match API changes in libraries.
No, I'm saying the exact opposite which is that you shouldn't try to chase fads.
You might have to clarify which particular fad you're on currently, because I haven't been keeping up. However, I do see hooks and functional components used commonly today.
Everybody gets attached to code, it's inevitable. If you have a bunch of code to solve a particular problem and it takes a lot of effort to rewrite it, then you're not going to throw it away easily. When your requirements start changing, it makes sense to try to adapt existing code to them rather than write code from scratch.
It really doesn't if you have good interfaces between components. You don't inspect all the code in libraries you include, you focus on the API of the library instead. The same logic applies here. If you structure your project into isolated components with clear boundaries, then your focus is on how the component behaves at the API level.
Again, I'm not suggesting using LLMs to do design. My whole point was that you do the design, and you use LLM to fill in the blanks. In this context, you've already figured out what the component will be and what scope it has, the LLM can help create features within that scope.
Also, things like dependency injection are an artifact of OO programming style which I find to be an anti pattern to begin with. With functional style, you naturally pass context as parameters, and you structure your code using pure functions that take some arguments and produce some result. You can snap these functions together like Lego pieces and you can test them individually. This meshes quite well with using LLMs to generate code and evaluate whether it's doing what you want.
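A minimal sketch of what I mean by snapping pure functions together (the names and numbers are invented):

```js
// Pure functions: all context comes in as arguments,
// results come out as return values.
const applyDiscount = (rate) => (order) => ({
  ...order,
  total: order.total * (1 - rate),
});

const addTax = (taxRate) => (order) => ({
  ...order,
  total: order.total * (1 + taxRate),
});

// Compose them like Lego pieces; each one is trivially testable on its own.
const priceOrder = (order) => addTax(0.07)(applyDiscount(0.1)(order));

console.log(priceOrder({ id: 1, total: 100 })); // total ≈ 96.3
```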
Have you written code in styles other than OO?
Yes, and I'm replying in-context:
I say that code is read more than it is written and that a boilerplate generator is going to be a technical debt machine that had no thought for maintenance.
You say that all code is technical debt and don't engage further with that point.
I reiterate that design and maintainability are what LLMs are bad at and using them for the easy stuff isn't exactly a boon.
I'll repeat again: LLMs are technical debt machines. There are ways to have more or less debt; it is not all the same regardless. LLMs will just repeat patterns they've seen before, and these may be bad designs and bad patterns that are difficult to maintain for any project used or maintained by real people. You will then, of course, need to check the result, which is the part that is less easy and takes longer, so very little has actually been optimized.
Or this is a throwaway project and all of this is moot.
Which part of what I said is at odds with what you're saying? I've used and rejected these tools as a waste of time and something to help others avoid.
It's a direct response to what I quoted where you dismiss maintainability concerns by saying all substantial projects are legacy code and you'll be lucky if libraries are maintained. Is... is that not obvious?
I have no idea what this means. React is a library and it also needs to be updated, not to mention the request hook library(ies) your boilerplate would presumably use. If you are maintaining this project and it has any security aspects you'll need to update these things over time.
If you're updating libraries to ensure security I would hope you're not just trusting the outputs from such edits. You should review them as if an incompetent junior dev submitted a PR. I'm sure it can do a 90% okay job 70% of the time.
But we haven't discussed fads at all?
So basic modern idioms for an extremely popular UI library are just fads now? I've already asked about memoization (using currently documented recommendations) and contexts. You are being inconsistent here: originally it was doing great writing idiomatic code. Once I question its ability to correctly recognize the need for, and implement, idioms from library improvements in the last 5 years, I'm just chasing fads. Seems like it's not really that good at idiomatic code, eh?
Nope. I don't. The exact opposite is more common with more experience: you get the itch to rewrite. Throw the whole thing in the trash and start over. But this is not a practical use of time 9 times out of 10.
Oh I definitely am because I can now see how to remove the complexity and make it more versatile.
Changing requirements is a design process challenge. It indicates prototyping or gross incompetence if it's happening frequently. Sometimes that incompetence is out of one's hands, like a client that refuses to communicate effectively, but it is less about the code and more about social skills and planning.
If requirements change, it can imply modifying an existing codebase or throwing it all away to start over. It really depends on what the new requirements are. If suddenly there is a need for a very flexible event-driven strategy, you probably need to start over.
It actually really does, because it enables capturing and therefore correctly addressing domain problems. Junior devs call meaningful variables "x" and "y" because they can't see this yet, then take forever during a review or maintenance cycle weeks later because they are trying to figure out the meaning and function of meaninglessly named and organized things. In React, these would be the same people putting every component in a module called "components", even for large projects.
"Good interfaces" doesn't address the points I'm raising at all. The best interfaces cannot fix the wrong abstraction being chosen. If your domain needs to represent accounts but you never represent actions taken directly (e.g. "deduct $34.56") you will end up with a confusing mess.
Until it comes time to maintain your systems and you find that your domain elements are not represented and you don't understand the variables, because you let a Markov process write your components.
There is another inconsistency here. What you're repeatedly getting at is that you want to treat the things generated by LLMs as things you can wall off and regenerate, machine-edit, etc. as needed. Implicit here is that you don't need to actually maintain that code, but actually you do if you have any seriousness about the project. Certainty in function. Security. Ever needing to refactor to add new features. And then this will fall apart, because you now have to wade through whatever the LLM generated, with all the warts I've described.
Or this is a throwaway project and none of this matters. No real design elements, no security, no user testing, etc. Which is fine, just not a great endorsement for LLMs.
Using dependency injection is a basic pattern. While it is part of design, it is also part of implementation and it changes how the rest of a codebase may function. React uses it a lot in its implementation. It is in no way separate from the task of having LLMs write components and they will produce wrong results without taking this into account. And this is just one example.
Of course, people can make all of these mistakes. But if they are following an intentional design, they will make fewer of them and therefore need less review.
These things are not separable a la a black box. The problems I have noted remain.
I'm sure it can with 90% efficacy for 70% of simple features. And then you need to rename half of the things. And write a bunch of tests. And do a security audit. Accessibility audit?
Anyways, the point was that LLMs don't actually understand these things, just patterns, so they fall on their face when it comes to choosing something like dependency injection. There are many things like dependency injection, and it won't recognize when to use them. You say this is left up to design, but it will be interwoven with implementation, even of components - the thing you're having the LLM do.
This is incorrect. Dependency injection is a ubiquitous pattern. It's even used in functional programming. And you'd better get stoked about OOP if you are using JS because just about everything is an object. Including the functions you're calling to generate components. And if you dive into the libraries you're using they will be chock full of semi-complex classes.
Can you give me an example of when you have passed a function as an argument to a function? Think about what that pattern entails, how the function is used, how and when it is called, and the name of the method (if applicable) doing the calling.
Modularity and testability are basically just as easy and powerful between OOP and FP. Though I do want to emphasize that you are actually using a lot of classes and objects. Every time you call a method, for example. "await fetchedData.resolve()" etc etc.
lmao
I feel like we're just going in circles here. I really don't have anything new to add over what I've already said above.
Literally any higher order function lmfao.
Cool then you have done a form of dependency injection with FP.
I don't think we are talking in circles, I feel very on top of this conversation. But we don't have to continue it.
I think we're going in circles because you've simply reiterated the points I've already addressed, and I don't see the point of me restating them so that you can restate yours again. But we can keep going if you feel this is productive.
I've done the equivalent of dependency injection with FP, which consists of passing a function to another function. The point I was making there is that you can keep the vast majority of your code pure with FP. This means you can test functions individually and you only have to consider their signature going forward. Any context the function needs is passed in explicitly. In this way you're able to push state to the edges of the application. This is far more difficult to do with OO because each object is effectively a state machine, and your application is a graph of these interdependent state machines.
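A tiny sketch of what pushing state to the edges looks like in practice (the names and endpoint are invented):

```js
// Pure core: no I/O, no shared state; everything it needs is passed in.
function summarizeOrders(orders, asOf) {
  const total = orders.reduce((sum, o) => sum + o.total, 0);
  return { asOf: asOf.toISOString(), count: orders.length, total };
}

// Impure edge: the only place that talks to the outside world.
async function printOrderSummary() {
  const res = await fetch("/api/orders"); // hypothetical endpoint
  const orders = await res.json();
  console.log(summarizeOrders(orders, new Date()));
}

// The core is trivial to test: feed it data, check the return value.
// The edge is the only part that ever needs mocking.
```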
The reason I brought up FP in the context of LLMs is due to the fact that this model fits very well with creating small components that can be reasoned about in isolation and composed to build bigger things. This way the developer can focus on the high level logic and composition, while letting the LLM fill in the details.
I asked you earlier whether you've recently used these tools seriously for any period of time; if you have not, then I don't see much point in having an argument regarding your assumptions about how these tools work.
Have my points been addressed? I don't think so, on average. This is why I explained and clarified: to allow you to respond in a different way, as some of the responses miss the mark by a mile. I assumed some of that was just a communication issue.
So far I don't think it's been particularly productive, but I always give the benefit of the doubt.
Technically dependency injection just means providing control of internal behaviors to the caller, so a generous interpretation could even include non-callable parameters or other things that aren't directly functions. I just used this example because I think it is the most intuitive.
To be clear, every single one of the React components you let an LLM write for you is actually a class for which an instance is later generated, complete with opportunities for side effects. The JSX/TSX is translated into this by your build system. Reducing or removing side effects does indeed have many advantages, and this is easier to do with FP (though purity in a browser context is often thwarted), but I'm not sure why you are making this point. Is this in response to something I said?
These things are not actually related. You can have tons of side effects even when you make separate components; it all depends on how they are implemented. Modularity and side effects are orthogonal things.
For example, you mention that the React components fetch and display JSON. This behavior is actually a stateful side effect. To tie back to semantic naming, the React hook libraries that people usually use for this task all use the `useEffect` hook under the hood - "Effect" as in side effect. This is why if you test those components you need to deeply mock those side effects, e.g. using a testing-friendly library for web API tasks.
Leaving components up to LLMs does not obviate the need to work around the issues I've previously mentioned, including maintenance work.
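To make the mocking point concrete, a test for a fetch-in-`useEffect` component has to replace the network layer before rendering. A rough sketch, assuming Jest, React Testing Library, and jest-dom are set up, with a hypothetical `OrdersTable` component:

```jsx
// Assumes Jest + @testing-library/react + @testing-library/jest-dom.
import { render, screen } from "@testing-library/react";
import { OrdersTable } from "./OrdersTable"; // hypothetical component

test("renders fetched orders", async () => {
  // The data fetching is a side effect, so the test stubs out fetch
  // before the component ever runs.
  global.fetch = jest.fn().mockResolvedValue({
    ok: true,
    json: async () => [{ id: 1, customer: "Acme", total: 42 }],
  });

  render(<OrdersTable />);

  // findBy* waits for the effect to resolve and the re-render to happen.
  expect(await screen.findByText("Acme")).toBeInTheDocument();
});
```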
I don't need to assume how they work, I understand how LLMs function from the ground up and, as I said before, have used and rejected them as a waste of time re: coding. They automate the easy stuff, doing a worse job than repeatable codegen tools, and I still need to do the harder things while checking their work.
It is like starting a fresh project with complete newbie junior devs. Until they are trained and understand the domain, they will be a drain on the project. It takes months to years for them to be more productive than not. This is fine, it's how training works in general, but shackling myself to this on purpose without any downstream training benefit is a waste of my time.
Alright, which specific points do you want me to address here that you feel I haven't?
It doesn't matter one bit in the context of what I'm saying. My point, once again, is that if I can pass the context to the component when I call it, then I can reason about it in isolation. If I make a React widget and I test that it's doing what I need it to, then I can use it anywhere I want from that point on by passing it the data it will render.
The reason I'm making this point, as I've repeatedly explained already, is that I can have the LLM make a component, test that it does what I need, and use it. If I need something new, I can make a new component. The part that matters to me is that it does the correct thing with the data provided.
Modularity and side effects are absolutely not orthogonal. If you have side effects, as in functions that take references to shared state as input and manipulate them, then you can't have modularity because you can't reason about individual pieces of code in isolation. You have to consider all the code that has references to the state you're modifying. If we're talking about side effects such as rendering data to screen or doing IO, these aren't problematic in the same way. Even in Js land you can mitigate the problem easily by using one of many immutable data structure libraries.
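A small illustration of the kind of coupling I mean versus the isolated style (contrived example):

```js
// Coupled: both callers reach into the same shared, mutable object.
const cart = { items: [], total: 0 };

function addItemShared(item) {
  cart.items.push(item);
  cart.total += item.price; // anyone holding a reference to `cart` is affected
}

// Isolated: state comes in as an argument and a new value comes out,
// so the function can be reasoned about (and tested) on its own.
function addItem(cart, item) {
  return {
    items: [...cart.items, item],
    total: cart.total + item.price,
  };
}
```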
I never said that leaving components to LLM obviates the need to do maintenance work. However, my experience is that it's very cheap now to just make a new component when you need one.
So, in other words, you haven't actually used these tools for any period of time, nor have you used them recently. You are substituting your obvious biases for actual experience here.
It's absolutely nothing like that, but it's pretty clear that you've already made up your mind and I'm not going to try to convince you otherwise here. All I can tell you is that my experience having used these tools extensively for the past half a year or so is completely at odds with the way you think they work.
I think almost everything in my comment a few steps back before you suggested hitting pause would count. I'm not saying you need to reply to it, just pointing to an example. If I had to make them specific I would end up restating most of that comment, which seems unnecessary.
It matters in that you're saying things that don't make sense re: pure functions. Those components are not pure at all and in fact are not functions in reality. To be clear, you're describing your logic in these terms. I'm not being pedantic, I am just trying to deal with responses that don't make sense.
Regarding your clarification, I am still somewhat confused. For the most part, in React, you are not calling components. This is done for you by the `App` class deep behind the scenes. When using (as opposed to defining) your components you are structuring them as if they were markup with dynamic arguments, all of which gets translated into something of a different nature when it runs in a browser.
I believe you are trying to describe presentational components? Those are the "pure-ish" equivalent for React components. Same input, same output. Any React component using `useEffect` (and some other hooks) will not have the properties you describe - such as one fetching JSON from a web API. Hence what I said about mocking for testing.
And it will have every downside I have described. This does not contradict my criticisms. My criticisms have already included characterizing your stated approach in these terms - and then noted their problems.
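For contrast, this is roughly the difference between a presentational ("pure-ish") component and the effectful kind under discussion (sketch only, invented names):

```jsx
import { useEffect, useState } from "react";

// Presentational: same props in, same markup out, nothing to mock.
function OrderRow({ order }) {
  return (
    <tr>
      <td>{order.id}</td>
      <td>{order.customer}</td>
      <td>{order.total}</td>
    </tr>
  );
}

// Effectful: fetching inside useEffect makes the output depend on the
// outside world, which is exactly why tests end up mocking the network.
function LatestOrder({ url }) {
  const [order, setOrder] = useState(null);

  useEffect(() => {
    fetch(url)
      .then((res) => res.json())
      .then(setOrder);
  }, [url]);

  if (!order) return <p>Loading…</p>;
  return (
    <table>
      <tbody>
        <OrderRow order={order} />
      </tbody>
    </table>
  );
}
```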
Yes, they are. You can have non-modular code with no side effects, modular code with no side effects, non-modular code with side effects, and modular code with side effects. You can introduce side effects at any level of an application. A codebase that is not modular is likelier to colocate more state and have harder-to-reason-about side effects (if present), but their presence is orthogonal.
Side effects are anything outside the immediate scope of something being called that can change its state. This applies to any kind of thing that can be run/called and any way of having state change due to external factors. The state doesn't need to be shared with anything inside your application. Like the JSON being provided to your components.
Modularity is a different thing. It is about breaking down your code into reusable pieces, often isolating their state. I can see your reasoning, which is that good modularity means minimizing shared state, e.g. a global variable or shared reference, but this is not the only way to have side effects, and you can have a very modular application that nevertheless has many, and a non-modular application with none.
They are problematic in the same way; they're just usually outside your control or are necessary for needed behaviors (e.g. getting the screen size). As they contain important state information, you should think of them as the full application context. And this is why you have to do a lot of mocking when testing front end JS code.
Immutable data structures are a way to limit side effects but they don't impact modularity directly, they just make it easier to avoid it because your application won't work as expected if you try to use them as shared mutable state.
And I didn't say you did. What did I say about maintainability of code generated by LLMs?
I also critiqued that.
This is literally the opposite of what I've said about this twice now.
I haven't said anything about that, actually. You may want to take a step back from guessing and conflating this with knowledge.
My biases are based on actual experience as well as knowledge of the limitations of LLMs and the domain of software architecture. Please do your best to engage with what I'm saying instead of fishing for reasons to be personally dismissive, comrade.
Oh? So do you not review the generated code? Look for bugs? Add or change documentation? Question the levels of abstraction? Write tests to ensure specific behaviors not covered? Go through multiple rounds of this until a more correct and maintainable version exists? If not then this should not be a production system nor a project any other person ever needs to work on.
Please do your best to specifically address what I'm saying instead of looking for reasons to be personally dismissive.
How so? So far I don't believe you've named a single error in my thinking regarding how these LLMs work.
I really don't see what new points your comment adds that we haven't already discussed. I've repeatedly pointed out that what I find LLMs to be effective at is doing surface level things, or building out isolated components that can be reasoned about and tested in isolation. I really don't see how the points you raise apply here.
I think I was pretty clear what I meant. I was talking about coupling via shared mutable state. This is the aspect that I find makes code difficult to reason about.
You're talking about implementation details here. I'm talking about the semantics of how you use and reason about these components. Once again, what you really care about is scope. What data is passed to a component and what the ownership of this data is.
Hooks are obviously not pure, but they don't inherently prevent reasoning about your components or testing them in isolation.
Frankly, I'm not even sure what your criticisms are specifically. You've made some assertions about dependency injection, memoization, and so on, and claimed that this is somehow problematic when you have LLMs generate code. I've personally not run into these issues, so this argument isn't really making a lot of sense to me.
The type of side effects we're talking about matters. The problematic kind are the ones that result in coupling via implicit shared mutable state as a result of passing and modifying references to shared data.
What I'm saying is that avoiding shared mutable state is a prerequisite for modularity. You certainly can create coupling in other ways, but if you have shared state then you're inherently coupled from the start.
What you said was
You didn't say which tools you used, when you used them, or how much time you've actually invested in learning and becoming productive with them. Did you use qwen-code, Continue, Roo-Code, etc.? Which models were you using with them? How did you decide which parts of the project to apply them to, how did you limit scope, and what structure did you write up front to provide them with?
Of course I do, and I don't find the process is anything like working with a junior dev, which I have done extensively I might add. It's a process closer to using a very clever auto complete in my experience.
Incidentally, writing tests is an excellent use case for LLMs because tests tend to consist of isolated functions that each test a specific thing. It's easy to see if the test is doing what's intended, and an LLM can crap out a lot of test boilerplate quickly to cover the kinds of edge cases that are tedious to write by hand.
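For instance, this is the kind of edge case boilerplate that's tedious to write by hand but easy to check once generated (the `parseMoney` helper and its cases are hypothetical):

```js
// Hypothetical: tests for a parseMoney("$1,234.56") -> 1234.56 helper.
import { parseMoney } from "./parseMoney";

describe("parseMoney", () => {
  test.each([
    ["$0.00", 0],
    ["$1,234.56", 1234.56],
    ["1234.56", 1234.56],
    ["-$5.00", -5],
    ["  $7.10  ", 7.1],
  ])("parses %s", (input, expected) => {
    expect(parseMoney(input)).toBeCloseTo(expected);
  });

  test("rejects garbage input", () => {
    expect(() => parseMoney("not money")).toThrow();
  });
});
```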
What I'm telling you is that I use LLMs on a daily basis to implement functions, UI components, API endpoints, http client calls, and so on. I'm not experiencing the problems which you insist I should be experiencing in terms of code maintainability, testing, or documentation.
It took me a few months to develop intuition for where LLMs are likely to produce code that's useful, and where they're likely to fail. It also took me a bit of time to figure out how to limit scope, and provide enough scaffolding to ensure that I get good results. Having invested the time to learn to use the tool effectively, I very much see the benefits as I'm able to work with Js effectively and ship stuff despite being very new to the ecosystem.
I also find LLMs are great for digging through existing codebases, and finding parts that you want to change when you add features. This has always been a huge pain when starting on large projects, and these things drastically reduce ramp up time for me.
You're telling me that your experience is wildly different from mine and that you don't find these tools save any time, hence my impression that you might not have spent the time to actually learn to use them effectively. If you come in with the mindset that the tool is not useful, then you fiddle around with it, get bad results that simply confirm your existing bias, and move on.
I think that's going to change now though, as a result of LLMs. We're going to be stuck with whatever was the norm when the data was harvested, forever.
Assuming the use of these tools is dominant over library developers. Which I don't think it will be. But they may write their libraries in a way that is meant to be LLM-friendly. Simple, repetitious, and with documentation and building blocks that are easily associated with semi-competent dev instructions.
I find Gemini really useful for coding, but as you say it's no replacement for a human coder, not least because of the way it fails silently, e.g. it will always, in my experience, come up with the hackiest solution imaginable for any sort of race condition, so someone has to be there to say WTF GEMINI, ARE YOU DRUNK. I think there is something kind of transformative about it: it's like going from a bicycle to a car. But the thing is, both need to be driven, and the latter has the potential to fail even harder.
Exactly, it's a tool, and if you learn to use it then it can save you a lot of time, but it's not magic and it's not a substitute for understanding what you're doing.