[-] piggy@hexbear.net 3 points 1 day ago

Okay let me ask this question:

Who is this useful for? Who is the target audience for this?

[-] yogthos@lemmygrad.ml 3 points 1 day ago

It's useful for me; I'm the target audience for this. I'm working on a React project right now, and I haven't touched JS in close to a decade. I know what I want to do conceptually, and I have plenty of experience designing applications. However, I'm not familiar with the nitty gritty of how React works and how to do what I want with it. This tool saves me a ton of time googling these things and wasting hours on sites like Stack Overflow.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

I know what I want to do conceptually, and I have plenty of experience designing applications.

How does AI actually help you traverse the concepts of React, whose nitty-gritty workings you admit you don't know, when designing your application? React is an opinionated framework with specific ways of doing things, and those opinions shape the designs and concepts that are technically feasible within React itself.

For example, React isn't really optimized to crunch a ton of data performantly, so if you're getting constant data updates over a web socket from multiple sources and you want some or all of those changes reflected in the UI, you're going to have a bad time compared to something with finer-grained change detection out of the box, such as Angular.
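
To make that concrete: the usual React workaround is to stop rendering per message altogether. A minimal sketch of the buffer-and-flush pattern (the tick shape, URL, and 250ms interval are all invented for illustration):

```tsx
import { useEffect, useRef, useState } from "react";

// Hypothetical tick shape; stands in for whatever the socket actually pushes.
type Tick = { symbol: string; price: number };

function useTicks(url: string) {
  const [ticks, setTicks] = useState<Record<string, number>>({});
  const pending = useRef<Tick[]>([]);

  useEffect(() => {
    const ws = new WebSocket(url);
    // The naive approach calls setTicks per message, i.e. one React render
    // per socket message. Buffering and flushing on an interval caps renders
    // at ~4/sec no matter how fast the data arrives.
    ws.onmessage = (e) => pending.current.push(JSON.parse(e.data));
    const flush = setInterval(() => {
      if (pending.current.length === 0) return;
      setTicks((prev) => {
        const next = { ...prev };
        for (const t of pending.current) next[t.symbol] = t.price;
        return next;
      });
      pending.current = [];
    }, 250);
    return () => {
      clearInterval(flush);
      ws.close();
    };
  }, [url]);

  return ticks;
}
```

Whether you find that hand-rolled buffer acceptable is exactly the kind of tradeoff I'm talking about.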

How does AI help you choose between functional and class-based React components? How much of your application is typical developer copy-pasta instead of higher-order components (HOCs) shared across similar functionality? How did AI help you with that? How is AI helping you apply concepts like SOLID to the design of your component tree? How does AI help you decide how to architect components and their children that need a lifecycle outside the typical change-binding flow?
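
(For anyone following along, an HOC is a higher-order component: a function that takes a component and returns a wrapped one, so shared behavior lives in one place instead of being copy-pasted. A minimal sketch; the withLoading/fetchData names are invented:)

```tsx
import React, { ComponentType, useEffect, useState } from "react";

// An HOC: wraps any component that needs fetched data behind a loading state,
// instead of copy-pasting the same fetch/spinner logic into each component.
function withLoading<P extends object>(
  Wrapped: ComponentType<P & { data: unknown }>,
  fetchData: () => Promise<unknown>
) {
  return function WithLoading(props: P) {
    const [data, setData] = useState<unknown>(null);
    useEffect(() => {
      fetchData().then(setData);
    }, []);
    if (data === null) return <p>Loading…</p>;
    return <Wrapped {...props} data={data} />;
  };
}
```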

This, in my opinion, is the crux of the issue: AI cannot solve this problem for you, nor can it reasonably explain it in a technical way beyond parroting the vagaries of what I said above. It cannot confer understanding of complex abstract concepts that are fuzzy and have grey areas. It can tell you explicitly that something may not work, but it cannot realistically educate you on the tradeoffs.

It seems to me that your answer boils down to "code monkey stuff". AI might help you swing a pickaxe, but it's not good at explaining where the mine is going to collapse based on the type of rock you're digging in. Another way of thinking about it: you could build a building to the "building code" and it could still collapse. AI can explain the building code and loosely verify that you built something to it, but it cannot validate that your building is going to stay standing, nor can it practically tell you what you need to change.

My problem with AI tools boils down to this: software is a medium of communication. It communicates the basis of a problem and the technical process of solving it. Software Engineering is a field that attempts to create strong patterns of communication and practice in order to efficiently organize the production of software. The software industry at large (where most programmers get exposed to the process of building software) often eschews this discipline, because of scientific management (the idea that you can manage a process through fiduciary/managerial knowledge alone, rather than domain knowledge) and the need for instant delivery to maintain fictional competitive advantage and fictional YoY growth. The industry welcomes AI for two reasons:

  1. It can code monkey...eventually. Why pay programmers when you can ask CahpGBT to do it?
  2. It can fix the problem of needing to deliver without knowing what you're doing... eventually. It fixes the problem of communication without relying on building up the knowledge and practice of Software Engineering. In essence, why have people learn this discipline and its practical application when you can keep the blind leading the blind, because ChadGTP can see for us?

This is a disservice to programmers everywhere, especially younger ones, because it destroys the social reproduction of the capacity to build scalable software and replaces it with, you guessed it, machine rites. In practice it's the apotheosis of Conway's Law in the software industry. We build needlessly complex software that works coincidentally, and soon that software will be analyzed, modified, and ultimately created by a tool that is itself an overly complex statistical model, one that also works through the coincidence of statistical approximations.

[-] yogthos@lemmygrad.ml 3 points 1 day ago

How does AI actually help you traverse the concepts of React, whose nitty-gritty workings you admit you don't know, when designing your application?

It helps me by showing me the syntax and patterns that map to what I'm trying to do conceptually. By pointing me in the right direction, it saves me time searching for these things. I don't know why that's so difficult for you to understand.

For example, React isn't really optimized to crunch a ton of data performantly, so if you're getting constant data updates over a web socket from multiple sources and you want some or all of those changes reflected in the UI, you're going to have a bad time compared to something with finer-grained change detection out of the box, such as Angular.

That's not a problem I'm solving, and in practice most UIs don't actually deal with a lot of data, because the human user is the limiting factor. I'm working on an application that does fairly vanilla things here.

How does AI help you choose between functional and class-based React components? How much of your application is typical developer copy-pasta instead of higher-order components (HOCs) shared across similar functionality? How did AI help you with that? How is AI helping you apply concepts like SOLID to the design of your component tree? How does AI help you decide how to architect components and their children that need a lifecycle outside the typical change-binding flow?

That's not a problem it's solving for me. As I've explained, I already have plenty of experience, and I know how I like to structure applications. I'm used to using re-frame in Clojure, and I'm just looking at how to express similar patterns in React. The AI does an excellent job of helping me discover them.
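
To give a concrete flavor of the mapping I mean, here's a rough sketch of a re-frame-ish shape in React: one central app-db, events as plain data, and a subscription-style hook. This is an analogy with invented names and state, not re-frame's actual API:

```tsx
import React, {
  createContext,
  useContext,
  useReducer,
  type Dispatch,
  type ReactNode,
} from "react";

// Central app state ("app-db") and events-as-data, re-frame style.
type AppState = { count: number };
type Event = { type: "inc" } | { type: "reset" };

// The reducer plays the role of re-frame's registered event handlers.
function handle(state: AppState, event: Event): AppState {
  switch (event.type) {
    case "inc":
      return { count: state.count + 1 };
    case "reset":
      return { count: 0 };
  }
}

const AppDb = createContext<{ state: AppState; dispatch: Dispatch<Event> } | null>(null);

export function AppDbProvider({ children }: { children: ReactNode }) {
  const [state, dispatch] = useReducer(handle, { count: 0 });
  return <AppDb.Provider value={{ state, dispatch }}>{children}</AppDb.Provider>;
}

// Rough analogue of a re-frame subscription: a selector over the app-db.
export function useSub<T>(select: (s: AppState) => T): T {
  const db = useContext(AppDb);
  if (!db) throw new Error("useSub must be used inside AppDbProvider");
  return select(db.state);
}
```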

This, in my opinion, is the crux of the issue: AI cannot solve this problem for you, nor can it reasonably explain it in a technical way beyond parroting the vagaries of what I said above. It cannot confer understanding of complex abstract concepts that are fuzzy and have grey areas. It can tell you explicitly that something may not work, but it cannot realistically educate you on the tradeoffs.

I don't need it to confer understanding of abstract concepts to me. I need it to show me common patterns within a particular library that map to the concepts I'm already familiar with. I don't need it to educate me on any tradeoffs.

Meanwhile, the problems you're fixating on are not inherent to AI in any way and have always existed in the software industry. Cargo culting is a term for a reason; no AI has been necessary for people to do that, nor does the absence of AI prevent it from happening. So your whole argument is completely misdirected, because AI is not the problem here. People who were going to cargo cult were going to do that regardless of the tooling.

This is a disservice to programmers everywhere, especially younger ones, because it destroys the social reproduction of the capacity to build scalable software and replaces it with, you guessed it, machine rites.

That's absolute nonsense. It doesn't destroy the capacity to build scalable software any more than stack overflow does.

We build needlessly complex software that works coincidentally, and soon that software will be analyzed, modified, and ultimately created by a tool that is an overly complex statistical model that also works through the coincidence of statistical approximations.

You're saying this as if it weren't the case long before AI showed up on the scene. You're making up a giant straw man of how you pretend software development works, one that is utterly divorced from what we see happening in the real world. The AI doesn't change this one bit.

[-] piggy@hexbear.net 1 points 1 day ago* (last edited 1 day ago)

You're making up a giant straw man of how you pretend software development works, one that is utterly divorced from what we see happening in the real world. The AI doesn't change this one bit.

You're commenting this under a post where an AI has spit out an optimization for an existing dot product function that's already ~150-250 lines long, depending on which of the roughly six architecture-specific implementations you count. The PR in question features two devs finger-pointing about who is responsible for writing tests, and its notes admit that the original and new functions often don't give the correct answer. Just an amazing response. Chef's kiss.

What a wonderful way to engage with my post. You win bud. You're the smartest. This industry would never mystify a basic concept that's about 250 years old with a 716-line PR, through its inability to communicate, organize, and follow an academic discipline.

[-] yogthos@lemmygrad.ml 2 points 1 day ago

What a wonderful way to engage with my post. You win bud. You’re the smartest.

Amazing counterpoint you've mustered there, when presented with the simple fact that all the problems you're describing were already happening long before AI showed up on the scene. Way to engage in good-faith dialogue. Bravo!

[-] piggy@hexbear.net 1 points 1 day ago* (last edited 1 day ago)

I've never said that AI is the cause of those problems; those are words you're putting in my mouth. I've said that AI is being used as a solution to those problems in the industry, when in reality using AI to solve them exacerbates them while allowing companies to reap "productive" output.

For some reason programmers can understand "AI slop", but if the AI is generating code instead of stories, images, audio, and video, it's no longer "AI slop", because we're exalted in our communion with the machine spirits! Our holy logical languages could never encode the heresy of slop!

[-] yogthos@lemmygrad.ml 2 points 1 day ago

Ok, so if you agree that AI is not the source of those problems, then it's not clear what you're arguing about. Nobody is arguing for using AI on the problems you keep mentioning, and you keep ignoring that. I've given you concrete examples of how this tool is useful for me; you've just ignored them and continued arguing about the straw man you want to argue about.

The slop has always been there, and AI isn't really changing anything here.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

Nobody is arguing for using AI on the problems you keep mentioning, and you keep ignoring that.

This is absolutely not true. Almost every programmer I know has had their company try to "AI" their documentation or "AI" some process, only to fail spectacularly, because the data the AI operates on is either missing or of too low quality. I have several friends at the Lead/EM level who have had to take too much time out of their schedules to talk a middle manager down from sinking resources into AI boondoggles.

I've had to talk people off this ledge, and a lead who works under me (I'm technically a platform architect across 5 platform teams) decided to try it anyway and burned a couple of days on a test run. Guess what: the results were garbage.

Beyond that, the problem is that AI is a useful tool for IGNORING the problems.

I've given you concrete examples of how this tool is useful for me; you've just ignored them and continued arguing about the straw man you want to argue about.

I started this entire comment thread with an actual critique, a point, which you have, in very debate-bro fashion, consistently called a strawman. If I were feeling less charitable, I could call the majority of your arguments non sequiturs to mine. I have never argued that AI isn't useful to somebody. In fact, I'm arguing that it's dangerously useful for decision makers in the software industry, based on how they WANT to make software.

Say a piece of software is a car, and a middle manager wants that car to have a wonderful proprietary light bar, and wants to use AI to build it. The AI might actually build the light bar, in the narrow sense of meeting the basic specs the decision maker feels might sell well on the market. However, the light bar adds 500 lbs of weight, so when the driver gets in, the front suspension is on the floor, and the wiring loom is now a ball of yarn. But the car ends up being just shitty enough to sell, and that's the important thing.

And remember, the AI doesn't complain about resources or order of operations when you ask it to make a light bar at the same time as a cool roof rack, a kick-ass sound system, and a more powerful engine. And hey, if the car doesn't work after one of these, we can just ask it to regenerate the car design and then have another AI test it! And you know what, it might even be fine to keep 1 or 2 nerds around, just in case we have to painfully take the car apart, only to discover we're overloading the alternator from both ends.

[-] yogthos@lemmygrad.ml 2 points 1 day ago

I'm talking about our discussion here. AI can be misused just like any tool; there's nothing surprising or interesting about that. What I'm telling you is that, from my experience, it can also be a useful tool when applied properly.

I started this entire comment thread with an actual critique, a point, which you have, in very debate-bro fashion, consistently called a strawman.

I've addressed your point repeatedly in this discussion.

In fact, I'm arguing that it's dangerously useful for decision makers in the software industry, based on how they WANT to make software.

And I'm once again going to point out that this has been happening for a very long time. If you've ever worked at a large corporation, then you've seen that they take a monkeys-at-typewriters approach to software development. These companies don't care about code quality one bit; they just want fungible developers whom they can hire and fire at will. I've seen far more nightmarish code produced under those conditions than any AI could ever hope to make.

The actual problem isn't AI, it's the capitalist mode of production and the alienation of workers. That's the actual source of the problems, and that's why these problems exist regardless of whether people use AI or not.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

The way that you're applying the tool "properly" is ultimately the same way middle managers want to apply it; the only difference is that you know what you're doing, acting as a quality filter and knowing where the code goes and how to run it. AI can't solve the former (quality), but there are people working on a wholesale solution for the latter two. And they're getting their data from people like you!

In terms of the productive process, there's not as much daylight between the two use cases as you seem to think there is.

[-] yogthos@lemmygrad.ml 0 points 1 day ago

If people figure out how to automate the entire coding pipeline, then more power to them. I don't see this happening in the near future myself. In the meantime, I'm going to use tools that make my life better. Also, I'm not sure why you'd assume people are getting data from me, given that I run models locally with ollama. I find deepseek-coder works perfectly fine with a local setup.
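
For what it's worth, the local setup is just an HTTP server on my own machine; nothing gets sent anywhere. A minimal sketch of talking to it (this assumes ollama's default port 11434 and a locally pulled deepseek-coder model; the prompt is made up):

```ts
// Query a local ollama instance; nothing leaves the machine.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "deepseek-coder", prompt, stream: false }),
  });
  const body = await res.json();
  return body.response; // ollama puts the completion in `response`
}

askLocalModel("What's the React analogue of a re-frame subscription?").then(
  console.log
);
```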

[-] piggy@hexbear.net 3 points 1 day ago

For every one of you, there are 1000 junior engineers running Copilot.

[-] yogthos@lemmygrad.ml 0 points 1 day ago

Sure, but before that there were 1000 junior engineers mindlessly copy/pasting stuff from stackoverflow till it sort of worked.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

StackOverflow copypasta wasn't a productive process that sought to remove the developer from the equation, though.

This isn't about a tech scaling strategy of training high-quality, high-productivity engineers vs "just throwing bodies at it" anymore. This is about the next level of "just throwing bodies at it": "just throwing compute at it".

This is technically feasible within the next decade, unless, inshallah, these models collapse from ingesting their own awful data rather than improving.

[-] yogthos@lemmygrad.ml 0 points 1 day ago

StackOverflow copypasta very much did remove the developer from the equation. People would just mindlessly string code together without bothering to understand what they were doing or why the code worked. It has become common practice in the industry at this point, and huge codebases have been erected using this method.

Every large corporation uses this method because they want fungible devs. Since developers with actual skill don't want to be treated as fungible cogs, the selection pressures ensure that people who can't get jobs with better conditions end up working in these places. They're just doing it for a paycheck, and they basically bang their heads against the keyboard till something resembling working code falls out. I'll also remind you of the whole outsourcing craze, which had basically the exact same goal corps now want to accomplish with AI.

There's absolutely nothing new happening here that hasn't been going on for literally decades. What you're describing is already very much feasible and it's happening at scale.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

Every large corporation uses this method because they want fungible devs. Since developers with actual skill don't want to be treated as fungible cogs, the selection pressures ensure that people who can't get jobs with better conditions end up working in these places. They're just doing it for a paycheck, and they basically bang their heads against the keyboard till something resembling working code falls out. I'll also remind you of the whole outsourcing craze, which had basically the exact same goal corps now want to accomplish with AI.

Damn that's crazy, imagine working a coding job for a paycheck! Soon you won't even be able to!

[-] yogthos@lemmygrad.ml 0 points 1 day ago

My point was that people working in large corps don't care about beautiful engineering, and they are writing exactly the kind of slop you decry.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

Yes and?

  1. They're getting paid.
  2. It's a job.
  3. They're humans who can choose to be better.
  4. They're humans who can choose to fight their bosses out of some idiotic love of the game, to the detriment of their own mental health, because they're crazy. (I'm describing myself.)
  5. They're humans who can stall or break awful things from coming to pass by refusing to work on something or sabotaging it.

This is about a door to those possibilities closing, not about how many software developers are forced through it. I'm not going to cheer on an awful totalizing future dark age of technology simply because the current odds are bad.

And yeah, this won't actually kill higher-end devs; in my understanding of the world, I'll be able to find a job. But it will kill the social reproduction of people like me. In the same way that the iPad killed broad user-focused technological literacy from zoomers to millennials, LLMs will ultimately destroy the current level of developer-focused technological literacy. There won't even be guys who can't code their way out of a paper bag using StackOverflow, or guys who memorize LeetCode solutions. It will just be old-heads powerful enough to avoid the cull and nobody else, until we die.

[-] yogthos@lemmygrad.ml 0 points 1 day ago

And as I already pointed out above, the problem here isn't with automation but with capitalism. In a sane system, automation would mean more free time for people, and less tedium. People are doing these jobs not because they want to be doing them, but because it's a way to survive in this shitty system.

Automation has been replacing jobs at an ever-increasing rate since the industrial revolution started, and every time, technological progress has been met with resistance. The whole idea that LLMs are going to kill the social reproduction of developers is silly beyond belief. People who enjoy coding will keep doing it because they enjoy it. The nature of the work might change, the same way it changed when we got compilers, garbage collectors, syntax highlighting, linters, and so on. There were people like you decrying all these things, making exactly the same kind of argument about how they would destroy the programmer profession and how nobody would know how to write proper code anymore. I'm old enough to have seen this nonsense many times during my career.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

And as I already pointed out above, the problem here isn't with automation but with capitalism. In a sane system, automation would mean more free time for people, and less tedium. People are doing these jobs not because they want to be doing them, but because it's a way to survive in this shitty system.

There are certainly bad programming jobs, but programming jobs in general are extreme labor aristocracy. Yes, people are chasing the bag, but these are certainly not "survival jobs". Within the system, until you reach senior levels, there is no real discriminator between "bag chaser" and "person who is trying to learn"; both are going to get squad-wiped.

There's certainly still going to be a path to being a SE, but it's going to be autodidact hobbyists who start extremely young. As a person who has been running Linux since 5th grade, who got a CCNA at 16, and who has only had programming or networking jobs since high school, this is the worst path, because the reality of the career at scale murders your passion. If I don't age out, I'm betting my next 10 years are going to be uncomfortably close to Player Piano, and that's entirely dreadful. Instead of teaching juniors to program at scale while giving them boring CRUD tasks, I'll be communing with machine spirits so "they" can generate the basic CRUD endpoints and the component screens.

The reality of being a greybeard is that if you're close to retirement in this industry, like my dad is, you're going to do the same shit jobs as the bag chasers. They'll stick you in the basement and steal your stapler, if you even make it past the vibe-check interview. The only way to avoid this is to be a lifer somewhere, but that is a challenge in itself.

The difference between the previous developments and now is that it may improve productivity today, in your case and in the case of the 1000 juniors, but tomorrow it's going to actually undercut demand for people. Building a system that builds and deploys applications has been the goal of several public and private projects I've been privy to. I agree that the exact use case you linked is a way to avoid having to learn ANTLR or how an AST works, and to flip a coin on whether it works. In practice, though, this is step 1. Code generation has improved significantly in the last year alone across the whole LLM ecosystem. The goal isn't to write maintainable code or readable code; the goal is to write deployable code with 90% feature coverage, filling the last 10% with freelancers or in-house engineers, depending on scale. To me that's a worse job than the job I have now; at least now I can teach others how to do what I do. If that's taken away from me, I'm not fucking doing this job anymore. I don't care about computers, because in reality this job at scale is about convincing morons to stop micromanaging how you build things.

[-] yogthos@lemmygrad.ml 1 points 1 day ago* (last edited 1 day ago)

What you're afraid of is precisely what was tried with outsourcing dev jobs. That proved to work in some areas where you have very boring CRUD apps, but was a complete failure in others. I expect LLMs are going to work out in a very similar fashion.

Meanwhile, the most enjoyable coding I've done was never done for money. If anything, I can see AI taking over the work and turning programming from a career into a way for people to express themselves artistically, the way you see with the demoscene, live coding, generative art, and so on. I don't see that as a bad thing.

[-] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

What you're afraid of is precisely what was tried with outsourcing dev jobs. That proved to work in some areas where you have very boring CRUD apps, but was a complete failure in others. I expect LLMs are going to work out in a very similar fashion.

Okay, but again: I'm not afraid of losing my job. I'm afraid that we're going to lose real capability as a society. It's like how our oligarchs are practically morons compared to past oligarchs who built hundreds of libraries, or how we no longer have the real capacity in the US to build rail.

I'm currently working as a platform architect, coordinating 5 teams over multiple products, building a platform for authoring, publishing, and managing rich educational courses across multiple grade levels. I still do most of the greenfield development, and I personally manage a DSL and its tools while figuring out platform requirements and timelines for other teams, including my own. I used to work on a real-time EEG system doing architecture and signal processing. I've architected and implemented medical logistics platforms. I've been first engineer at a couple of startups. I've literally written purpose-built ORMs, schedulers, middleware frameworks, and query frameworks from scratch. I've worked in almost every major common role at a principal level except security (which is mostly fake) and embedded: front end, back end, database optimization/integration, infrastructure, machine code on the JVM and x86, and distributed computing. I haven't worked in niches like networking, industrial, ML, or quantum; I'd only really want to explore quantum or networking, in reality. But quantum is something you typically need a PhD for; otherwise it's gonna be a bit grunty. OSS may bring up engineers for some of these roles, but in practice the majority of OSS projects don't reach the level of complexity I've worked at, and the ones that do aren't community projects, they're corporate ones.

Very few people can step into my shoes; most principal engineers I've met average out at one large project where they implemented a strangler pattern once or twice. The system currently has a hard time reproducing me; if the bottom falls out, it's gonna be good game. I'm happy that LLMs are helping you rediscover your passion, but the kind of stuff you're talking about is toys. Personally, they're not fun; they're mostly boring. I enjoy building large technical systems in complex problem spaces in a high-level, reproducible way. Everything else gets stale quickly. I've built systems where if you blow on the code the tests turn red, without test maintenance and creation being a burden; the goal in that system was a high-value test in 5 minutes. The future I see is that everything just gets shittier, because the skill that is hard to find, and that is dying, is understanding essential complexity at the 10,000 ft view, the 100 ft view, the 1 ft view, and the 1 micrometer view. I can barely find developers who innately understand essential complexity at even one of those viewpoints. I've met about 20 who can do all 4, out of the maybe 400-ish devs I've met in my life.

The only passion project I wanted to start I basically decided to call off, because if it succeeded it would be bad for the world: high-level persona-management software that could run swarms in the tens of thousands without being discovered.

If LLMs remove programming as a job, that might be nice, but in practice it's just gonna mean more people on the struggle bus.

[-] jurassicneil@hexbear.net 1 points 17 hours ago* (last edited 17 hours ago)

That's just a straw man, because there's no reason why you wouldn't be looking through your code. What LLM does is help you find areas of the code that are worth looking at.

It's not a strawman, because classifying unperformant code is a different task from generating performant replacement code. An LLM can only generate code via its internal weights plus the input; it doesn't guarantee that the code is compilable, performant, readable, understandable, self-documenting, or much of anything.

The performance gain here is coincidental, simply because the generated code uses functions that call processor features directly rather than being optimized into processor features by a compiler. LLM classifiers are also statistically analyzing the AST for performance; they aren't actually performing real static analysis of the AST or its compiled version. The model doesn't calculate a Big-O or really know how to reason through this problem; it's just primed so that when you write the for loop to sum, that's "slower" than using _mm_add_ps. It doesn't even know which cases of the for loop compile down to a _mm_add_ps instruction, on which compilers, at which optimization levels.
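
(To spell out what's being mystified: a SIMD dot product is just lane-wise multiply-accumulate. A scalar sketch of the idea, in TypeScript for lack of intrinsics here, with four accumulators standing in for the four float32 lanes that _mm_mul_ps/_mm_add_ps process per instruction:)

```ts
// Scalar model of 4-lane SIMD accumulation: each s0..s3 plays the role of
// one float32 lane; the final add across lanes mirrors the horizontal
// reduction step a real SIMD implementation performs.
function dot(a: Float32Array, b: Float32Array): number {
  let s0 = 0, s1 = 0, s2 = 0, s3 = 0;
  const n4 = a.length - (a.length % 4);
  for (let i = 0; i < n4; i += 4) {
    s0 += a[i] * b[i];
    s1 += a[i + 1] * b[i + 1];
    s2 += a[i + 2] * b[i + 2];
    s3 += a[i + 3] * b[i + 3];
  }
  let sum = s0 + s1 + s2 + s3; // "horizontal" add across lanes
  for (let i = n4; i < a.length; i++) sum += a[i] * b[i]; // remainder loop
  return sum;
}
```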

Lastly, you injected this line of reasoning when you basically said "why would I do this boring stuff as a programmer when I can get the LLM to do it". It's nice that there's a tool you can politely ask to parse your garbage and replace it with other garbage that happens to use a more performant function. But not only is this not Software Engineering; a performant dot product is a solved problem at EVERY level of abstraction. This is the programming equivalent of tech bros reinventing the train every 5 years.

The fact that this is needed at all is a problem in and of itself with how people are building this software. This is machine-spirit communion with technojargon. Instead of learning how to vectorize algorithms, you're feeding your garbage code through an LLM to produce garbage code with SIMD instructions in it. That is quite literally stunting your growth as a Software Engineer. You are choosing not to learn how things actually work because it's too hard to parse through the existing garbage. A SIMD dot product algorithm is literally a two-week college junior homework assignment.

Understanding what good uses for it are and the limitations of the tech is far more productive than simply rejecting it entirely.

I quite literally pointed out several limitations, from a Software Engineering perspective, in the post you replied to and in this one.

Hey, so I read your comments and found them insightful. As a Software Engineer who just started his first job, what would your advice be for the right approach to growing and learning as a software engineer? Both in general and with respect to using LLMs while learning/coding.

[-] yogthos@lemmygrad.ml 2 points 1 day ago

I’m afraid that we’re going to lose real capability as a society.

The US has far bigger problems than LLMs to worry about in the near future. Personally, I'd be far more worried about the rate of deindustrialization in the States and the lack of people who know trades, engineering, farming, and so on. All of that is far more crucial than the software industry. Meanwhile, even if people started losing this expertise, it's not as if it couldn't be learned again when needed. The whole software industry has only existed for a handful of decades, and society got on just fine before it appeared. This is just complete hyperbole, I'm afraid.

What I think is most likely to happen with serious engineering going forward is that the human aspect of the work will shift towards writing formal specifications that encode the desired constraints for the system, including things like memory usage, runtime complexity, and so on, and then having LLMs figure out how to generate code that passes the spec.

Incidentally, you can do this stuff without LLMs as well. For example, Barliman is a program synthesis engine that can take a signature for a function and figure out an implementation; it can even compose functions it already wrote to solve more complex problems. Combining something like that with LLMs could be very effective.
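
As a toy illustration of spec-first development, here's a property-based spec written with the fast-check library; any candidate implementation, handwritten or generated, has to pass it. The sort example is mine, not something Barliman does:

```ts
import fc from "fast-check";

// The spec: a sort function must return an ordered permutation of its input.
function checkSortSpec(sort: (xs: number[]) => number[]): void {
  fc.assert(
    fc.property(fc.array(fc.integer()), (xs) => {
      const out = sort(xs);
      const ordered = out.every((v, i) => i === 0 || out[i - 1] <= v);
      const sameMultiset =
        out.length === xs.length &&
        [...out].sort((a, b) => a - b).join(",") ===
          [...xs].sort((a, b) => a - b).join(",");
      return ordered && sameMultiset;
    })
  );
}

// A correct implementation passes; a buggy one fails with a counterexample.
checkSortSpec((xs) => [...xs].sort((a, b) => a - b));
```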

I see this as an advancement similar to the creation of high-level languages. Plenty of people moaned that nobody learned assembly anymore when C showed up, making arguments very similar to the ones you're making now. Then people moaned about garbage collection, and how you weren't a real programmer if you weren't managing memory by hand. Every time a new technology comes along that makes programming easier and more accessible, there are inevitably people screaming that the sky is falling. LLMs are just the latest iteration of this phenomenon.

And more people ending up on the struggle bus because we have more automation is a result of capitalist relations. That's where the ire should be directed.
