top 50 comments
[-] black_flag@lemmy.dbzer0.com 4 points 1 month ago

I think it's going to require a change in how models are built and optimized. Software engineering requires models that can do more than just generate code.

You mean to tell me that language models aren't intelligent? But that would mean all these people cramming LLMs in places where intelligence is needed are wasting their time?? Who knew?

Me.

[-] eager_eagle@lemmy.world 1 points 1 month ago

I have a solution for that, I just need a small loan of a billion dollars and 5 years. #trustmebro

[-] black_flag@lemmy.dbzer0.com 1 points 1 month ago

Only one billion?? What a deal! Where's my checkbook!?

[-] TuffNutzes@lemmy.world 3 points 1 month ago

The LLM worship has to stop.

It's like saying a hammer can build a house. No, it can't.

It's useful to pound in nails and automate a lot of repetitive and boring tasks but it's not going to build the house for you - architect it, plan it, validate it.

It's similar to the whole 3D printing hype. You can 3D print a house! No you can't.

You can 3D print a wall, maybe a window.

Then you have a skilled craftsman put it all together for you, ensure fit and finish, and essentially build the final product.

[-] natecox@programming.dev 1 points 1 month ago

I hate the simulated intelligence nonsense at least as much as you, but you should probably know about this if you’re saying you can’t 3d print a house: https://youtu.be/vL2KoMNzGTo

[-] TuffNutzes@lemmy.world 4 points 1 month ago

Yeah, I've seen that before, and it's basically what I'm talking about. Again, that's not "printing a 3D house" as the hype would lead one to believe. It's extruding cement to build the walls around very carefully placed framing, heavily managed and coordinated by people, and finished with plumbing, electrical, etc.

It's cool that they can bring in this huge piece of equipment to extrude cement and form some kind of wall. It's a neat proof of concept. I personally wouldn't want to live in a house that looked like that or was constructed that way. Would you?

[-] scarabic@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

it's basically what I'm talking about

Well, a minute ago you were saying that AI worship is akin to saying

a hammer can build a house

Now you’re saying that a hammer is basically the same thing as a machine that can create a building frame unattended? Come on. You have a point to be made here but you’re leaning on the stick a bit too hard.

[-] natecox@programming.dev 0 points 1 month ago

I mean, “to 3d print a wall” is a massive, bordering on disingenuous, understatement of what’s happening there. They’re replacing all of the construction work of framing and finishing all of the walls of the house, interior and exterior, plus attaching them and insulating them, with a single step.

My point is if you want to make a good argument against LLMs, your metaphor should not have such an easy argument against it at the ready.

[-] DireTech@sh.itjust.works 4 points 1 month ago

Did you see another video about this? The one linked only showed the walls and still showed them doing interior framing. Nothing about windows, electrical, plumbing, insulation, etc.

What they showed could speed up construction but there are tons of other steps involved.

I do wonder how sturdy it is since it doesn’t look like rebar or anything else is added.

load more comments (6 replies)
[-] poopkins@lemmy.world 1 points 1 month ago

Spoken like a person who has never been involved in the construction of a home. It's effectively doing the job of (poorly) pouring concrete which isn't the difficult or time consuming part.

load more comments (1 replies)
[-] amju_wolf@pawb.social 1 points 1 month ago

Huh? They just made the walls. Out of cement.

Making the walls of a house is one of the easiest steps, if not the easiest. And these would still need insulation, electrical, etc. And they look like shit.

[-] frog_brawler@lemmy.world 0 points 1 month ago

You’re making a great analogy with the 3D printing of a house.

However, if we consider the 3D printed house scenario, that skilled craftsman is now able to do things on his own that he would have needed a team for in the past. Most, if not all, of the less skilled members of that team are not getting any experience within the craft at that point. They're no longer necessary when one skilled person can do things on their own.

What happens when the skilled and highly experienced craftsmen that use AI as a supplemental tool (and subsequently earn all the work) eventually retire, and there’s been no juniors or mid-levels for a while? No one is really going to be qualified without having had exposure to the trade for several years.

[-] TuffNutzes@lemmy.world 1 points 1 month ago

Absolutely. This is a huge problem, and I've read about it from a number of sources. It will have a major impact on engineering and information work.

Interestingly enough, a similar shortage occurred in the trades when information work was up and coming and the trades were shunned as a career path by many. Now we don't have enough plumbers and electricians. Tradespeople are now finding their skills in high demand and charging very high rates.

[-] ChokingHazard@lemmy.world 2 points 1 month ago

The trades problem is a typical small business problem with toxic work environments. I knew plenty who washed out of the trades because of that. They're the “nobody wants to work anymore” tradesmen, but really it's “nobody wants to work with me for what I'm willing to pay.”

[-] TuffNutzes@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

I don't doubt that that's a problem in some of those small businesses, too.

I have a great electrician that I call all the time. He's probably in his late 60s. It's definitely more of a rough and tumble work environment than IT work, for sure, but he's a good guy and he pays his people well and he charges me an arm and a leg.

But we talk about it, and he tells me how he would have charged a quarter of the price for the same work just 10 years ago. And honestly, he's one of the more affordable ones.

So it definitely seems like the trades are the place to be these days, with so few good ones around. But yeah, you have to pick and choose who's mentoring you.

[-] dreadbeef@lemmy.dbzer0.com -1 points 1 month ago* (last edited 1 month ago)

3d printed concrete houses exist. Why can't you 3d print a house? Not the best metaphor lol

[-] Nalivai@lemmy.world 1 points 1 month ago

No, they aren't. With enough setup and highly specialized, expensive equipment, you can pour shitty concrete walls that will be way more expensive and worse than if you did it normally. That will give you 20% of the house, at best. 20% of a not-very-good house.

[-] dantheclamman@lemmy.world 2 points 1 month ago

LLMs are useful for providing generic examples of how a function works. This is something that would previously take an hour of searching the docs and online forums, but the LLM can do very quickly, which I appreciate. But I have a library I want to use that was just updated with entirely new syntax. The LLMs are pretty much useless for it. Back to the docs I go! Maybe my terrible code will help to train the model. And in my field (marine biogeochemistry), the LLM generally cannot understand the nuances of what I'm trying to do. Vibe coding is impossible. And I doubt the training set will ever be large or relevant enough for vibe coding to be feasible.

[-] corsicanguppy@lemmy.ca 3 points 1 month ago

Vibe coding

The term for that is actually 'slopping'. Kthx ;-)

[-] isaaclyman@lemmy.world 1 points 1 month ago

Clearly LLMs are useful to software engineers.

Citation needed. I don’t use one. If my coworkers do, they’re very quiet about it. More than half the posts I see promoting them, even as “just a tool,” are from people with obvious conflicts of interest. What’s “clear” to me is that the Overton window has been dragged kicking and screaming to the extreme end of the scale by five years of constant press releases masquerading as news and billions of dollars of market speculation.

I’m not going to delegate the easiest part of my job to something that’s undeniably worse at it. I’m not going to pass up opportunities to understand a system better in hopes of getting 30-minute tasks done in 10. And I’m definitely not going to pay for the privilege.

[-] skisnow@lemmy.ca 2 points 1 month ago

I've found them useful, sometimes, but nothing like a fraction of what the hype would suggest.

They're not adequate replacements for code reviewers, but getting an AI code review does let me occasionally fix a couple of blunders before I waste another human's time with them.

I've also had the occasional bit of luck with "why am I getting this error" questions, where it saved me 10 minutes of digging through the code myself.

"Create some test data and a smoke test for this feature" is another good timesaver for what would normally be very tedious drudge work.

What I have given up on is "implement a feature that does X" questions, because it invariably creates more work than it saves. Companies selling "type in your app idea and it'll write the code" solutions are snake-oil salesmen.

[-] jj4211@lemmy.world 1 points 1 month ago

I have been using it a bit, still can't decide if it is useful or not though... It can occasionally suggest a blatantly obvious couple of lines of code here and there, but along the way I get inundated with annoying suggestions that are useless and I haven't gotten used to ignoring them.

I mostly work in a niche area the LLMs seem broadly clueless about, and prompt-driven code is almost always useless except when dealing with a super boilerplate usage of a common library.

I do know some people that deal with amazingly mundane and common functions and they are amazed that it can pretty much do their jobs, but they never really impressed me before anyway and I wondered how they had a job...

[-] Feyd@programming.dev 1 points 1 month ago

I don't use one, and my coworkers that do use them are very loud about it, and worse at their jobs than they were a year ago.

[-] hisao@ani.social -1 points 1 month ago

If my coworkers do, they’re very quiet about it.

Gee, guess why. Given the current culture of hate and ostracism, I would never outright say IRL that I like it or use it a lot. I would say something like "yeah, I think it can sometimes be useful when used carefully, and I sometimes use it too," while in reality it actually writes 95% of my code under my micromanagement.

[-] Feyd@programming.dev 1 points 1 month ago

Wut. At software shops the prevailing atmosphere is that you should use it and broadcast it as much as possible. This person's experience is not normal.

[-] PixelatedSaturn@lemmy.world 0 points 1 month ago

Good article, I couldn't agree with it more, it's exactly my experience.

The tech is being developed really fast, and that is the main issue when talking about AI. Most AI haters are using the issues we might have today to discredit the whole technology, which makes no sense to me.

And this issue the article talks about is apparent and whoever solves it will be rich.

However, it's interesting to think about the issues that come next.

[-] HarkMahlberg@kbin.earth 0 points 1 month ago

It's true, the tech will get better in the future, we just need to believe and trust the plan.

Same thing with crypto and NFT's. They were 99% scam by volume, but who wouldn't love moving their life savings into a digital ecosystem controlled by a handful of rich gambling addicts with no consumer protections? Imagine, you'll never need to handle dirty paper money ever again, we'll just put it all in a digital wallet somewhere controlled by someone else coughmastercardcough.

And another thing, we were too harsh on the Metaverse. Sure, spending 8 hours in VR could make you vomit, and the avatars made ET for the Atari look like Uncharted 4, but it was just in its infancy!

I too want to outsource all my critical thinking to a chatbot controlled by a wealthy, insular narcissist who throws Nazi salutes. The technology just needs time to mature. Who knows, maybe it can automate the exile of birthright citizens for us too!

/s

[-] PixelatedSaturn@lemmy.world 1 points 1 month ago

That's exactly the hyperbole I was talking about. Your post is full of obvious fallacies, but the fact that you are pushing everything to the absolutes is the silliest one.

[-] frezik@lemmy.blahaj.zone 0 points 1 month ago

To those who have played around with LLM code generation more than me, how are they at debugging?

I'm thinking of Kernighan's Law: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." If vibe coding reduces the complexity of writing code by 10x, but debugging remains just as difficult as before, then Kernighan's Law needs to be updated to say debugging is 20x as hard as vibe coding. Vibe coders have no hope of bridging that gap.
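Spelling out the arithmetic (granting the 10x writing speedup purely for the sake of argument):

```latex
% Kernighan: debugging is twice as hard as writing
D = 2W
% vibe coding makes writing 10x easier, debugging unchanged
W_{\text{vibe}} = W / 10
% so relative to the new writing effort:
D = 2W = 20\,W_{\text{vibe}}
```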

[-] very_well_lost@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

The company I work for has recently mandated that we must start using AI tools in our workflow and is tracking our usage, so I've been experimenting with it a lot lately.

In my experience, it's worse than useless when it comes to debugging code. The class of errors that it can solve is generally simple stuff like typos and syntax errors — the sort of thing that a human would solve in 30 seconds by looking at a stack trace. The much more important class of problems, errors in the business logic, it really, really sucks at solving.

For those problems, it very confidently identifies the wrong answer about 95% of the time. And if you're a dev who's desperate enough to ask AI for help debugging something, you probably don't know what's wrong either, so it won't be immediately clear if the AI just gave you garbage or if its suggestion has any real merit. So you go check and manually confirm that the LLM is full of shit, which costs you time... then you go back to the LLM with more context and ask it to try again. Its second suggestion will sound even more confident than the first ("Aha! I see the real cause of the issue now!"), but it will still be nonsense. You go waste more time to rule out the second suggestion, then go back to the AI to scold it for being wrong again.

Rinse and repeat this cycle enough times until your manager is happy you've hit the desired usage metrics, then go open your debugging tool of choice and do the actual work.

[-] hietsu@sopuli.xyz 1 points 1 month ago* (last edited 1 month ago)

I have next to zero experience with coding (unless you count a few months of Borland Delphi work back in the 00’s, which you shouldn’t). Yet I’ve managed to create half a dozen really useful tools for my work, and a few more for my hobbies too.

The inflection point for me was Gemini 2.5 Pro. Before that I was only successful with smaller scripts, using ChatGPT mostly. But with Gemini I was able to do Deep Research as the initial step to plan out the overall architecture, interfaces, technologies, etc., and fine-tune the actual coding prompt using that info.

A crucial step after the first generated (buggy) version is to copy-paste the code and errors into ChatGPT and/or Grok to get their take on it, then feed those ideas back to Gemini again. Some 5-10 iterations of this and I usually have a fully functional application or a component of a bigger piece of software. Problems at the moment usually arise if any particular file exceeds ~800 lines, or when there are many, many iterations. Then LLMs tend to get forgetful, dropping comments, reintroducing faults from earlier iterations, etc. Better to start a new session at that point.

Thinking of LLMs as just a lossy compression algo for all human knowledge, the parallel use of LLMs makes a kind of sense: all the companies use approximately the same data in their training, but end up with slightly different-looking “lossy big pictures” in the end. But if I “look at all these pictures” side by side, I can perhaps see more detail, or notice that where some pictures are fuzzy in one spot, another is much clearer.

LLMs seem to be very good at spotting the correct solution when they are given a couple of options or hypotheses about the cause of an issue. Most surprising to me is that Grok has been the one to solve the majority of the most stubborn bugs that the others have gotten stuck on.

With (Edit:) Gemini I just had my first “hole in one,” where it generated a flawless ~500 line web app on the very first try. I just gave it my Git codebase as a zip file and asked for a new module that interfaces with the existing stuff. Wild times.

[-] Pechente@feddit.org 0 points 1 month ago

Definitely not good. Sometimes they can solve issues but you gotta point them in the direction of the issue. Other times they write hacky workarounds that do the job for the moment but crash catastrophically with the next major dependency update.

[-] HarkMahlberg@kbin.earth 1 points 1 month ago

I saw an LLM override the casting operator in C#. An evangelist would say "genius! what a novel solution!" I said "nobody at this company is going to know what this code is doing 6 months from now."

It didn't even solve our problem.
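For anyone who hasn't seen one in the wild, it was something in this spirit (a made-up reconstruction for illustration, not the actual code):

```csharp
using System;

public class Money
{
    public decimal Amount { get; init; }

    // A user-defined implicit conversion: any Money silently
    // becomes a decimal, with no cast visible at the call site.
    public static implicit operator decimal(Money m) => m.Amount;
}

public static class Demo
{
    public static void Main()
    {
        var price = new Money { Amount = 9.99m };
        decimal doubled = price * 2; // compiles! Money quietly converts
        Console.WriteLine(doubled);  // 19.98
    }
}
```

"Novel," sure, but six months from now someone will stare at `price * 2` wondering why it compiles at all.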

load more comments (13 replies)
[-] 0x01@lemmy.ml -1 points 1 month ago

I use it extensively daily.

It cannot step through code right now, so true debugging is not something you use it for. Most of the time the LLM will take the junior engineer approach of "guess and check" unless you explicitly give it better guidance.

My process is generally to start with unit tests and type definitions, then a large multipage prompt for every segment of the app the LLM will be tasked with. Then I'll make a snapshot of the code, give the tool access to the markdown prompt, and validate its work. When there are failures and the project has extensive unit tests, it generally follows the same pattern of "I see that this failure should be added to the unit tests," which it does, and then re-executes them during iterative development.
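To make that concrete, the kind of scaffold I hand the tool looks roughly like this (all names invented for the example; the implementation would normally start empty, it's only filled in here so the snippet compiles):

```csharp
using System;
using System.Linq;
using Xunit;

// Type definitions pin the contract down before any prompting happens.
public interface ISlugGenerator
{
    string Slugify(string title);
}

// Stand-in for what the LLM is asked to produce against the tests below.
public class SlugGenerator : ISlugGenerator
{
    public string Slugify(string title)
    {
        var cleaned = new string(title
            .ToLowerInvariant()
            .Where(c => char.IsLetterOrDigit(c) || c == ' ') // drop punctuation
            .ToArray());
        return string.Join('-',
            cleaned.Split(' ', StringSplitOptions.RemoveEmptyEntries));
    }
}

public class SlugGeneratorTests
{
    private readonly ISlugGenerator _gen = new SlugGenerator();

    // The LLM's job is to make these pass without editing them.
    [Fact]
    public void Lowercases_and_hyphenates() =>
        Assert.Equal("hello-world", _gen.Slugify("Hello World"));

    [Fact]
    public void Strips_punctuation() =>
        Assert.Equal("its-a-test", _gen.Slugify("It's a test!"));
}
```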

If tests are not available, or if it is not something directly accessible to the tool, then it will generally rely on logs, either directly generated or provided by the user.

My role these days is to provide long, well-thought-out prompts, verify the integrity of the code after every commit, and generally just treat the LLM as a reckless junior dev. Sometimes junior devs can surprise you: like yesterday, I was very surprised by a one-shot result. I asked for a mobile RN app for taking my rambling voice recordings and summarizing them into prompts, and it was immediately, remarkably successful; now I've been walking around mic'd up to generate prompts.

load more comments