
cross-posted from: https://lemmy.ca/post/61948688

Excerpt:

"Even within the coding, it's not working well," said Smiley. "I'll give you an example. Code can look right and pass the unit tests and still be wrong. The way you measure that is typically in benchmark tests. So a lot of these companies haven't engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence."

Measures of engineering excellence, said Smiley, include metrics like deployment frequency, lead time to production, change failure rate, mean time to restore, and incident severity. And we need a new set of metrics, he insists, to measure how AI affects engineering performance.
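Smiley's list maps closely onto the well-known DORA metrics, and the first few are straightforward to compute from deployment records. A minimal Python sketch; the field layout and numbers below are entirely hypothetical, not from any real system:

```python
from datetime import datetime

# Hypothetical deployment records: (deployed_at, caused_incident, restore_minutes).
# The shape of this data is invented for illustration.
deploys = [
    (datetime(2026, 3, 2), False, 0),
    (datetime(2026, 3, 5), True, 45),
    (datetime(2026, 3, 9), False, 0),
    (datetime(2026, 3, 12), True, 120),
    (datetime(2026, 3, 16), False, 0),
]

window_days = (deploys[-1][0] - deploys[0][0]).days or 1
deploy_frequency = len(deploys) / window_days               # deploys per day
failures = [d for d in deploys if d[1]]
change_failure_rate = len(failures) / len(deploys)          # fraction of deploys causing incidents
mttr_minutes = sum(d[2] for d in failures) / len(failures)  # mean time to restore

print(f"deploys/day: {deploy_frequency:.2f}")
print(f"change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr_minutes:.1f} min")
```

The arithmetic is trivial; the hard part in practice is the data plumbing, i.e. reliably tying each incident back to the deploy that caused it, which is exactly the feedback loop Smiley says most companies haven't built.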

"We don't know what those are yet," he said.

One metric that might be helpful, he said, is measuring tokens burned to get to an approved pull request – a formally accepted change in software. That's the kind of thing that needs to be assessed to determine whether AI helps an organization's engineering practice.
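A tokens-per-approved-PR measurement might look something like the sketch below; the log format and numbers are invented for illustration. One judgment call: tokens burned on PRs that were never approved are charged against the ones that landed, since that waste is part of the real cost of shipping working changes:

```python
# Hypothetical per-PR usage log: total tokens consumed across all prompts
# and retries before the PR reached its final state. Invented data.
pr_log = [
    {"pr": 101, "tokens": 180_000, "approved": True},
    {"pr": 102, "tokens": 420_000, "approved": False},  # abandoned after review
    {"pr": 103, "tokens": 95_000,  "approved": True},
]

# Spread ALL token spend, including the abandoned PR's, over approved PRs.
total_tokens = sum(entry["tokens"] for entry in pr_log)
approved = sum(1 for entry in pr_log if entry["approved"])
tokens_per_approved_pr = total_tokens / approved

print(f"tokens per approved PR: {tokens_per_approved_pr:,.0f}")
```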

To underscore the consequences of not having that kind of data, Smiley pointed to a recent attempt to rewrite SQLite in Rust using AI.

"It passed all the unit tests, the shape of the code looks right," he said. It's 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It's a dumpster fire. Throw it away. All that money you spent on it is worthless."

All the optimism about using AI for coding, Smiley argues, comes from measuring the wrong things.

"Coding works if you measure lines of code and pull requests," he said. "Coding does not work if you measure quality and team performance. There's no evidence to suggest that that's moving in a positive direction."

top 36 comments
[-] justsomeguy@lemmy.world 8 points 2 weeks ago

Being in an economic bubble during the age of (over)information is really weird. We're getting two articles per day confirming that we're in a big ass bubble but it just keeps on going. I preferred not really knowing how bad things are.

[-] grue@lemmy.world 5 points 2 weeks ago

Everybody knew dot-coms in 2000 and houses in 2007 were bubbles, too. But they kept investing anyway, because they didn't know when it would pop and FOMO is a helluva drug.

Also, something to keep in mind: https://awealthofcommonsense.com/2014/02/worlds-worst-market-timer/

[-] thisbenzingring@lemmy.today 2 points 2 weeks ago

prepare for the burst so you can jump in and get the deals of a lifetime

[-] greyscale@lemmy.sdf.org 5 points 2 weeks ago

With what fucking capital, bro. We've been squeezed already.

[-] thisbenzingring@lemmy.today 4 points 2 weeks ago* (last edited 2 weeks ago)

My wife and I didn't have but a couple grand saved up. In 2011, we bought our first house at the beginning of the end of the 2008 bubble, and we got it extremely cheap compared to its value.

when the opportunity comes, make sure to take it

[-] rizzothesmall@sh.itjust.works 8 points 2 weeks ago

AI works great. I work in the sphere of production defect detection in manufacturing and it's been working pretty well for a decade or more to predict machine failures and spot defective materials or products.

LLMs as a digital business yes-man are what doesn't work.

[-] chuckleslord@lemmy.world 4 points 2 weeks ago

Yeah, unfortunately the marketing people have made the LLM synonymous with AI. It's a damn shame.

[-] Naia@lemmy.blahaj.zone 1 points 2 weeks ago

LLMs have a use case, but it's really limited, and the vast majority of what companies, and people broadly, use them for is either not a good fit or not something they can or should do at all.

If you know how to use them they can save time. You still need to validate everything it gives you, but as a developer I can use one to generate small code snippets or give it documentation and ask questions as a quick reference.

But these are not automation tools. They are not worker replacements. And they aren't replacements for research, even if they can get you started on it.

LLMs, and neural nets in general, can never be AGI no matter how much companies wish it could be.

[-] Tamps@feddit.uk 5 points 2 weeks ago

Or to put it another way, AI is making it faster and easier to do the wrong thing in the wrong way at scale.

I also wonder what the plan is when the token cost starts going upward. The bill for all this venture capital will come due eventually and someone has to pay for it.

[-] TrippinMallard@lemmy.ml 3 points 2 weeks ago

Socialized losses are the norm. People's energy bills are already paying for the nearby data centers.

[-] e461h@sh.itjust.works 1 points 2 weeks ago

Yep, they’ll weaponize it to take jobs and then hit the public with the bill when the bubble bursts. Capitalism is both chilling and demoralizing.

[-] reddig33@lemmy.world 0 points 2 weeks ago* (last edited 2 weeks ago)

Socializing the losses isn’t real capitalism. The US has some weird “socialism for the oligarchy” thing going on.

[-] eldebryn@lemmy.world 2 points 2 weeks ago

That's what happens when you have unregulated markets and corporations.

Liberals that still suckle Reagan's tit worship the almighty "free market," but top economists will tell you that without government control, capitalism ends up being an oligopoly, often with feudal characteristics, like we see in the digital landscape.

When corpos have no restrictions, they don't compete with each other. They end up merging into a monopoly and drain all the peasants/slaves who don't own anything.

[-] RememberTheApollo_@lemmy.world 2 points 2 weeks ago

Or to put it another way, AI is making it faster and easier to do the wrong thing in the wrong way at scale.

…and absolve the operator of any responsibility, apparently.

Oh, that was the AI doing something wrong. shrugs and just keeps doing what they’re doing.

[-] thebestaquaman@lemmy.world 5 points 2 weeks ago

It’s 3.7x more lines of code that performs 2,000 times worse than the actual SQLite.

Pretty much my experience with LLM coding agents. They'll write a bunch of stuff, and come with all kinds of arguments about why what they're doing is in fact optimal and perfect. If you know what you're doing, you'll quickly find a bunch of over-complicating things and just plain pitfalls. I've never been able to understand the people that claim LLMs can build entire projects (the people that say stuff like "I never write my own code anymore"), since I've always found it to be pretty trash at anything beyond trivial tasks.

Of course, it makes sense that it'll elaborate endlessly about how perfect its solution is, because it's a glorified auto-complete, and there's plenty of training data with people explaining why "solution X is better".

[-] Dojan@pawb.social 1 points 2 weeks ago

I saw a vibe coded PR the other day. So much redundant code, lots of comments full of assumptions and open questions. It's a mess.

Glad it didn't land in my lap, but the person who is now responsible for sorting that out is already quite busy, and wasting their time on this feels shit.

[-] thebestaquaman@lemmy.world 3 points 2 weeks ago

One of the worst things about this is that the person vibe coding just ends up shitting on the reviewer's time. Like... you couldn't even be bothered to write a real PR, and now you want me to spend time filtering your shit? Fuck off.

[-] org@lemmy.org 2 points 2 weeks ago

Too many people don't know how to prompt AI, or how to review what it produces.

[-] Dojan@pawb.social 3 points 2 weeks ago

Too many people are willingly paying anti-democratic billionaires to outsource their thinking and agency.

[-] org@lemmy.org 0 points 2 weeks ago

Too many people know their job is only going to last 6 months before the next round of layoffs, and that talent and hard work has never been the way to keep a job in the tech industry… so why try?

[-] Dojan@pawb.social 1 points 2 weeks ago

Not really a valid excuse in this case as we aren’t really experiencing layoffs here. Au contraire, our company is hiring. I’m not in the U.S.

Still think that letting language models controlled by billionaire paedophiles and wannabe dictators do your thinking for you is a poor idea, regardless of how fed up one is with one's job.

[-] org@lemmy.org 0 points 2 weeks ago

Where is “here?”

And, if you want to bring pedophiles into it, most of what you touch on a daily basis involved a billionaire pedophile at some point. You just sound lazy at this point.

[-] EncryptKeeper@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

Did you just call them lazy after making the argument that building talent and working hard aren't worth it in the modern tech landscape?

[-] org@lemmy.org -1 points 2 weeks ago

It's lazy to blanket everything in "pedophile" instead of actually talking about the issue.

[-] Dojan@pawb.social 0 points 2 weeks ago

Oh absolutely, and we can do our best to swear off of that but thanks to them worming their way in like a cancer in every part of society, shaping it to benefit them, that's just the nature of taking part in society. The ones in power have always, and will always continue to exploit us for as long as we let them.

All the more reason to not outsource our thinking to their machines. Governments are already doing it, getting caught red-handed acting on reports that never existed. Why rely on that when the option not to is so readily available?

[-] org@lemmy.org -1 points 2 weeks ago

Ehhh… this sounds more like blanket AI-hate and less like you actually caring. You're already in their cloud. I doubt you run bare metal. You probably use GitHub, etc. Caring on one hand and not on the other means nothing.

I’ll continue farming out bullshit tasks to AI while I play with my cat and prepare for the next round of layoffs, rather than giving my soul to a company who doesn’t actually care about me.

[-] thisbenzingring@lemmy.today 1 points 2 weeks ago

I tried using an LLM to make a 3D object in OpenSCAD, an open source CAD app for making 3D printable objects.

It's basic and uses an open source language. The LLM should have infinite examples to draw from.

But after 4 tries I gave up and just did it myself. Sure, the crap the LLM gave me helped form a general setup, but I had to spend 2x as much time fixing the code as it would have taken to write it from scratch.

I haven't tried using an LLM for anything else; that failure told me everything I needed to know about its ability to do basic shit.

[-] JeeBaiChow@lemmy.world 1 points 2 weeks ago

This. If users are spending so much time explaining to an LLM in detail what they want the output to do, they're better off doing it themselves. Code snippets were already solved with search.

[-] Deestan@lemmy.world 1 points 2 weeks ago

No bro the new model from 3 months ago is infinite gooder than what they tested. In 12-18 months we get agi or some shit i dunno just 10 more billions$ bro.

[-] ElectricAirship@lemmy.dbzer0.com 1 points 2 weeks ago

Companies and governments told us that an energy transition is "too costly" or "too disruptive to society" but when it comes to AI disrupting and even ending people's lives...

They just say, "deal with it."

[-] RedGreenBlue@lemmy.zip 1 points 2 weeks ago* (last edited 2 weeks ago)

AI is currently a glorified search engine. But expensive.

[-] scarabic@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

Yeah. I use it at work as a glorified search engine of all company wikis and docs and tickets. It’ll work basically just as a search or it can also summarize - not terribly.

Our AI meeting notes are also pretty good. I’m always impressed at how it leaves out personal chit-chat and anything negative we say about someone who isn’t present.

[-] floofloof@lemmy.ca 1 points 2 weeks ago

This article says that the AI-coded Rust rewrite of SQLite ran 2,000 times slower, but the linked source article says it ran more than 20,000 times slower. Muddling up 2,000 and 20,000 seems a bit sloppy for journalism about code performance.

[-] ratsnake@lemmy.blahaj.zone 3 points 2 weeks ago* (last edited 2 weeks ago)

The article cited by The Register cites this more detailed analysis in turn: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code

Performance of the AI-generated version was 20,000 times slower on one specific benchmark, but "only" about 2,000 times slower when averaging over multiple different benchmarks (which is, imo, a better measurement of the code's quality).

So I suppose The Register pulled from multiple sources (as you should) and just linked to the most top-level of all of them.
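For what it's worth, the gap between those two headline numbers is largely an artifact of how you aggregate ratios. A toy illustration with made-up slowdown factors (not the figures from the cited analysis):

```python
import math

# Illustrative per-benchmark slowdown factors (rewrite time / original time).
# These numbers are invented to show the aggregation effect only.
slowdowns = [20_000, 1_500, 900, 400, 250]

worst = max(slowdowns)                       # single-benchmark headline
arith_mean = sum(slowdowns) / len(slowdowns) # dominated by the outlier
geo_mean = math.exp(sum(math.log(s) for s in slowdowns) / len(slowdowns))

print(f"worst case: {worst}x, arithmetic mean: {arith_mean:.0f}x, geometric mean: {geo_mean:.0f}x")
```

The geometric mean is the conventional way to average speedup/slowdown ratios, precisely because an arithmetic mean lets a single pathological benchmark dominate the headline number.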

[-] floofloof@lemmy.ca 1 points 2 weeks ago

Thanks for that link. It has a lot more detail.

[-] Codpiece@feddit.uk 1 points 2 weeks ago

Maybe they were using AI to write it.

this post was submitted on 19 Mar 2026
38 points (100.0% liked)

Technology
