"Even within the coding, it's not working well," said Smiley. "I'll give you an example. Code can look right and pass the unit tests and still be wrong. The way you measure that is typically in benchmark tests. So a lot of these companies haven't engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence."
Measures of engineering excellence, said Smiley, include metrics like deployment frequency, lead time to production, change failure rate, mean time to restore, and incident severity. And we need a new set of metrics, he insists, to measure how AI affects engineering performance.
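The first four metrics Smiley names are the well-known DORA metrics, and they can be computed from ordinary deployment and incident logs. Here is a minimal Python sketch; the record shapes and sample numbers are invented for illustration and do not come from the article:

```python
from datetime import datetime

# Hypothetical deployment log: (deployed_at, change_authored_at, caused_failure)
deploys = [
    (datetime(2024, 6, 3), datetime(2024, 6, 1), False),
    (datetime(2024, 6, 5), datetime(2024, 6, 4), True),
    (datetime(2024, 6, 7), datetime(2024, 6, 6), False),
    (datetime(2024, 6, 10), datetime(2024, 6, 8), False),
]
# Hypothetical incident log: (started_at, restored_at)
incidents = [(datetime(2024, 6, 5, 9), datetime(2024, 6, 5, 13))]

days_observed = (deploys[-1][0] - deploys[0][0]).days or 1
deploy_frequency = len(deploys) / days_observed  # deployments per day
lead_time_hours = sum(
    (deployed - authored).total_seconds() for deployed, authored, _ in deploys
) / len(deploys) / 3600  # mean lead time to production
change_failure_rate = sum(1 for *_, failed in deploys if failed) / len(deploys)
mttr_hours = sum(
    (restored - started).total_seconds() for started, restored in incidents
) / len(incidents) / 3600  # mean time to restore
```

The point of tracking these rather than lines of code is that each one is tied to an outcome (shipping, breaking, recovering) instead of raw output volume.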
"We don't know what those are yet," he said.
One metric that might be helpful, he said, is measuring tokens burned to get to an approved pull request – a formally accepted change in software. That's the kind of thing that needs to be assessed to determine whether AI helps an organization's engineering practice.
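If your tooling logs token usage per pull request, the metric Smiley floats is easy to prototype. A hypothetical sketch (field names and figures are made up for illustration); note that tokens burned on abandoned PRs still count toward the cost of each PR that actually landed:

```python
# Hypothetical per-PR records: AI tokens consumed and final review outcome.
prs = [
    {"id": 101, "tokens": 40_000, "approved": True},
    {"id": 102, "tokens": 250_000, "approved": False},  # abandoned attempt
    {"id": 103, "tokens": 15_000, "approved": True},
]

total_tokens = sum(p["tokens"] for p in prs)
approved_count = sum(1 for p in prs if p["approved"])
# Burn from failed attempts is amortized over the approved PRs.
tokens_per_approved_pr = total_tokens / approved_count
```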
To underscore the consequences of not having that kind of data, Smiley pointed to a recent attempt to rewrite SQLite in Rust using AI.
"It passed all the unit tests, the shape of the code looks right," he said. It's 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It's a dumpster fire. Throw it away. All that money you spent on it is worthless."
All the optimism about using AI for coding, Smiley argues, comes from measuring the wrong things.
"Coding works if you measure lines of code and pull requests," he said. "Coding does not work if you measure quality and team performance. There's no evidence to suggest that that's moving in a positive direction."
So are these just early adaptation problems? Or are we starting to find the ceiling for AI?
Early adaptation and rushed implementation. There may be a bubble bursting for the businesses who tried to “roll out something fast that is good enough to get subscribers for a few months so we can cash in.” However, this is just the very beginning of AI.
This isn't the "very beginning"; that was either 70 or 120 years ago, depending on whether you're counting from the formalization of "AI" as an academic discipline with the advent of the Markov Decision Process, or from the earlier foundational work on Markov chains.
Chatbots are old hat; I was playing around with Eliza back in the '90s. Hell, even Large Language Models aren't new: the transformer architecture they're based on is almost 10 years old, and is itself merely a minor evolution of earlier statistical and recurrent neural network language models. By the time big tech started ramping up the "AI" bubble in 2024, I had already been bored with LLMs for two years.
There's no "early adaptation" here, just a rushed and wildly excessive implementation of a very interesting but fundamentally untrustworthy tech with no practical value proposition for the people it is nevertheless being sold to.
It’s the beginning of AI in terms of where it will be.
What's the pathway you see from the current slop machine to something that will provide a return on investment? I haven't heard anyone credible willing to go out on that limb, but maybe you will convince me.
I think when you introduce a question like that you’ve already said that no matter what the person answers, you will find a way to argue against it. So, I’m choosing not to interact with you.
The beauty of the scientific method is that it can change when presented with new data or a novel interpretation of existing data. I much prefer science to hype and feelings. If you provide me accurate, convincing arguments for how we get from the current system to an actual Artificial Intelligence, or something that roughly approximates it, I am all ears. My take is that AI is the new cold fusion: it's always going to be a few years and a few hundred billion dollars away from reality. But what do I know, I'm just an idiot on the internet.
I’m not interested in trying to change the mind of someone who I feel has already made up their mind.
If you can prove to me, by linking to past conversations, that you have the ability to change your mind when new evidence is presented, then I will attempt to do so. But until then, I will choose not to engage in such activities with you.
Oh precious. You want me to prove to you that someone presented a viewpoint that was diametrically opposed to my own and then successfully argued me around to their way of thinking? It hasn't happened yet, not on this platform, and I shall not be linking this profile to other platforms I comment on where I have had convincing arguments sway my point of view. But surely you will be the first; you're better than all my other interlocutors, right?
Exactly. In your own words you’re incapable of changing your mind when new evidence is presented. And so why would I want to try when I know that no matter what I say, you will fight against it because winning is more important to you than having accurate views.
Oh, I'm afraid misinterpreting the information provided to you doesn't bode any better for you than for any other vacuous AI shill I have talked to.
Let's try rewording this into something you can comprehend: Lemmy is one of the avenues of discussion I use. On Lemmy I have mainly talked about LLMs (you call them AI; I don't believe for a moment intelligence is characterised by making lots of mistakes and then trying to select which one is least wrong). On other platforms I have discussed other topics, and I have been convinced by arguments in relation to other matters. I use Lemmy because it affords me a degree of separation from my other online activity. I will not surrender that separation simply to make you feel more comfortable that your poor arguments and shoddy data have any hope of proving adequate.
Let me provide a super clear summary of my position: the scanty, insubstantial benefits of LLMs are being overhyped by shills and conmen to prop up the "AI" bubble. Enough large businesses and governments have bought into it now that "number must go up" is the only reason it's having the societal impact it is. At some point someone is going to be asked to pay their bill, and the whole shaky edifice will collapse. When it does, we will see something closer to the true cost of this technology being exposed, and this will reveal that it's not in any way sustainable. Economies around the world will take decades to recover, and public trust will be effectively nil during that time. Once economies have recovered, it's highly likely that productive economies will be ascendant; the vast majority of "western" nations will be unable to compete and will further devolve into oligarchies. None of this is worth being able to generate pictures of anthropomorphic molerats with big racks, or to create incorrect PowerPoint presentations, or to vibe code an application that works 70% of the time and deletes the contents of the storage medium it occupies the other 30%.
I wonder, are you an Evangelical Christian and / or Young Earth Creationist? Actually never mind, bored of you and a bit sad now.
Oh I see. You don’t know anything about AI beyond LLMs. No wonder you have this view. Well, I can see why you have your views now. To know about it, you’d have to use it properly… and since you won’t use it properly you can’t ever know about it. 🤷‍♂️
lol @ “western” that explains a lot.
Oh for fuck's sake. "Western", yes, because the Western socio-political sphere is no longer a geographical divide; it's ideological and financial.
My views are shaped by the technology, the economic ramifications, the environmental impacts, and the geopolitical environment.
If you had a compelling use case for the technology then you would have explained it, instead you present arguments from incredulity and strawmanning, ergo your argumentative capabilities are on a level with a flat earther or young earth creationist.
I have been using technologies that have been getting described as "AI" since fuzzy logic and self modifying code were the new hotness. Every decade there is a new push towards this sort of stuff, every decade we are willing to broaden the definition of intelligence to be more loosely defined. The difference is that this decade a bunch of rich people realised they can release slop and a bunch of credulous idiots will run around declaring it as the first horseman of the coming singularity.
I was going to say a few more things but I have already wasted FAR more time on you than is warranted.
Bye 👋
Like… what technologies. Show me your work. Show me how you use AI with modern tools and still fail.
I apologise, I engaged with you in good faith and you proved to be disingenuous and dishonest, I shouldn't have made you feel special by continuing to respond to you.
I asked you to demonstrate a claim and you turned it into this big production, and you still haven't provided any receipts.
If you want to talk with the adults in future you need to engage in good faith and back up your claims when asked. Then you get to ask follow up questions.
So when I said "Bye 👋" that was my polite way of saying "I'm done with you." Sorry that went over your head, let's see if I can draw a ban from this instance by removing the polite and speaking to you in a way I am certain you are more accustomed to:
**Fuck off, you corporate boot licking waste of skin.
Go make another TikTok video about how an atmosphere can't exist next to a vacuum you mouth breathing cretin.
I am done with you!**
Is it diagnosed? 🤣
Why did you waste time posting this when you could have just not?
I can take a stab at answering this one: there is no pathway from here to there, and Org knows it. So bland aspirational statements are the order of the day, but when called out on them it's turtle mode. Different platform, but I have had similar conversations with conservatives who want to decry things as woke. I somewhat enjoy throwing down the gauntlet and seeing if it gets picked up, and I have started doing it more often. I am deadly serious when I say I can and would be swayed by a good argument supported by data; I just know it's not going to be forthcoming from someone spouting broad-spectrum inanities about the "Future of AI"™.
Why did you waste time jumping into a conversation you aren’t part of instead of just not?
Because I'm the kind of fucked up weirdo that enjoys arguing with people on the internet. What's your excuse?
I’m the kind of fucked up weirdo who enjoys arguing with people on the internet.
Wanna make out?
Nah, I only make out with people that put in the effort to argue in good faith, or at least make amusing claims and then try to articulate a coherent logic to justify them (E.g., Italy isn't real, it was made up by two Giuseppes who got the idea in prison)
But… you’re not arguing in good faith.
I did, but only until you gave up and started phoning it in.
Could you try rephrasing that in a way that makes sense?
You understand it.
No, I'm afraid I don't.
The beginning of the development of "AI" is temporal, not spatial, unless you are referring to the path of development which, for no obvious reason, you refuse to trace backwards as well as forwards.
You’ll get it eventually.
If I'm not getting it immediately then you're communicating your point ineffectively.
What, precisely, do you mean when you assert that the last three to six generations of work on "AI" don't count?
I’m not here to talk in kindergarten sentences.
And yet, you just posted one.
And hey it deserves some praise for operating at a level far above its native ability. Let's be honest, Org is something of a dichotomy, both a rambling idiot, and an example of the highest capabilities of an AI chud. Its parents must be so proud.