submitted 1 month ago* (last edited 1 month ago) by AutistoMephisto@lemmy.world to c/technology@lemmy.world

Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

[-] pdxfed@lemmy.world 33 points 1 month ago

Great article, brave and correct. Good luck getting through to the same leaders who blindly believe in a magical trend for this or next quarter's numbers; they don't care about things a year away, let alone 10.

I work in HR and was struck by the parallel with the management jobs gutted by major corps starting in the 80s and 90s during "downsizing", jobs they either never replaced or offshored. They had the Big 4 telling them it was the future of business. Know who is now providing consultation to them on why they have poor ops, broken processes, high turnover, etc.? The same firms: take $ on the way in, and on the way out. AI is just the next in a long line of smart people pretending they know your business while you abdicate knowing your business or your employees.

Hope leaders can be a bit braver and wiser this go 'round so we don't get to a cliff's edge in software.

[-] edgemaster72@lemmy.world 32 points 1 month ago

Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.

And all they'll hear is "not failure, metrics great, ship faster, productive" and go against your advice because who cares about three months later, that's next quarter, line must go up now. I also found this bit funny:

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me... I was proud of what I’d created.

Well, you didn't create it; you said so yourself. Not sure why you'd be proud. It's almost like the conclusion should've been blindingly obvious right there.

[-] AutistoMephisto@lemmy.world 17 points 1 month ago

The top comment on the article points that out.

It's an example of a far older phenomenon: once you automate something, the corresponding skill set and experience atrophy. It's a problem that predates LLMs by quite a bit; if the only experience gained is with the automated system, the skills are never acquired. I'll have to find it, but there's a story about a modern fighter-jet pilot not being able to handle a WWII-era Lancaster bomber: they don't know how to do the stuff that modern warplanes do automatically.

[-] logicbomb@lemmy.world 10 points 1 month ago

It's more like the ancient phenomenon of spaghetti code. You can keep throwing code at something until it works, but the moment you need to make a non-trivial change, you're doomed. You might as well throw away the entire code base and start over.

And if you want an exact parallel, I've said this from the beginning, but LLM coding at this point is the same as offshore coding was 20 years ago. You make a request, get a product that seems to work, but maintaining it, even by the same people who created it in the first place, is almost impossible.

[-] drosophila@lemmy.blahaj.zone 7 points 1 month ago* (last edited 1 month ago)

The thing about this perspective is that I think it's actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.

Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms, you really haven't lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hotkeys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won't make you forget how to write the way using ChatGPT will.

I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that to plan out their actions to get the desired result. People that aren't good at using computers generally don't do this, and might not even know how you would start trying to.

For years, 'user friendly' software design has catered to that second group, as they are both the largest contingent of users and the ones who need the most help. To do this, software vendors have generally done two things: tried to move the necessary mental processes from the user's brain into the computer, and hidden the computer's internal state (so that it's not implied that the user has to understand it, so that a user who doesn't know what they're doing won't do something they'll regret, etc.). Unfortunately, this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, every "smart" feature added to move this mental process into the computer itself only makes the internal state more complex and harder to model.

Many people assume that if this is the way you think about software, you are just an elitist gatekeeper who only wants your own group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it's not usually articulated that way.

Now, I am of the opinion that the 'mirroring the internal state' method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn't be accessible to people with different levels of ability. But just as a random person in a store shouldn't grab a wheelchair user's chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.

Anyway, all of this is to say that I think LLMs are basically the ultimate expression of that approach to 'user friendliness'. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind-numbing, in the literal sense of the phrase.

[-] ctrl_alt_esc@lemmy.ml 4 points 1 month ago

I agree with you, though proponents will tell you that's by design. Supposedly, it's like with high-level languages: you don't need to know the actual assembly instructions anymore to write a program. I think the difference is that high-level language constructs are still (mostly) deterministic, while an LLM prompt certainly isn't.
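To make that determinism contrast concrete, here's a toy TypeScript sketch (not a real compiler or LLM API; both functions are stand-ins I made up): a compiler-style translation is a pure function, so equal inputs always give equal outputs, while sampling-based generation is not.

```typescript
// Toy stand-ins only: `lower` mimics a deterministic compiler rule,
// `sampleCompletion` mimics sampling-based LLM generation.

// Deterministic translation: a pure function of its input.
function lower(src: string): string {
  return src.replace(/ADD (\w+), (\w+)/, "$1 = $1 + $2;");
}

// Stochastic generation: the same prompt can yield different code.
function sampleCompletion(prompt: string): string {
  const candidates = [
    `for (const x of items) { /* ${prompt} */ }`,
    `items.forEach((x) => { /* ${prompt} */ });`,
  ];
  return candidates[Math.floor(Math.random() * candidates.length)];
}

console.log(lower("ADD a, b") === lower("ADD a, b"));               // always true
console.log(sampleCompletion("loop") === sampleCompletion("loop")); // true only by chance
```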

[-] Unlearned9545@lemmy.world 23 points 1 month ago

Fractional CTO: some small companies benefit from the senior experience of this kind of executive but don't have the money or the need to hire one full-time. Such a CTO serves in the C-suite of several companies, a fraction of their time at each.

[-] ignirtoq@feddit.online 20 points 1 month ago

We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.

Except we are talking about that, and the tech bro response is "in 10 years we'll have AGI and it will do all these things all the time permanently." In their roadmap, there won't be a next generation of software developers, product managers, or mid-level leaders, because AGI will do all those things faster and better than humans. There will just be CEOs, the capital they control, and AI.

What's most absurd is that, if that were all true, that would lead to a crisis much larger than just a generational knowledge problem in a specific industry. It would cut regular workers entirely out of the economy, and regular workers form the foundation of the economy, so the entire economy would collapse.

"Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders."

[-] grue@lemmy.world 12 points 1 month ago

That's why they're all-in on authoritarianism.

[-] HasturInYellow@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

According to a study, the ~~lower~~ top 10% accounts for something like 68% of cash flow in the economy. Us plebs are being cut out altogether.

That being said, I think if people can't afford to eat, things might get bad. We will probably end up a kept population in these ghouls' fever dreams.

Edit: I'm an idiot.

[-] UnspecificGravity@piefed.social 5 points 1 month ago

Yep, and now you know why all the tech companies suddenly became VERY politically active. This future isn't compatible with democracy. Once these companies no longer provide employment their benefit to society becomes a big fat question mark.

[-] deathbird@mander.xyz 19 points 1 month ago

I think this kinda points to why AI is pretty decent for short videos, photos, and texts. It produces outputs that one applies meaning to, and humans are meaning making animals. A computer can't overlook or rationalize a coding error the same way.

[-] vpol@feddit.uk 8 points 1 month ago

The developers can’t debug code they didn’t write.

This is a bit of a stretch.

[-] _g_be@lemmy.world 10 points 1 month ago

Vibe coders can't debug code because they didn't write it.

[-] embed_me@programming.dev 5 points 1 month ago

Vibe coders can't debug code because they can't write code

[-] Xyphius@lemmy.ca 9 points 1 month ago

Agreed. 50% of my job is debugging code I didn't write.

[-] funkless_eck@sh.itjust.works 7 points 1 month ago

I mean I was trying to solve a problem t'other day (hobbyist) - it told me to create a

```
function foo(bar): await object.foo(bar)
```

then in object

```
function foo(bar): _foo(bar)

function _foo(bar): original_object.foo(bar)
```

like literally passing a variable between three wrapper functions in two objects that did nothing except pass the variable back to the original function, in an infinite loop.

Add some layers and complexity and it'd be very easy to get lost.
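For anyone skimming, a minimal TypeScript sketch of that circular structure (names fleshed out hypothetically from the commenter's placeholders, and simplified to synchronous calls):

```typescript
// Hypothetical reconstruction of the three do-nothing wrappers across
// two objects; calling any of them just cycles `bar` around forever.
const helper = {
  foo(bar: number): void {
    this._foo(bar);            // wrapper 2: delegate to a "private" twin
  },
  _foo(bar: number): void {
    originalObject.foo(bar);   // wrapper 3: hand the value straight back
  },
};

const originalObject = {
  foo(bar: number): void {
    helper.foo(bar);           // wrapper 1: delegate to the helper object
  },
};

// originalObject.foo(42); // never returns: the calls recurse in a cycle
// until the stack overflows (RangeError in Node).
```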

[-] theparadox@lemmy.world 8 points 1 month ago

The few times I've used LLMs for coding help, usually because I'm curious whether they've gotten better, they've let me down. Last time, it was insistent that its solution would work as expected. When I gave it an example input that wouldn't work, it even broke down each step of the function, giving me the value of its variables at each step to demonstrate that it worked... but at the step where it had fucked up, it swapped the value in the variable to one that would make the final answer correct. It made me wonder how much water and energy it cost me to be gaslit into a bad solution.

How do people vibe code with this shit?

[-] Nalivai@lemmy.world 8 points 1 month ago

They never actually say what "product" they make; it's always "shipped product", like they're a fucking Amazon warehouse. I suspect it's some trivial webpage that takes a student an afternoon to whip up, which they spent three days arguing with an autocomplete to shit out.

[-] phed@lemmy.ml 8 points 1 month ago

I do a lot with AI, but it is not good enough to replace humans, not even close. It repeats the same mistakes after you tell it no; it doesn't remember things from 3 messages ago when it should. You have to keep re-explaining the goal to it. It's wholly incompetent. And yeah, when you have it do stuff you aren't familiar with or didn't create yourself, definitely. I have it write commentary, or I take the time out right then to ask it what x or y does, then I add a comment.

[-] Suffa@lemmy.wtf 7 points 1 month ago

AI is really great for small apps. I've saved so many weekend hours that would otherwise have been spent coding some small thing I only need a few times; now I can get an AI to spit it out for me.

But anything big and it's fucking stupid, it cannot track large projects at all.

[-] HugeNerd@lemmy.ca 6 points 1 month ago

Computers are too powerful and too cheap. Bring back COBOL, painfully expensive CPU time, and some sort of basic knowledge of what's actually going on.

Pain for everyone!

[-] CarbonatedPastaSauce@lemmy.world 6 points 1 month ago* (last edited 1 month ago)

Something any (real, trained, educated) developer who has even touched AI in their career could have told you. Without a 3-month study.

[-] raspberriesareyummy@lemmy.world 5 points 1 month ago

So there are actual developers who could tell you from the start that LLMs are useless for coding, and then there's this moron & similar people who first have to fuck up an ecosystem before believing the obvious. Thanks, fuckhead, for driving RAM prices through the ceiling... and for wasting energy and water.

[-] psycotica0@lemmy.ca 40 points 1 month ago

I can at least kinda appreciate this guy's approach. If we assume that AI is a magic bullet, then it's not crazy to assume we, the existing programmers, would resist it just to save our own jobs. Or we'd complain because it doesn't do things our way, but we're the old way and this is the new way. So maybe we're just being whiny and can be ignored.

So he tested it to see for himself, and what he found was that he agreed with us, that it's not worth it.

Ignoring experts is annoying, but doing some of your own science and getting first-hand experience isn't always a bad idea.

[-] bassomitron@lemmy.world 22 points 1 month ago

100% this. The guy is literally a consultant and a developer. It'd just be bad business for him to outright dismiss AI without having actual hands-on experience with said product. Clients want that type of experience and knowledge when paying a business to give them advice and develop a product for them.

[-] 5too@lemmy.world 22 points 1 month ago

And not only did he see for himself, he wrote up and published his results.

[-] khepri@lemmy.world 9 points 1 month ago

They are useful for doing the kind of boilerplate, boring stuff that any good dev should have largely optimized and automated already. If it's 1) dead simple and 2) extremely common, then yeah, an LLM can code it for you, but ask yourself why you don't already have a time-saving solution in place for those common tasks. As with anything LLM, it's decent at replicating how humans in general have responded to a given problem, provided the problem is not too complex and not too rare, and not much else.

[-] lambdabeta@lemmy.ca 12 points 1 month ago

That's exactly what I so often find myself saying when people show off some neat thing that a code bot "wrote" for them in x minutes after only y minutes of "prompt engineering". I'll say: yeah, I could also do that in y minutes of bash scripting/vim macroing/system architecting/whatever, but the difference is that afterwards I have a reusable solution that I understand, that is automated and robust, and that didn't consume a ton of resources. And as a bonus, I got marginally better as a developer.

It's funny: if you stuck them in an RPG and gave them an ability to "kill any level 1-x enemy instantly, but gain no XP for it", they'd all see it as the trap it is, but they can't see how that's what AI so often is.

[-] raspberriesareyummy@lemmy.world 5 points 1 month ago

As you said, "boilerplate" code can be script generated - and there are IDEs that already do this, but in a deterministic way, so that you don't have to proof-read every single line to avoid catastrophic security or crash flaws.

[-] InvalidName2@lemmy.zip 8 points 1 month ago

And then there are actual good developers who could or would tell you that LLMs can be useful for coding, in the right context and if used intelligently. No harm, for example, in having an LLM build out some of your more mundane code like unit/integration tests, help you update your deployment pipeline, generate boilerplate that's not already covered by your framework, etc. That it's not able to write 100% of your codebase perfectly from the get-go does not mean it's entirely useless.
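For instance, the kind of mundane test scaffolding meant here might look like this vitest-style sketch (the `slugify` function and its cases are hypothetical, roughly what one would ask an LLM to draft and then review):

```typescript
import { describe, it, expect } from "vitest";

// Hypothetical function under test.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs to hyphens
    .replace(/^-|-$/g, "");      // strip leading/trailing hyphens
}

describe("slugify", () => {
  it("lowercases and hyphenates", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips leading/trailing separators", () => {
    expect(slugify("  --Hello--  ")).toBe("hello");
  });
});
```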

[-] Soggy@lemmy.world 13 points 1 month ago

Other than that it's work that junior coders could be doing, to develop the next generation of actual good developers.

[-] SreudianFlip@sh.itjust.works 8 points 1 month ago* (last edited 1 month ago)

Yes, and that's exactly what everyone forgets about automating cognitive work. Knowledge or skill needs to be intergenerational or we lose it.

If you have no junior developers, who will turn into senior developers later on?

[-] pinball_wizard@lemmy.zip 4 points 1 month ago

If you have no junior developers, who will turn into senior developers later on?

At least it isn't my problem. As long as I have CrowdStrike, Cloudflare, Windows 11, AWS us-east-1 and log4j... I can just keep enjoying today's version of the Internet, unchanged.

[-] rimu@piefed.social 5 points 1 month ago* (last edited 1 month ago)

FYI, this article was written with an LLM.

[image: screenshot of an LLM-detector result for the article]

Don't believe a story just because it confirms your view!

[-] LiveLM@lemmy.zip 7 points 1 month ago

Aren't these LLM detectors super inaccurate?

[-] AmbiguousProps@lemmy.today 6 points 1 month ago

I've heard that these tools aren't 100% accurate, but your last point is valid.

[-] SocialMediaRefugee@lemmy.world 5 points 1 month ago

Just sell it to AI customers for AI cash.

Vibe profits.

[-] Rhoeri@lemmy.world 5 points 1 month ago

AI is hot garbage and anyone using it is a skillless hack. This will never not be true.

[-] nullroot@lemmy.world 13 points 1 month ago

Wait so I should just be manually folding all these proteins?

[-] jbloggs777@discuss.tchncs.de 6 points 1 month ago

While this is a popular sentiment, it is not true, nor will it ever be true.

AI (LLMs & agents in the coding context, in this case) can serve as both a tool and a crutch. Those who learn to master the tools will gain benefit from them, without detracting from their own skill. Those who use them as a crutch will lose (or never gain) their own skills.

Some skills will become irrelevant in day-to-day life (as is always the case with new tech), and we will adapt in turn.
