“Top-down mandates to use large language models are crazy,” one employee told Wired. “If the tool were good, we’d all just use it.”
Yep.
Management is often out of touch and full of shit
You wanna know who really bags on LLMs? Actual AI developers. I work with some, and you've never heard someone shit all over this garbage like someone who works with neural networks for a living.
That's me, but for QA...
Our company renamed our ML team to the AI team just to please investors. They've been around for over a decade and never touched an LLM.
There's this great rage blog post from 1.5 years ago by a data scientist
Management: "No, that doesn't work, because employees spend so much time doing the actual work that they lack the vision to know what's good for them. Luckily for them I am not distracted by actual work so I have the vision to save them by making them use AI."
“If the tool were good, we’d all just use it.”
Eggs-mothafucking-zackly!!!
There are no daily pressure campaigns to convince you to use a laptop or a smartphone. The value of those is self-evident.
AI on the other hand... -_-
At work today we had a little presentation about Claude Cowork. And I learned someone used it to write a C (maybe C++?) compiler in Rust in two weeks at a cost of $20k and it passed 99% of whatever hell test suite they use for evaluating compilers. And I had a few thoughts.
I think this is a cool thing in the abstract. But in reality, they cherry picked the best possible use case in the world and anyone expecting their custom project is going to go like this will be lighting huge piles of money on fire.
Agree with all points. Additionally, compilers are also incredibly well specified via ISO standards etc., and have multiple open source codebases available, e.g. GCC, which is available in multiple builds and implementations for different versions of C and C++, and DQNEO/cc.go.
So there are many fully-functional and complete sources that Claude Cowork would have pulled routines and code from.
The vibe coded compiler is likely unmaintainable, so it can't be updated when the spec changes even assuming it did work and was real. So you'd have to redo the entire thing. It's silly.
Updates? You just vibecode a new compiler that follows the new spec
"I want to add a command line option that auto generates helloworld.exe"
"That'll be $21,000."
Ah, that's the problem, we've been getting all these chatbots to generate "hellworld.exe".
I wanna make sure I got this right. They used $20,000 in fees in 2 weeks to make a compiler? Also, to what end? Like what's the expected ROI on that?
Well it's Anthropic, creators of Claude. It's a way to show off and convince people AI can do it. $20k is what it would cost you or me, but it's just free for them.
I don't even hate AI but it's kinda sickening the way they overstate the capabilities. But let me tell you how excited the top leadership at my company is about this...
Man, corporate layoffs kill productivity completely for me.
Once you do layoffs >50% of the job becomes performative bullshit to show you’re worth keeping, instead of building things the company actually needs to function and compete.
And the layoffs are random with a side helping of execs saving the people they have face time with.
From reading all the comments from the community, it’s amazing (yet not surprising) that all these managers have fallen for the marketing of all these LLMs. These LLMs have gotten people from all levels of society to just accept the marketing without ever considering the actual results for their use cases. It’s almost like the sycophantic nature of all LLMs has completely blinded people from being rational, just because it is shiny and it spoke to them in a way no one has in years.
On the surface level, LLMs are cool, no doubt; they do have some uses. But past that, everyone needs to accept their limitations. LLMs by nature cannot operate the same way as a human brain. AGI is such a long shot because of this, and it’s a scam that LLMs are being marketed as AGI. How can we attempt to recreate the human brain as AGI when we are nowhere close to mapping out how our own brains work in a way that could translate into code, let alone other, simpler brains in the animal kingdom?
I don't think LLMs will become AGI, but... planes don't fly by flapping their wings. We don't necessarily need to know how animal brains work to achieve AGI, and it doesn't necessarily have to work anything like animal brains. It's quite possible if/when AGI is achieved, it will be completely alien.
Aircraft wings operate on pretty much the same principle as bird wings do. We just used a technology we had already developed (fans, essentially) to create the forward movement necessary to create the airflow over the wings for lift. We know how to do it the bird way too, but restrictions in material science at scale make the fan method far easier and less error prone.
“From reading all the comments from the community, it’s amazing (yet not surprising) that all these managers have fallen for the marketing of all these LLMs.”
This is probably related to automation bias and wishful thinking
One of the best written comments I've seen about this. LLMs are cool for what they can do, but anyone comparing them to AGI is just shilling and trying to make a fortune off of selling pickaxes in a gold rush with the only gold being fools gold.
I can’t wait until billionaires realize how worthless they actually are without people doing everything for them
Might be a minute. The brain damage that lets them think they've "earned" those billions kinda hides the work of others. Especially the poors.
Yep they’re definitely mentally ill
We can't wait for them to realize this themselves. We need to demonstrate it by actively creating a society that excludes them.
They will never realize that, they will blame any failures on others naturally. They truly believe they are better than everyone else, that their superior ability led them to invest in a company that increased in value enough for them to become filthy rich.
Surrounded by yes-men and yes-women who agree with everything they say and tell them what a genius they are. Of course any ill outcome isn't their fault.
"All my successes are thanks to my superior intellect and skill! All my failures are the fault of bad serfs who didn't follow my vision!" - Every billionaire
When you think about it, it's not too different from how some people treat the current crop of AI, so it makes sense that they're so hypnotized by the promises.
Who?
The original creator of Twitter and now creator of Bluesky and whatever this thing that's falling off the rails is.
Basically another billionaire living in his own little bubble and huffing his own farts too much.
He also had a lot to do with Nostr, early on.
Jack Dorsey has endorsed and financially supported the development of Nostr, donating approximately $250,000 worth of Bitcoin to the project's developers in 2023, as well as making a $10 million cash donation to a Nostr development collective in 2025.
What?
Oops, I misread my source. I've updated my comment.
He left Bluesky around 2 years ago.
That must be why they are doing okay, haha.
Uhhh, Block is the parent company of Square (formerly known as Square Up). This is actually a huge company, not some little side thing.
Is that thumbnail a scene from 12 monkeys?
Naw. This is clearly just 1 monkey.
Right before he dies, yeah
I had a meeting with my boss today about my AI usage. I said I tried using Claude 4.5, and I was ultimately unimpressed with the results, the code was heavy and inflexible. He assured me Claude 4.6 would solve that problem. I pointed out that I am already writing software faster than the rest of the team can review because we are short staffed. He suggested I use Claude to review my MRs.
One big problem with management is their inability to listen. Folks say shit over and over but management seems deaf because we're not people to be listened to. We're the help. And management acts like they know better.
If you were so smart you'd have wads of cash like them. They got where they are through sheer grit and bootstraps and a paltry $50 million from their family.
What software are you writing? I'm struggling to see what any of this does, and for whom. We could set all these AI computers on fire and what would it change? We have water, food, electricity, clothes, homes, cars, etc. Oh, we have AI Becky videos!