top 2 comments
[-] codexarcanum@lemmy.dbzer0.com 1 points 8 hours ago

I'm with the commenters: people who make claims like this need to be treated as criminals for the fraud they're committing. LLMs can't even produce 70% of the working code in a single file, and even then the file can only be about 300 lines long at most before the model runs out of context.

The only code AI is writing is code that's already been written, which means OpenAI's number one competitor is SquareSpace. Because, sure, AI can definitely crank out another dropshipping, e-commerce, SEO hyper-optimized, ad-flooded, scam site in no time at all.

In my day-to-day work with legacy code I regularly run across 10K+ LOC files that AI can't even parse, let alone split into smaller working fragments, remove dead code from, or find and fix bugs in.

We have static analysis tools that can even do some of that better than LLMs.

Likewise, having managed teams of intelligent people developing complex networked systems, often the hardest problem a team faces is not hammering out shit loads of awful code (unless you work at MS, Google, or Meta) but rather figuring out what the correct question to ask is. Sometimes the solution is even simple and obvious, once you understand what the problem actually is.

Now, when it comes time to explain the problem to the C-suite after it's been discovered, maybe AI will be helpful there? After all, it seems the main thing LLMs are good at is convincing morons in suits to spend money on problems they don't understand.

[-] Vent@lemm.ee 1 points 10 hours ago

The 80/20 rule definitely applies here: if humans write the hardest 30% of the code, that 30% equates to ~82.5% of the work. Though, I'd argue AI is even less useful than that.
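The arithmetic behind that ~82.5% figure can be sketched as follows, assuming the Pareto split the commenter invokes (the hardest 20% of the code accounts for 80% of the effort, and effort is spread evenly within each band):

```python
# Pareto assumption: the hardest 20% of the code takes 80% of the effort;
# the remaining 80% of the code takes the other 20% of the effort.
hard_code, hard_effort = 0.20, 0.80
easy_code, easy_effort = 0.80, 0.20

human_code = 0.30  # humans write the hardest 30% of the code

# The human share covers all of the hard band, plus 10 more percentage
# points drawn proportionally from the easy band.
human_effort = hard_effort + (human_code - hard_code) / easy_code * easy_effort

print(f"{human_effort:.1%}")  # prints 82.5%
```

So the remaining "70% written by AI" would be only the easiest 17.5% of the effort under this assumption.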

this post was submitted on 14 Mar 2025
2 points (100.0% liked)

Hacker News


Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.
