this post was submitted on 26 Jun 2025
that's pretty wild as a difference
Generative AI is not an inherently evil technology. If I had any trust in Western institutions whatsoever I wouldn't have as much of an issue with it.
I read both this one and RotJD and boy howdy was that a wild ride. It seems like he veers off course with the occasional wild claim like:
But maybe I'm still too skeptical. I'm thinking about the news that Louisiana is building a bunch of new natural gas plants to serve Meta's new data center. Regardless of how well "AI" functions at solving actual problems (instead of just coming up with ever more elaborate ways to serve ads) or when it plateaus, it seems like we're lashed to the mast.
Compute power doubling every 100 days certainly means the text extruder will manifest intelligence, right? The deterministic statistical matrix will definitely come alive and help us tangibly improve stakeholder value, right?
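For scale, the "doubling every 100 days" claim compounds fast. A quick back-of-envelope sketch (the 100-day doubling period is taken from the claim as stated, not verified):

```python
# Compound growth implied by "compute doubles every 100 days".
doubling_period_days = 100  # claimed doubling period (assumption from the quote)
days_per_year = 365

# Number of doublings per year, then the resulting growth factor.
growth_per_year = 2 ** (days_per_year / doubling_period_days)
print(f"~{growth_per_year:.1f}x compute per year")  # ~12.6x compute per year
```

That's roughly a 12.6-fold increase in compute every year, which is exactly the kind of curve that gets extrapolated into "the text extruder will manifest intelligence."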
The whole blog series was about how agentic LLM coders could conceivably turn the software industry on its head, which I think is plausible, but then he seamlessly segues into stuff like the quote above on the basis of no credible information. There's also been some noise about world models, which are supposed to better approximate human reasoning, as a way of getting around the limitations of LLMs, but I don't know how credible those claims are. I think the current scenario is that these things are useful enough to cause substantial disruptions in tech, but the promised resolutions to the contradictions of capitalism will always be over the next horizon. However, if LLMs have demonstrated anything, it's that it takes less than you'd think to fool a large number of people into believing we've reached AGI, and the implications of that are a little scary.
So I haven't read anything by Steve Yegge before, but looking into it now, I see he's the head of a company that sells tooling that leverages the very models/agents he's saying will turn the industry on its head. Not saying he's wrong, it just seems like everyone who says AI will do X is a person who stands to profit handsomely if everyone believes that AI will do X.
It's a bunch of people selling shovels trying to convince everyone else there's a gold rush
Yeah, the problem is that if the AI is convincing enough at appearing to do X and the rush to adopt happens very quickly, then there's the potential that a lot of damage could get done.
Or if AI does in fact do X, it'll just punch the accelerator on every negative trend in tech.
Although I'm always wrong about everything, I'm still open to Ed Zitron's "something big" happening and the bottom falling out of it all (although I'm sure it's too big to fail by now).
I'm pretty sure this was the plot of a Pinky and the Brain episode.