this post was submitted on 30 Dec 2025
108 points (99.1% liked)
Programming
Many companies are tying monitored AI usage to performance evaluations. So yeah, we're going to be "willing" under that pretext.
Try finding even one senior dev willing to debug a multi-thousand-line application produced by genAI. They're going to be reluctant at best, because the code is slop.
MBAs and C-suites keep trying to manufacture consent for this tech so their stock portfolios outperform, and the madmen are willing to sacrifice your jobs to do it!
That just sounds like good old-fashioned mismanagement. Any examples of successful companies that are doing this?
Salesforce definitely does it. Coinbase also recently fired a bunch of devs for not using it (https://techcrunch.com/2025/08/22/coinbase-ceo-explains-why-he-fired-engineers-who-didnt-try-ai-immediately/)
Salesforce also recently admitted they were too hasty when they tried to replace humans with AI: https://www.investmentwatchblog.com/salesforce-now-admits-its-ai-agents-were-unreliable-after-cutting-4000-jobs/
A reason to look somewhere else. Not out of protest, but because their evaluation process (and likely more) is fucked. You should do that every 2-3 years anyway.