I was working at a company at one point that got a contract to build something I viewed as equivalent to malware. I immediately brought it up to several higher-ups and said this was not something I was willing to do. One of them brought up the argument "If we don't do it, someone else will."
This mentality scares the shit out of me, but it explains a lot of horrible things in the industry.
Believing in that mentality is worse than the reality of the situation. At least if you say no, there's a chance it doesn't happen or it gets passed to someone worse at it than you. If you say yes, then not only are you complicit, you're actively reinforcing that gloomy mentality for other engineers. Just say no.
It's exactly this dangerous mindset that's driving us into some AI service hellhole. Too many super talented developers have told themselves exactly that instead of standing up for their principles, or even allowing themselves to have principles in the first place.
Only recently have they started leaving companies like OpenAI and taking a stance, because they're actually seeing what their creation is used for and with how little care for human life it's been handled.
Of course, many critics knew this was headed towards military contracts and complete enshittification. It was plain to see that the OpenAI founders aren't the good guys, but "someone else would do it anyway" kept the underlings happy. This deterministic fallacy is also why anyone still works for Meta or Google. It's a really lazy excuse.
Is this true, though? From my understanding, Altman was able to overrule the board largely because the employees (especially the ones who had been with the company for more than 1-2 years) were worried about their stock options.
I wouldn't be surprised if the vast majority of the OpenAI team are ghouls just like Altman, people who fundamentally lack humanity (incapable of honesty, unable to tell right from wrong, incapable of empathy).
Don't get me wrong, I don't mean this in the Hollywood sense, like the evil antagonists in, say, Star Wars; I am sure they come off as "normal" during a casual conversation. I am referring to going deeper and asking subtle questions about ethics and self-enrichment in an off-the-record environment. They will always come up with some excuse to justify their greed as being "for the betterment of humanity" or some other comical word salad.
You know, I'm worried you might be exactly on point with this assumption. I still give some of them the benefit of the doubt, because humans can "reason" themselves into pretty dangerous things by appealing to authority and the like. That doesn't make all of them evil, but it sure as hell makes them way too gullible for the field they're working in.
Happy cake day!
One thing I like to tell people with that attitude is: whatever someone does, there's always someone else who will see it as an example and a challenge to do it "better". Do you want to be the company that started that chain? Are you prepared to compete in that race?
For something borderline malware, someone will take your lead and make it “better malware”. If you are not prepared to respond in kind, then why did you even go there? If you’re not ready to be known as the top of the line malware creator, why start the product line?
One of the darker aspects of hyper-growth-focused tech and engineering is, unfortunately, the often highly mercenary/transactional nature of many people in the field. Like, there's a reason Facebook pays engineers 250-400k or more. Sure, the work can be difficult, but most of the time it's not that difficult. They're paying people that much so that they ignore their morals, shut the fuck up, take the paycheck, and do the work that is helping to destroy society.
It’s immensely distressing to me as a software engineer. I am fully aware that my morality is limiting my earning potential, and that makes me kinda furious - not so much at myself, but that our economic system is set up in such a way that that’s not only possible, but optimal (in terms of earning a nice paycheck and being able to retire somewhat early).