253 points, submitted 11 months ago by misk@sopuli.xyz to c/technology@lemmy.world
[-] NounsAndWords@lemmy.world 338 points 11 months ago

So a Board member wrote a paper about focusing on safety above profit in AI development. Sam Altman did not take kindly to this concept and started pushing to fire her (to which end he may or may not have lied to other Board members to split them up). Sam gets fired for trying to fire someone for putting safety over profit. Everything exploded and now profit is firmly at the head of the table.

I like nothing about this version of events either.

[-] GregorGizeh@lemmy.zip 106 points 11 months ago* (last edited 11 months ago)

Wasn’t that evident from the very first few days, when we learned the board stood for the non-profit, safety-first parent org while the booted CEO stands for reckless monetization?

Now he’s back, the safety concerns got silenced, money can be made, people can get fucked. A good day for capitalists

[-] jeena@jemmy.jeena.net 71 points 11 months ago

That's why I was so confused that all the workers stood behind the CEO and threatened to go to Microsoft.

[-] ours@lemmy.world 62 points 11 months ago

My guess is that they want the company to grow fast so that their salaries and stock options grow as well.

[-] dustyData@lemmy.world 45 points 11 months ago* (last edited 11 months ago)

That's what a personality cult gets you. The amount of idiots willing to die for another man's ego is why we have some of the shittiest things in society. “Daddy told me so” is a powerful force when the people who believe it cannot see that their vision has absolutely no rational support. Jobs, Musk, Gates, Trump, they all thrive by telling people that their irrational beliefs are true and if they follow them they will make their dreams realities. The talk and narrative around Altman has always struck me similar to Musk's cult of personality in the late 2010s.

[-] APassenger@lemmy.world 15 points 11 months ago* (last edited 11 months ago)

Stock options help. If they make enough off of OpenAI, they won't need to find a job after this.

[-] dustyData@lemmy.world 9 points 11 months ago

This is tech, they have no protections. I bet there's some clause with a time lock that they can only sell the stock in 10 years time and they lose them if they leave OpenAI before that time window for any reason. In 5 years or before they'll get hit by some mass layoffs and lose everything. This has happened so many times before with so many companies that it is laughable. Stock options in tech are a fairy tale.

[-] DragonTypeWyvern@literature.cafe 6 points 11 months ago

Especially in a company that's a non-profit, lmao.

Sheep gonna sheep.

[-] FrostyTrichs@lemmy.world 11 points 11 months ago* (last edited 11 months ago)

The amount of idiots willing to die for another man's ego

U.S. Military has entered the chat

[-] TimeSquirrel@kbin.social 6 points 11 months ago* (last edited 11 months ago)

I'm not sure Gates ever had a "personality cult". In the 90s during his heyday he was pretty much reviled even by Windows users. He built his empire by swallowing everyone else around him that was doing anything even a little bit innovative. He wasn't really the "visionary artist/engineer" type like those others. Just a random rich nerd who won the technology monopoly game.

[-] raspberriesareyummy@lemmy.world 15 points 11 months ago

Like @Zak, I would like to point out that - as much as I despised Bill Gates back then - he was actually competent. And - despite me never liking Microsoft - they have a legitimate business model built on selling products, not user data (unlike all social media and Google). So of all the evil dipshits out there, Microsoft and Apple are the lesser ones. (I have been a Linux user since 2004 or so)

[-] Zak@lemmy.world 10 points 11 months ago

Early accounts are that Bill Gates was absolutely a talented coder, at least in the 1970s. Of course that wasn't what made him rich - a series of business decisions that were some combination of lucky and prescient were.

[-] trafalgar225@lemmy.world 15 points 11 months ago

The company gave the employees a large amount of equity. That was the work of Sam Altman. The employees are voting their wallets by sticking up for him.

[-] NounsAndWords@lemmy.world 10 points 11 months ago* (last edited 11 months ago)

That was some classic business pressure tactics. The sort of thing a massive multinational corporation would have a lot of experience in. The sort of thing a massive multinational corporation suddenly blindsided by this with a lot of financial interest in the situation would be interested in doing....while at the same time mitigating risk by trying to pull those same employees into the parent company if things don't go their way.

Edit: Now that I think about it, they also managed to get the vast majority of employees to 'join together' on the issue making it (psychologically) easier for them to 'join together' in choosing where to jump ship to. Maybe I'm just paranoid, but it's just a really clever move on Microsoft's part.

[-] DragonTypeWyvern@literature.cafe 1 points 11 months ago

Using a playbook isn't clever, writing it was.

[-] SkyeStarfall@lemmy.blahaj.zone 59 points 11 months ago

I feel like this isn't surprising knowing about all the other stuff altman has done. Seems like yet another loss for the greater good in the name of profit.

[-] dependencyinjection@discuss.tchncs.de 14 points 11 months ago

What other stuff has he done? Genuinely curious.

[-] NounsAndWords@lemmy.world 13 points 11 months ago

Now what would the company do if the AI model started putting safety above profit (i.e. refusing to lie to profit the user (aka reducing market value))? How fucked are we if they create an AGI that puts profit above safety?

[-] HopeOfTheGunblade@kbin.social 4 points 11 months ago

Entirely. We all die. The light cone is turned into the maximum amount of "profit" possible.

This is still better than a torment maximizer, which may come as some comfort to the tiny dollar bills made of the atoms that used to be you.

[-] ipkpjersi@lemmy.ml 1 points 11 months ago

So basically it's exactly what I expected and I'm not surprised in the slightest. Amazing how that works.

It's not too surprising considering they don't even have basic essential security features in 2023 like two-factor authentication. Absolutely pitiful.

[-] seiryth@lemmy.world 119 points 11 months ago

The thing that shits me about this is Google appear to the public to be late to the party, but the reality is they DID put safety before profit when it came to AI. The sheer amount of research and papers put out by them on AI should have proven to people they know what they're doing.

And then OpenAI threw caution to the wind and essentially made Google and others panic and knee-jerk, because there's real money to be made, and now everyone seems to be throwing caution to the wind and pushing it into the mainstream before society is ready.

All in the name of shareholders.

[-] blazeknave@lemmy.world 11 points 11 months ago

10k%! A friend works in brand marketing at Google. They'd been using it internally for months before market pressure forced them to start onboarding public end users. I've been in the earliest of the external betas (bc I give a lot of product feedback over the years?) and from the beginning the user experiences have been the most locked down of all the consumer LLMs.

[-] danielfgom@lemmy.world 38 points 11 months ago

Yet another jackass CEO. Anybody surprised?

[-] autotldr@lemmings.world 17 points 11 months ago

This is the best summary I could come up with:


Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman's negative attention by co-writing a paper on different ways AI companies can "signal" their commitment to safety through "costly" words and actions.

In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."

She also wrote that, "by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."

At the same time, Duhigg's piece also gives some credence to the idea that the OpenAI board felt it needed to be able to hold Altman "accountable" in order to fulfill its mission to "make sure AI benefits all of humanity," as one unnamed source put it.

"It's hard to say if the board members were more terrified of sentient computers or of Altman going rogue," Duhigg writes.

The piece also offers a behind-the-scenes view into Microsoft's three-pronged response to the OpenAI drama and the ways the Redmond-based tech giant reportedly found the board's moves "mind-bogglingly stupid."


The original article contains 414 words, the summary contains 215 words. Saved 48%. I'm a bot and I'm open source!

this post was submitted on 06 Dec 2023
253 points (95.7% liked)