submitted 1 year ago* (last edited 1 year ago) by ChunkMcHorkle@lemmy.world to c/technology@lemmy.world

Sam Altman has been fired as CEO of OpenAI, the company announced on Friday.

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company said in its blog post.

EDITED TO ADD direct link to OpenAI board announcement:
https://openai.com/blog/openai-announces-leadership-transition

[-] MargotRobbie@lemmy.world 33 points 1 year ago

Not exactly surprised here. Every time I've seen him on the news, it's always him fearmongering about the dangers of generative AI, while ChatGPT is burning through money and seems to become more and more restrictive with every iteration. You can't run an organization if it is built on top of lies.

Actually-open models (not open source, sadly) like specialized LLaMa 2 derivatives that can be run and fine-tuned locally seem to be the future, because there appear to be diminishing returns of training/inference compute on usefulness, and specialized smaller models tuned for specific applications are much more flexible than a giant general one that can only be used on somebody else's machine.
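The diminishing-returns claim can be illustrated with a toy calculation. The sketch below uses the Chinchilla-style power-law loss fit (Hoffmann et al.), where loss falls off as a power of parameter count and token count; the coefficients are the published Chinchilla fits, used here purely as an illustrative assumption rather than a claim about any particular model.

```python
# Toy illustration of diminishing returns in scaling.
# Chinchilla-style fit: loss(N, D) = E + A / N^alpha + B / D^beta
# Coefficients below are the published Chinchilla estimates, treated
# here as illustrative assumptions only.

def loss(params: float, tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / params**alpha + B / tokens**beta

# Doubling model size repeatedly at a fixed 1.4T-token budget:
for n_params in (7e9, 14e9, 28e9):
    print(f"{n_params:.0e} params -> loss {loss(n_params, 1.4e12):.4f}")
# Each doubling buys a smaller loss improvement than the last one,
# which is the diminishing-return effect described above.
```

Because both terms are power laws, each doubling of parameters (or data) shaves off a strictly smaller slice of loss than the previous doubling did.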

[-] kromem@lemmy.world 25 points 1 year ago

because there seems to be a diminishing return in training/inference power to usefulness

Be careful not to be caught up in the application of Goodhart's Law going on in the field right now.

There's plenty of things GPT-4 trounces everything else on, they just tend to be things outside the now standardized body of tests, which suggests the tests have become the target and are no longer effective measurements.

This is perhaps most apparent in things like Orca, where we directly use the tests as the target, have GPT-4 generate synthetic data that improves Llama performance on the target, and then see large gains in smaller models on the tests.
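The Orca-style recipe described above can be sketched as a simple distillation loop: a strong teacher model produces step-by-step explanations for benchmark-like prompts, and the resulting (prompt, completion) pairs become supervised fine-tuning data for a smaller student. The teacher below is a hypothetical stub standing in for what would really be a GPT-4 API call; the function names and data layout are illustrative, not Orca's actual pipeline.

```python
# Hypothetical sketch of benchmark-targeted distillation (Orca-style).
# `teacher` is a stub standing in for a GPT-4 call; in the real recipe
# it would return a chain-of-thought explanation for the prompt.

def teacher(prompt: str) -> str:
    # Stand-in for the strong model's step-by-step answer.
    return f"Let's think step by step about: {prompt}"

def build_distillation_set(prompts: list[str]) -> list[dict]:
    # Pair each benchmark-like prompt with the teacher's explanation.
    return [{"prompt": p, "completion": teacher(p)} for p in prompts]

benchmark_prompts = ["What is 17 * 23?", "Summarize Goodhart's Law."]
dataset = build_distillation_set(benchmark_prompts)
# `dataset` would then feed a standard supervised fine-tune of a
# smaller Llama model — which is exactly how the tests become the
# target rather than an independent measurement.
```

The Goodhart's-law concern falls straight out of this loop: if the prompts are drawn from (or near) the evaluation suite, the student's score on that suite improves without any guarantee its broader abilities do.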

But those new models don't necessarily gain the same footing on more abstract capabilities, such as the recently demonstrated approach of solving problems by analogy.

We are arguably becoming too myopic in how we are measuring the success of new models.

this post was submitted on 17 Nov 2023
534 points (99.1% liked)
