submitted 2 weeks ago by misk@sopuli.xyz to c/technology@lemmy.world
[-] adarza@lemmy.ca 203 points 2 weeks ago

AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

nothing to do with actual capabilities.. just the ability to make piles and piles of money.

[-] floofloof@lemmy.ca 54 points 2 weeks ago

The same way these capitalists evaluate human beings.

[-] LostXOR@fedia.io 20 points 2 weeks ago

Guess we're never getting AGI then; there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

[-] drmoose@lemmy.world 11 points 2 weeks ago* (last edited 2 weeks ago)

The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it's not a philosophical term but a business one.

[-] echodot@feddit.uk 10 points 2 weeks ago

Right, but that's not interesting to anyone but themselves. So why call it AGI then? Why not just say that once the company has made over X amount of money, it's split off into a separate company? Why lie and say you've developed something you might not have developed?

[-] drmoose@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago)

Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software company in the world, but alas it doesn't have anything to do with "AGI". For context, Apple's net income is $90 billion this year.

I've listened to enough interviews to know that all of the AI leaders want the holy-grail title of "inventor of AGI" more than anything else, so I don't think the definition will ever be settled collectively until something so mind-blowing exists that it renders the definition moot either way.

[-] NotSteve_@lemmy.ca 3 points 2 weeks ago

That's an Onion level of capitalism

[-] Mikina@programming.dev 81 points 2 weeks ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

If we ever get it, it won't be through LLMs.

I hope someone will finally prove mathematically that it's impossible with current algorithms, so we can be done with this bullshitting.

[-] SlopppyEngineer@lemmy.world 18 points 2 weeks ago

There are already a few papers about diminishing returns in LLMs.

[-] feedum_sneedson@lemmy.world 9 points 2 weeks ago

I just tried Google Gemini and it would not stop making shit up. It was really disappointing.

[-] bitjunkie@lemmy.world 3 points 2 weeks ago

I'm not sure "doesn't bullshit" should be a strict criterion for AGI, if whether or not it's been achieved is gauged by its capacity to mimic human thought.

[-] finitebanjo@lemmy.world 2 points 2 weeks ago

The LLMs aren't bullshitting. They can't lie, because they have no concepts at all. To the machine, the words are just numerical values with no meaning.

[-] frezik@midwest.social 39 points 2 weeks ago

We taught sand to do math

And now we're teaching it to dream

All the stupid fucks can think to do with it

Is sell more cars

Cars, and snake oil, and propaganda

[-] ChowJeeBai@lemmy.world 33 points 2 weeks ago

This is just so they can announce at some point in the future that they've achieved AGI to the tune of billions in the stock market.

Except that it isn't AGI.

[-] phoneymouse@lemmy.world 14 points 2 weeks ago* (last edited 2 weeks ago)

But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved

The real motivation is to not be beholden to Microsoft

[-] ArbitraryValue@sh.itjust.works 25 points 2 weeks ago* (last edited 2 weeks ago)

That's not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It's reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything which matters to Microsoft.
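
A quick back-of-the-envelope sketch of that comparison, using the headcount cited above and an assumed ballpark for Microsoft's annual net income (the exact figure varies by fiscal year):

```python
# Back-of-the-envelope: does $100B in profit out-earn Microsoft's entire workforce?
# Headcount is the figure cited in the comment; net income is an assumed ballpark.
agi_profit_threshold = 100e9   # the reported $100B contractual threshold
msft_employees = 228_000       # headcount cited above
msft_net_income = 88e9         # assumed ~recent annual net income, just under $100B

per_employee = msft_net_income / msft_employees
print(f"Microsoft net income per employee: ${per_employee:,.0f}")      # ~$386,000
print(f"$100B vs whole-company net income: {agi_profit_threshold / msft_net_income:.2f}x")
```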

[-] brie@programming.dev 15 points 2 weeks ago

Good observation. Could it be that Microsoft lowers profits by including unnecessary investments like acquisitions?

So it'd take 100M users signing up for the $200/mo plan. All it'd take is for the US government to issue vouchers for video generators to encourage everyone to become a YouTuber instead of being unemployed.
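
For a rough sanity check on the subscriber math (ignoring compute and other costs entirely, so this is gross revenue rather than profit):

```python
# Rough subscriber math for a $200/mo plan against the $100B threshold.
threshold = 100e9                             # the $100B profit threshold
monthly_price = 200
annual_revenue_per_user = monthly_price * 12  # $2,400 per user per year

users_for_100b = threshold / annual_revenue_per_user
print(f"Users needed for $100B/yr in gross revenue: {users_for_100b / 1e6:.1f}M")  # ~41.7M

# The 100M figure above would gross ~$240B/yr, leaving headroom for costs.
print(f"100M users would gross: ${100e6 * annual_revenue_per_user / 1e9:.0f}B/yr")
```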

[-] ArbitraryValue@sh.itjust.works 3 points 2 weeks ago* (last edited 2 weeks ago)

I suppose that by that point, the AI will be running Microsoft rather than simply being a Microsoft product.

[-] echodot@feddit.uk 4 points 2 weeks ago

Maybe it'll be able to come up with coherent naming conventions for their products. That would be revolutionary

[-] kautau@lemmy.world 3 points 2 weeks ago

That’s basically Neuromancer, and at this point it seems that big tech companies are reading dystopian cyberpunk literature as next-gen business advice books, so you’re certainly right

[-] echodot@feddit.uk 2 points 2 weeks ago

If they actually achieve AGI, I don't understand what money would even mean anymore. It's essentially just a mechanism for getting people to do things they don't otherwise want to do. If the AI can do it just as well as a human, for free other than the electricity costs, why the hell would you pay a human to do it?

It's like saving up money in case of nuclear war. There are a few particular moments in history where the state of the world on the far side of the event is so different from the world on this side that there's no point making any kind of plans based on today's systems.

[-] ArbitraryValue@sh.itjust.works 3 points 2 weeks ago* (last edited 2 weeks ago)

I see what you're saying and I agree that if, for example, we get an AI god then money won't be useful. However, that's not the only possible near-future outcome and if the world as we know it doesn't end then money can be used by AIs to get other AIs to do something they don't otherwise want to do.

[-] echodot@feddit.uk 2 points 2 weeks ago* (last edited 2 weeks ago)

My point is if AI takes over all of the work there won't be any jobs for humans. So they won't have any money.

So who are all the AI companies going to sell their products to? The whole system doesn't work in an AI future, and we don't need AI gods to be able to do our jobs; after all, most humans are idiots.

Also AI doesn't need motivation.

[-] echodot@feddit.uk 15 points 2 weeks ago

So they don't actually have a definition of AGI; they just have a point at which they're going to announce it, regardless of whether it actually is AGI or not.

Great.

[-] hendrik@palaver.p3x.de 13 points 2 weeks ago* (last edited 2 weeks ago)

Why does OpenAI "have" all of these things and just sit on them, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there... They have a definition of AGI... Yet they release none of it...

Some people even claim they already have a secret AGI, or that ChatGPT 5 will surely be it. I can see how that increases the company's value, and why you'd better not tell the truth. But with all the other things, it's just silly not to share anything.

Either they're even more greedy than the Metas and Googles out there, or all the articles and "leaks" are just unsubstantiated hype.

[-] Tattorack@lemmy.world 15 points 2 weeks ago

Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.

[-] mint_tamas@lemmy.world 13 points 2 weeks ago

Because they don't have all the things they claim to have, or they have them only with significant caveats. These things are publicised to fuel the hype that attracts investor money, which is pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.

[-] phoenixz@lemmy.ca 3 points 2 weeks ago

They don't have AGI. AGI also won't happen for a large number of years to come.

What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That's it.

AGI is a completely different beast from the current crop of LLMs.

[-] j4k3@lemmy.world 12 points 2 weeks ago

Does anyone have a real link to the non-stalkerware version of:

https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition

It's the only place with the reference this article claims to cite but doesn't quote.

this post was submitted on 27 Dec 2024
188 points (94.8% liked)
