submitted 11 months ago by btp@kbin.social to c/technology@lemmy.ml

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

[-] eestileib@sh.itjust.works 23 points 11 months ago

Deep Mind is actually delivering shit like an estimate of the entire human proteome structure and creating the transcendently greatest go player of all time.

Meanwhile these chucklefucks are using the same electricity demand as Belgium to replicate a math solver that could probably be assigned as a half-term project in an undergraduate class, and are pissing themselves about it threatening humanity.

The Valley has lost its goddamn mind.

[-] AngrilyEatingMuffins@kbin.social 11 points 11 months ago* (last edited 11 months ago)

The computer self-corrected based on its understanding of math principles that it learned through text. It’s not about the math. It used reason.

The computer had a thought. A rudimentary one, yes. But an actual thought.

I don’t really know what to say if you don’t see why that’s an amazing discovery.

Also the Belgium figure was a projection assuming demand kept growing at its current rate while the technology didn’t improve. The technology has already improved by two generations since that paper was written. It’s a crappy talking point and nothing else.

[-] Redex68@lemmy.world 5 points 11 months ago

You are missing the very crucial part about how this is generalised. That's like saying we don't need to teach math to people anymore, we have calculators now. The AI isn't too capable currently, but dismissing it would be like dismissing consumer PCs, because what are people gonna do with computers?

[-] sincle354@kbin.social 3 points 11 months ago

Valley bullshit aside, I do have to defend the expensive exploration of the generalized AI space purely because it's embarrassingly parallel. That is, it just gets so much better the more money and resources you throw at it. It couldn't solve math without a few million dollars' worth of supercomputer training time. We didn't know it would create valid VHDL-to-csv-to-VBA scripts, but I got phind(.com) to make me one. And I certainly can't tell Wolfram Alpha to package the math solution it generated as a Javascript function.
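"Embarrassingly parallel" just means the work splits into independent chunks with no coordination needed between them, so throwing more workers at it scales almost linearly. A minimal sketch (a toy scoring job over data shards, not anything resembling actual LLM training; `shard_loss` is a made-up stand-in):

```python
from multiprocessing import Pool

def shard_loss(shard):
    # Hypothetical per-shard work: each worker scores its slice of the
    # data independently, which is what makes this embarrassingly parallel.
    return sum(x * x for x in shard)

def total_loss(data, n_workers=4):
    # Split the data into interleaved shards, one per worker.
    shards = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        # Workers never talk to each other; results are just summed at the end.
        return sum(pool.map(shard_loss, shards))

if __name__ == "__main__":
    print(total_loss(list(range(1000))))  # 332833500
```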

[-] p03locke@lemmy.dbzer0.com 2 points 11 months ago

Deep Mind is actually delivering shit like an estimate of the entire human proteome structure and creating the transcendently greatest go player of all time.

Not to mention the huge advances in Chess AI. LeelaChessZero is the open-source implementation of the original AlphaZero idea Google came out with, and is rivaling Stockfish 15. Meanwhile, Torch is a new AI being developed that is now kicking Stockfish's ass.

Grandmasters and novices alike are learning a lot from chess AI, figuring out better ways to improve themselves, either by playing them outright, using them for post-game analysis, or watching two bots play and seeing the kinds of creative strategies they can come up with.

[-] technojamin@lemmy.world 1 points 11 months ago

While I agree that a lot of the hype around AI goes overboard, you should probably read this recent paper about AI classification: https://arxiv.org/abs/2311.02462

DeepMind's systems are narrow AI, whereas LLMs are general AI.

[-] spark947@lemm.ee 1 points 11 months ago

Not really. The implementation of LLMs is mostly the same; they just run continuously on a per-word (token) basis.
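For anyone unclear what "per-token" means: the model is called once per output token, and each prediction is fed back in as input for the next call. A toy sketch of that loop (`toy_model` is a made-up stand-in rule, not a real LLM):

```python
def toy_model(tokens):
    # Hypothetical next-token rule standing in for a real model's forward
    # pass: predict "last token plus one".
    return tokens[-1] + 1

def generate(prompt_tokens, n_new):
    # Autoregressive loop: one model call per generated token, with each
    # new token appended to the context for the next call.
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        tokens.append(toy_model(tokens))
    return tokens

print(generate([1, 2, 3], 4))  # [1, 2, 3, 4, 5, 6, 7]
```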

this post was submitted on 23 Nov 2023
49 points (88.9% liked)
