top 34 comments
[-] Mynameisallen@lemmy.zip 34 points 1 week ago

This is what all the parts we wanted went to

[-] Earthman_Jim@lemmy.zip 13 points 1 week ago

Yeah, I wonder how long it will take them to clue in that no one wants to trade gaming for an AI fucking girlfriend ffs...

[-] Mynameisallen@lemmy.zip 8 points 1 week ago

Until the money stops pouring in I suppose

[-] Ebby@lemmy.ssba.com 5 points 1 week ago* (last edited 1 week ago)

Idk... I'm a tad excited to buy one for peanuts on eBay in a couple years for a local smart home upgrade. Heck, when the bubble pops, maybe they can sell power from all those generators back to the city and lower our utility bills too. /S

[-] foggenbooty@lemmy.world 1 points 1 week ago

I know you put /S, but for other people reading this: it will not be an option. These only work in specialized servers that you will not be able to run at home (unless you're a mad scientist type).

[-] Earthman_Jim@lemmy.zip 1 points 1 week ago

They might be useful for rendering, and I'd love to see how smoothly Teardown could run with all those cores.

[-] roofuskit@lemmy.world 4 points 1 week ago

Don't worry, you can rent them for $30 a month and stream all your video games.

[-] Cocodapuf@lemmy.world 8 points 1 week ago

Jesus fucking Christ, 288GB. And this is why I can't have 16?

[-] Corkyskog@sh.itjust.works 4 points 1 week ago

And you have to buy them in a rack of 72.

[-] xxce2AAb@feddit.dk 5 points 1 week ago

Goodbye, sweet hardware. You deserved better and so did we.

[-] RegularJoe@lemmy.world 5 points 1 week ago

Nvidia's Vera Rubin platform is the company's next-generation architecture for AI data centers that includes an 88-core Vera CPU, Rubin GPU with 288 GB HBM4 memory, Rubin CPX GPU with 128 GB of GDDR7, NVLink 6.0 switch ASIC for scale-up rack-scale connectivity, BlueField-4 DPU with integrated SSD to store key-value cache, Spectrum-6 Photonics Ethernet, and Quantum-CX9 1.6 Tb/s Photonics InfiniBand NICs, as well as Spectrum-X Photonics Ethernet and Quantum-CX9 Photonics InfiniBand switching silicon for scale-out connectivity.

[-] TropicalDingdong@lemmy.world 5 points 1 week ago

> 288 GB HBM4 memory

jfc..

Looking at the specs... fucking hell, these things probably cost over $100k.

I wonder if we'll see a generational performance leap with LLMs scaling to this much memory.

[-] AliasAKA@lemmy.world 7 points 1 week ago* (last edited 1 week ago)

Current models are speculated at 700 billion parameters plus. At 32-bit precision (full single-precision floats), that's 2.8 TB of RAM per model, or about 10 of these units just for the weights. There are ways to lower it, but if you're training at full precision you'd need well over 2x that, maybe more like 4x, depending on how you store gradients and optimizer updates. It's possible they still train at 32-bit, but I'd be kind of surprised.

Edit: Also, they don't release the numbers anymore, but some folks think newer models are around 1.5 trillion parameters, so figure roughly 2-3x the numbers above for those. The only real strategy for these guys is bigger. I think it's dumb, and the returns are diminishing rapidly, but you've got to sell the investors. If reciting nearly whole works verbatim is easy now, it's going to be exact if they keep going: they're approaching parameter counts big enough to just straight up store things in the parameter space.
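
A quick back-of-the-envelope version of that memory math; the parameter counts and byte widths below are the speculative figures from this thread, not published numbers:

```python
# Rough memory needed just to hold a dense model's weights.
# Parameter counts and precisions are the speculative figures from this
# thread, not published numbers; ignores KV cache, activations, and
# optimizer state (which is what pushes training to 2-4x these sizes).

def weights_tb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e12  # terabytes

for params, label in [(700e9, "~700B params"), (1.5e12, "~1.5T params")]:
    for bpp, prec in [(4, "fp32"), (2, "fp16/bf16")]:
        tb = weights_tb(params, bpp)
        gpus = tb * 1000 / 288  # Rubin GPU is quoted at 288 GB HBM4 each
        print(f"{label} @ {prec}: {tb:.1f} TB of weights, ~{gpus:.0f} GPUs worth of HBM")
```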

Sure, but giant-context models are still more prone to hallucination and to reinforcing confidence loops, where they keep spitting out the same wrong result in a different way.

[-] boonhet@sopuli.xyz 4 points 1 week ago* (last edited 1 week ago)

LLMs can already use way more than that, I believe; they don't really run them on a single one of these things anyway.

The HBM4 would likely be great for speed though.
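
To be clear about why the HBM4 matters: single-stream LLM decoding is roughly memory-bandwidth bound, since each generated token has to stream the active weights through the chip. A toy estimate (both numbers below are hypothetical placeholders, just to show the shape of the calculation):

```python
# Toy estimate of why HBM bandwidth dominates single-stream decode speed.
# Both figures are hypothetical placeholders, not Rubin specs.

bandwidth_gb_s = 8_000    # hypothetical per-GPU HBM4 bandwidth, GB/s
active_weights_gb = 140   # hypothetical active weights, e.g. ~70B params at fp16

# Upper bound: every token must read the active weights once from HBM.
tokens_per_s = bandwidth_gb_s / active_weights_gb
print(f"rough upper bound: ~{tokens_per_s:.0f} tokens/s for a single stream")
```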

[-] panda_abyss@lemmy.ca 2 points 1 week ago

Yeah they’re going to cost as much as a house.

I think we'll see much larger active portions of larger MoEs, and larger context windows, which would be useful.
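
For anyone who hasn't met the MoE distinction: total parameters set the memory bill, while only the routed "active" slice sets per-token compute. A toy sketch, where every number is invented for illustration rather than taken from any real model:

```python
# Toy MoE sizing: memory scales with total parameters, per-token compute
# scales with the active slice. All numbers are invented for illustration.

total_params = 1.5e12      # hypothetical total parameter count
shared_fraction = 0.2      # hypothetical always-active (non-expert) share
num_experts = 16           # hypothetical expert count per MoE layer
experts_per_token = 2      # experts routed to for each token

expert_params = total_params * (1 - shared_fraction)
active_params = (total_params * shared_fraction
                 + expert_params * experts_per_token / num_experts)

print(f"memory holds all {total_params / 1e12:.1f}T parameters")
print(f"but only ~{active_params / 1e9:.0f}B are active per token")
```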

The non-LLM models I run would benefit a lot from this, but I don't know if I'll ever be able to justify the cost, given how much these will go for.

Fundamentally, no: linear progress requires exponential resources. The article below is about AGI, but transformer-based models won't benefit from just more grunt either. We're at the software stage of the problem now. But that doesn't sign fat checks, so the big companies are incentivized to print money by building more hardware.

https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
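
The "linear progress requires exponential resources" point is basically what power-law scaling fits imply: if loss falls as a power of compute, each equal step of improvement costs a multiplicatively bigger budget. A toy illustration with made-up constants:

```python
# Toy illustration of diminishing returns under a power-law scaling fit:
# loss L(C) = a * C**(-alpha), so equal reductions in loss need
# multiplicatively more compute. Constants are made up for the example.

a, alpha = 10.0, 0.05

def compute_for_loss(target_loss: float) -> float:
    # Invert L = a * C**(-alpha)  ->  C = (a / L) ** (1 / alpha)
    return (a / target_loss) ** (1 / alpha)

prev = None
for loss in [3.0, 2.5, 2.0, 1.5]:
    c = compute_for_loss(loss)
    note = "" if prev is None else f" ({c / prev:.0f}x the previous budget)"
    print(f"loss {loss}: compute ~{c:.2e}{note}")
    prev = c
```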

Also the industry is running out of training data

https://arxiv.org/html/2602.21462v1

What we need are more efficient models and better harnessing, or a different approach; reinforcement learning applied to RNNs that use transformers has been showing promise.

[-] TropicalDingdong@lemmy.world 0 points 1 week ago* (last edited 1 week ago)

Yeah, I've read that before. I don't necessarily agree with their framework, and even working within it, this article is about a challenge to their third bullet.

I'm just not quite ready to rule out the idea that if you can scale single models above a certain boundary, you'll get fundamentally different, novel behavior. That's consistent with other networked systems, and somewhat consistent with the original performance leaps we saw (the ones I think really matter are from 2019-2023; it has really plateaued since, and what's left is mostly engineering tinkering at the edges). It genuinely could be that an MoE configuration of eight of these, each maxed out by a single model, would show a very different level of performance. We just don't know, because we can't test that with the current generation of hardware.

It's possible there really is something "just around the corner"; possible, but unlikely.

> What we need are more efficient models and better harnessing, or a different approach; reinforcement learning applied to RNNs that use transformers has been showing promise.

Could be. I'm not sure tinkering at the edges is going to get us anywhere, and I think I'd agree with the energy-density argument coming out of the Dettmers blog. Relative to intelligent systems, the power-to-compute performance (if you want to frame it like that) is trash. You just can't get there with computation systems like the ones we all currently use.

[-] in_my_honest_opinion@piefed.social 1 points 1 week ago* (last edited 1 week ago)

I mean, what you're proposing was the initial push of GPT-3. All the experts said these GPTs would only hallucinate more with more resources, and that they'd never do anything more than repeat their training data as word salad posing as novelty. And on a very macro scale, they were correct.

The scaling problem
https://arxiv.org/abs/2001.08361

The scaling hype
https://gwern.net/scaling-hypothesis

Ultimately, hype won out.

[-] yogurtwrong@lemmy.world 1 points 1 week ago

The buzzwords make my head hurt. Sounds like a copypasta

Almost like an LLM wrote it...

[-] Earthman_Jim@lemmy.zip 5 points 1 week ago
[-] FaceDeer@fedia.io 5 points 1 week ago

You're in a community called "Technology" and it's got a bunch of upvotes, so us, presumably.

[-] Earthman_Jim@lemmy.zip 3 points 1 week ago

The news is important, but when it comes to user-end AI in general, big fucking meh.

[-] gnawmon@ttrpg.network 4 points 1 week ago

so that's why my 5070 laptop has 8 GB of VRAM...

my old 1080 also had 8 GB of VRAM

[-] kittenzrulz123@lemmy.dbzer0.com 1 points 1 week ago

Your 5070 laptop has 8 GB of VRAM? My desktop 3060 has 12 GB of VRAM, and it's not even the Ti version.

[-] zebidiah@lemmy.ca 4 points 1 week ago

THIS is why we can't have nice things....

[-] phoenixz@lemmy.ca 3 points 1 week ago

And none of us will be allowed to have them

Only datacenters and Fortune 500 companies will be able to use anything Nvidia makes

[-] Corkyskog@sh.itjust.works 2 points 1 week ago

I mean, if you have the $3 million to spend on a rack of them, I'm sure they would allow you to have them.

I do wonder what happens a few years down the road, when everyone is replacing their GPUs with the latest and greatest variants. What happens to the old racks? Do they get sold for pennies on the dollar because everyone else doing AI wants cutting edge?

[-] LoremIpsumGenerator@lemmy.world 2 points 1 week ago

So this is where our future RAM went? Fuck this planet then 🤣

[-] fubarx@lemmy.world 1 points 1 week ago

Question is, how long before it makes it to the next DGX Spark? Some people don't have $10B to burn.

[-] Hadriscus@jlai.lu 1 points 1 week ago

Can't wait for it to hit the secondhand market in November

[-] RizzRustbolt@lemmy.world 1 points 1 week ago

But can it run Crysis?
