Meta genai org in panic mode (www.teamblind.com)
[-] MLRL_Commie@hexbear.net 5 points 1 week ago

But what is to stop Nvidia from just shifting to "wow, now that more can be done with less, we will do even more with EVEN MORE CARDS," applying the improvements from DeepSeek to their own models while continuing to ramp up in the same way?

Eventually there will be a pop, but I genuinely don't get how this ushers in that pop. I guess the diminishing returns of these systems are hastened? As in, adding more cards with more resources will have continuously less effect. But for now, companies will still want even the smallest edge by running DeepSeek-style-improved AI on even bigger data centers.

[-] yogthos@lemmygrad.ml 5 points 1 week ago

That's what OpenAI originally thought when they started working on GPT-5: they figured they'd just make the model bigger and it would do more. Turns out that making the model bigger doesn't actually produce better results. We're also at a point now where most of the publicly available information has already been scraped. The focus is now shifting towards improving the algorithms for making sense of the data, as opposed to just stuffing more data into the model. And this is a problem for Nvidia, because the current generation of chips is already good enough for that.
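The diminishing-returns point can be sketched with a toy power-law loss curve, in the spirit of published scaling-law papers. The constants below are made up for illustration, not fitted to any real model:

```python
# Toy scaling-law sketch: loss = E + A / N**alpha, where N is parameter count.
# E, A, alpha are invented illustrative values, NOT real fitted constants.
E, A, ALPHA = 1.7, 400.0, 0.34

def loss(n_params_billions):
    """Hypothetical loss for a model with n_params_billions parameters."""
    return E + A / (n_params_billions * 1e9) ** ALPHA

prev = None
for n in [1, 10, 100, 1000]:  # billions of parameters
    cur = loss(n)
    gain = (prev - cur) if prev is not None else 0.0
    print(f"{n:>5}B params: loss {cur:.3f} (gain over previous size: {gain:.3f})")
    prev = cur
```

Each 10x jump in parameter count buys a smaller loss improvement than the last, which is the shape of the argument above: scale alone keeps helping, but by less and less.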

Of course, people will find ways to use more processing power, as is always the case. But at least in the near term, it's no longer the bottleneck.

[-] MLRL_Commie@hexbear.net 3 points 1 week ago

Interesting, I was unaware that we had already hit the limits of the available data. That makes sense, then. I hope your analysis is correct; it'd be good news.

this post was submitted on 24 Jan 2025
53 points (100.0% liked)