52 points · submitted 7 months ago by Greyfoxsolid@lemmy.world to c/asklemmy@lemmy.ml

For anyone who knows.

Basically, it seems to me like the technology in mobile GPUs is crazier than in desktop/laptop GPUs. Desktop GPUs obviously do better graphically, but not by enough to justify being 100x bigger than a mobile GPU. And top-end mobile GPUs actually perform quite admirably when it comes to graphics and power.

So, considering that, why are desktop GPUs so huge and power hungry in comparison to mobile GPUs?

[-] Dudewitbow@lemmy.zip 41 points 7 months ago* (last edited 7 months ago)

because it's based on a curve. laptops have maybe 85% of the performance of their desktop counterpart, because that last 15% of performance is not power efficient.

you are also disregarding one MAJOR factor when comparing desktop and laptop GPUs: noise.

laptop GPUs, especially high-end ones, can sound like jet engines. large desktop GPUs are large to minimize the noise they make.

e.g. my 7700S in my Framework 16 can sound like a jet engine, while its desktop equivalent, the 7600, is ridiculously power efficient and barely makes a noise because of the heatsink-to-die size ratio.
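the diminishing-returns curve from the first paragraph can be sketched with a toy model (the curve shape and all constants below are illustrative assumptions, not measured data):

```python
# Toy model: GPU performance vs. power shows diminishing returns.
# Here performance scales with the cube root of power draw --
# an illustrative curve, not a measurement of any real GPU.

def relative_perf(power_watts, full_power=300.0):
    """Performance relative to the full-power desktop part (1.0 = 100%)."""
    return (power_watts / full_power) ** (1 / 3)

# A laptop part capped at 60% of the desktop part's power budget
# still lands around 84% of its performance:
laptop = relative_perf(180)
desktop = relative_perf(300)

print(f"laptop: {laptop:.0%} of desktop perf at 60% of the power")
```

under a curve like this, clawing back the last ~15% of performance costs a hugely disproportionate amount of power, which is the trade-off the comment describes.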

[-] gramathy@lemmy.ml 10 points 7 months ago* (last edited 7 months ago)

Also, laptop GPUs tend to have less or "worse" memory for a variety of reasons (lower-resolution screens mean less need for VRAM or processing power, lower-power GDDR, lower RAM clocks, etc.). That 85% number holds in more than just straight rendering throughput.

[-] Dudewitbow@lemmy.zip 2 points 7 months ago

i wouldn't necessarily say that; there are times OEMs double the RAM capacity compared to the typical value on laptops, it's just less common today than it used to be because of the Nvidia tax.

take for example Maxwell, over a decade ago: desktop 750 Tis were usually 2GB VRAM cards, sometimes even 1GB. on mobile, the 860M/960M (the laptop equivalents) often had 4GB VRAM variants. laptop RAM, though, will be clocked more conservatively.

[-] d3Xt3r@lemmy.nz -1 points 7 months ago

Also, AMD APUs use your main RAM, and some systems even allow you to change the allocation - so you could allocate, say, 16GB as VRAM if you've got 32GB of RAM. There are also tools you can run to change the allocation, in case your BIOS doesn't have the option.

This means you can even run LLMs that require a large amount of VRAM, which is crazy if you think about it.

[-] Blaster_M@lemmy.world 1 points 7 months ago* (last edited 7 months ago)

Problem is, system RAM has nowhere near the bandwidth that dedicated VRAM does. You can run an AI model, but the performance will be roughly 10x worse due to the bandwidth limits.
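a rough back-of-envelope for why bandwidth dominates here: generating each token requires streaming essentially all of the model's weights through memory once, so token throughput is capped near bandwidth divided by model size. the bandwidth and model-size figures below are typical ballpark assumptions, not benchmarks:

```python
# Back-of-envelope: LLM token generation is roughly memory-bandwidth-bound,
# since every generated token reads all model weights once.
# Bandwidth figures are typical ballpark values, not benchmarks.

def max_tokens_per_sec(bandwidth_gb_s, model_size_gb):
    """Upper bound on tokens/sec if weight streaming is the bottleneck."""
    return bandwidth_gb_s / model_size_gb

model_gb = 14  # e.g. a 7B-parameter model at 16-bit precision

system_ram = max_tokens_per_sec(80, model_gb)    # dual-channel DDR5-ish
gpu_vram = max_tokens_per_sec(800, model_gb)     # dedicated GDDR6-ish

print(f"system RAM: {system_ram:.1f} tok/s")
print(f"GPU VRAM:   {gpu_vram:.1f} tok/s")
```

with these assumed numbers the dedicated-VRAM card comes out about 10x faster, matching the gap described above.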

this post was submitted on 02 Apr 2024
52 points (90.6% liked)