submitted 19 hours ago* (last edited 19 hours ago) by throws_lemy@lemmy.nz to c/linux@programming.dev
[-] Max_P@lemmy.max-p.me 8 points 18 hours ago

Some modern workloads can take advantage of multiple computers. Compilation, for example, can be spread across several machines with tools like distcc.
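
A minimal sketch of what driving that from Python could look like, assuming distcc and make are installed; the host names and job counts are placeholders:

```python
import os
import subprocess

# Hypothetical pool of build hosts; "host/limit" is distcc's own syntax
# for how many jobs each machine should take.
os.environ["DISTCC_HOSTS"] = "localhost/4 box1/8 box2/8"

# Wrap the compiler in distcc and run enough parallel jobs to keep
# every remote slot busy (4 + 8 + 8 = 20 here).
subprocess.run(["make", "-j20", "CC=distcc gcc"], check=True)
```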

If you join them into a Kubernetes cluster, you can run many replicas of one service, or many different workloads, scheduled across all the nodes.
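
For example, a sketch using the official kubernetes Python client, assuming a deployment named my-service already exists (the name, namespace, and replica count are placeholders):

```python
from kubernetes import client, config

# Load credentials from the usual ~/.kube/config.
config.load_kube_config()
apps = client.AppsV1Api()

# Ask for 10 copies of the workload; the scheduler spreads the pods
# across the cluster's nodes.
apps.patch_namespaced_deployment_scale(
    name="my-service",
    namespace="default",
    body={"spec": {"replicas": 10}},
)
```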

Parallelism is still an unsolved problem, though: we end up with single-core bottlenecks to this day, before other machines are even involved.
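
That's Amdahl's law in a nutshell; a quick back-of-the-envelope calculation (the 5% serial fraction is just an illustrative assumption):

```python
def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    """Best-case speedup on n cores when part of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Even 5% single-core work caps 64 cores at ~15x, and the limit as the
# core count goes to infinity is only 1 / 0.05 = 20x.
print(amdahl_speedup(0.05, 64))  # ≈ 15.4
```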

[-] sxan@midwest.social 3 points 7 hours ago

Yes. It's always the bandwidth that's the main bottleneck, whether between the CPU and memory, between processes (IPC), or over the network.
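
A rough sense of the gap, with ballpark figures I'm assuming rather than measuring (real numbers vary a lot by hardware):

```python
# Approximate peak bandwidths in GB/s -- illustrative assumptions,
# not benchmarks.
bandwidth_gb_s = {
    "L1 cache (per core)": 500.0,       # hundreds of GB/s, CPU-dependent
    "DDR5 memory (dual channel)": 80.0,
    "10 GbE network": 1.25,
}

network = bandwidth_gb_s["10 GbE network"]
for link, bw in bandwidth_gb_s.items():
    print(f"{link}: {bw:g} GB/s (~{bw / network:.0f}x the network)")
```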

Screw quantum computers; what we need is quantum entangled memory sharing at a distance. Imagine! Even if only within a single computer, all memory could be L1 cache.
