That depends entirely on your load. A common approach is sharding. Often memcache can help, too.
I work for a company that handles this in a few ways. We set up read replicas to handle large read queries and offload reads from the primary server. Data is replicated to the read replicas, so reporting can run against those servers without adding load to the primary.
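The read-replica approach above usually comes down to routing in the application: writes go to the primary, reads go to a replica. Here is a minimal sketch of that read/write split in Python; the DSNs and the `dsn_for` helper are hypothetical placeholders, not part of any real setup.

```python
import itertools

# Hypothetical connection strings; replace with your own.
PRIMARY_DSN = "postgresql://primary.internal/app"
REPLICA_DSNS = [
    "postgresql://replica1.internal/app",
    "postgresql://replica2.internal/app",
]

# Round-robin over the replicas so read load is spread evenly.
_replica_cycle = itertools.cycle(REPLICA_DSNS)

def dsn_for(query: str) -> str:
    """Send SELECTs to a replica; everything else goes to the primary."""
    is_read = query.lstrip().lower().startswith("select")
    return next(_replica_cycle) if is_read else PRIMARY_DSN
```

In practice the same idea is often handled by a proxy (e.g. PgBouncer plus a router, or ProxySQL for MySQL) rather than in application code, and note that replicas lag slightly behind the primary, which matters for read-after-write consistency.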
The second approach is sharding. Sharding breaks a large table into smaller, more manageable chunks, distributing them across systems. This reduces the burden on any one server, improves performance, and enables scaling out as data or traffic increases.
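The core of sharding as described above is a deterministic mapping from a row's key to one of the shards. A common way to do that is hash-based routing; this is a minimal sketch (the function name and shard count are illustrative assumptions):

```python
import hashlib

def shard_for(key: str, num_shards: int = 4) -> int:
    """Map a key to a shard deterministically by hashing it.

    The same key always lands on the same shard, so all reads and
    writes for that key hit one server.
    """
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards
```

One design caveat: with plain modulo hashing, changing `num_shards` remaps almost every key, which forces a large data migration. Schemes like consistent hashing exist to limit how much data moves when shards are added or removed.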
A common approach is something like a UV 3000.
Likely not what you want, but it is important to remember that there are ways to solve it with money.
By UV 3000 you presumably don't mean the ultraviolet lamp that fills the first page of Google results for that term? I doubt UV, whatever it is, is a common approach.
I apologize for not providing a link (https://en.m.wikipedia.org/wiki/Altix). I am not quickly finding specs.
These were SGI Altix systems, from before HP bought SGI: tightly integrated clusters that operated as a single NUMA space. They are/were often used to host databases with massive shared memory.
The smaller systems had lower model numbers, and older generations also had numerically lower numbers. The UV 1000 was two models previous to the UV 3000; the UV 100 was the same generation as the 1000, but smaller.
If I recall correctly, the UV 100 had 3 TiB of RAM. These are very old now, and only an example. The UV 3000 had far more RAM and CPUs.
A modern single non-UV server, maxed out, can hit over 1 TiB of RAM (I have not specced one in a while). Expect such a server to cost over $20K; anything less means one is in the wrong section of the store.
Edit: clarifying a point
Edit2: Just checked, and a single server can hit 3 TiB of RAM with 128 cores for around $53K. Weigh that $53K against the employee time required for any other solution.