Writing to an SSD wears the SSD out; however, data saved to an SSD is persistent, meaning it isn't lost when the SSD doesn't get any power. Writing to RAM doesn't damage it, and it's also quicker. However, data saved in RAM is not persistent, meaning that all of it is lost as soon as the RAM is not connected to a power source. Also, RAM is a lot more expensive than SSD storage.
RAM is already used to avoid writing to (or reading from) the SSD or HDD when possible; the concept is called "caching" (see the sketch below).
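To make that concrete, here's a minimal sketch in Python, assuming a plain dict as the in-RAM cache and a made-up block size and entry limit; the OS page cache does the same thing far more cleverly and automatically.

```python
# Minimal sketch of read caching: a dict in RAM sits in front of slow disk reads.
# MAX_ENTRIES and the 4 KiB block size are made-up values for illustration.

cache = {}          # lives in RAM: fast, but gone after a power cycle
MAX_ENTRIES = 1024  # hypothetical cap so the cache doesn't grow forever

def read_block(path, offset, size=4096):
    """Return `size` bytes from `path`, touching the disk only on a cache miss."""
    key = (path, offset, size)
    if key in cache:                   # cache hit: no SSD/HDD access at all
        return cache[key]
    with open(path, "rb") as f:        # cache miss: read from the persistent device
        f.seek(offset)
        data = f.read(size)
    if len(cache) >= MAX_ENTRIES:      # crude eviction: drop an arbitrary entry
        cache.pop(next(iter(cache)))
    cache[key] = data
    return data
```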
Even if it's powered, RAM will lose its data within roughly a tenth of a second. RAM doesn't just require power, it requires that your computer constantly read and rewrite it, so every 64 ms your computer has to read every gigabyte of RAM and write it back (rough numbers below).
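Back-of-the-envelope arithmetic behind that, assuming the common JEDEC-style scheme of 8192 refresh commands spread over every 64 ms window (exact figures vary by DRAM generation and temperature):

```python
# Back-of-the-envelope numbers for the 64 ms refresh window, assuming the
# typical scheme of 8192 refresh commands per window (varies by DDR generation).

retention_window_ms = 64   # every cell must be rewritten within this window
refresh_commands = 8192    # refresh (REF) commands issued per window

trefi_us = retention_window_ms * 1000 / refresh_commands
per_second = 1_000_000 / trefi_us

print(f"one refresh command every ~{trefi_us:.2f} µs")      # ~7.81 µs
print(f"~{per_second:,.0f} refresh commands every second")  # ~128,000
```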
Doesn't the RAM do that itself? Otherwise reading/writing all that data would waste tons of CPU time.
Yes - it's been the job of the DRAM controller for almost the entire history of computing. But that's still a part of the computer, and if it stops working, your RAM will go blank in a fraction of a second.
It's been a very long time since my computer engineering course, and we didn't cover this topic specifically, but I highly doubt it's a full dump and reload. What likely happens is that each RAM address has a TTL flag or some other way for the CPU to know when to rewrite the data, and it does so as needed.
Plus, the bus between the CPU and RAM is ridiculously fast. Your PC could dump and reload all of its RAM in roughly the time it takes you to blink (rough numbers below). And, with multiple cores, the task can be allocated to a single core or divided up among all of them.
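For scale, here's that calculation under assumed specs (16 GB of RAM, dual-channel DDR4-3200 at ~51.2 GB/s peak bandwidth); a blink is usually quoted at 100-400 ms, so a full round trip at peak speed lands in the same ballpark, and faster DDR5 setups get closer still:

```python
# Assumed machine for illustration: 16 GB of RAM, dual-channel DDR4-3200.
ram_gb = 16
peak_bandwidth_gb_s = 51.2      # 2 channels x 3200 MT/s x 8 bytes per transfer

read_time_s = ram_gb / peak_bandwidth_gb_s
round_trip_s = 2 * read_time_s  # read everything, then write it all back

print(f"read every byte once:      ~{read_time_s:.2f} s")   # ~0.31 s
print(f"read it and write it back: ~{round_trip_s:.2f} s")  # ~0.62 s
```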
Modern RAM just needs to be told to refresh; the device itself will go through the refreshing process. But the whole array needs to be refreshed - there's no LRU scheme to track which bank or row was last accessed (toy model below).
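A toy model of that blanket refresh, with made-up bank and row counts; the point is just that the refresh pointer walks every row in a fixed order, regardless of what was recently accessed:

```python
# Toy model: refresh walks the whole array in a fixed order, no LRU involved.
# Bank/row counts are made-up example values, not a real chip's geometry.

BANKS = 8
ROWS_PER_BANK = 8192

# memory[bank][row] holds that row's contents (just an int here for simplicity)
memory = [[0] * ROWS_PER_BANK for _ in range(BANKS)]
next_row = 0  # internal refresh counter kept by the DRAM device

def refresh_tick():
    """One refresh command: rewrite the same row in every bank, then advance."""
    global next_row
    for bank in range(BANKS):
        memory[bank][next_row] = memory[bank][next_row]  # read it, write it straight back
    next_row = (next_row + 1) % ROWS_PER_BANK            # wrap around and start over
```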
Starting with DDR3 it's not so easy. Density is so high that reading or writing one row disturbs cells in adjacent rows. Partial target row refresh (PTRR) counters this: any access to a row is followed by a refresh of the adjacent rows. Flaws in this process in early DDR3 controllers were at the heart of rowhammer exploits, where repeated accesses to a memory location could flip bits in physically adjacent memory, even if it's not privileged. IIRC DDR4 pulled the PTRR process into the RAM's built-in refresh circuitry, so it's transparent to the memory controller.
At least on older x86 motherboards, there used to be a DRAM refresh interrupt. It would trigger every 15 or so microseconds and tell the DRAM controller to do a bus hold request and then refresh the RAM. This bus hold request means the CPU can't access the RAM while this happens (it can still run stuff from the cache), but at least you aren't wasting as much CPU time on DRAM refresh this way.