I used to use this when I still had a hard drive, but this does nothing for performance if you're on an SSD and profile writes are so few with browsers that it doesn't significantly affect drive wear. In the end, all this does is make it more likely that something will break.
Are you implying the tab backups are not written into the profile folder? Because I think 10-20 GB a day is still something to be concerned about. https://www.servethehome.com/firefox-is-eating-your-ssd-here-is-how-to-fix-it/
I have used Firefox in ram for a couple of months now without problems and am pretty happy with it.
Can someone back up my claim that 10-20GB writes per day is nothing for a modern SSD?
Edit: with a 256 TBW rating and 20 GB of writes per day, you get some 13,000 days, so write wear will hardly be the limiting factor in an SSD's lifespan.
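The arithmetic checks out (a quick sketch, using the 256 TBW and 20 GB/day figures above):

```shell
# Days until a 256 TBW drive hits its endurance rating at 20 GB/day
tbw_tb=256
daily_gb=20
days=$(( tbw_tb * 1024 / daily_gb ))   # endurance in GB / daily writes
years=$(( days / 365 ))
echo "${days} days, roughly ${years} years"   # 13107 days, roughly 35 years
```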
Wrong. Using inotifywait (from inotify-tools), you can see that FF does a bunch of reads and writes on every page load (mostly in /storage). This is with the about:config option to use RAM as cache enabled.
Every single webbrowser is one giant clusterfuck.
Is there a specific package I can install to increase my RAM?
No, I believe you have to download more RAM, actually. But what would I know, I'm just a proctologist.
No, this is wrong. I saw this documentary, 'Johnny Mnemonic' I think, and it specifically showed a computer scientist increasing his storage and RAM through software, but you need a special device to plug in to do it. I'm sure Best Buy sells it.
Yes! They also showed the amount of RAM was just a guideline and it's possible to "overfill" your RAM!
No, you are also wrong and need to rewatch that documentary. Sheer will and determination will allow you to double your RAM.
Also befriending a drugged up DARPA dolphin will be a massive boon too.
No, you are also wrong.
As a proctologist I recommend against this.
Have a look at https://wiki.archlinux.org/title/Zram - a compressed block device in RAM that can be formatted as swap. There are various tools to set it up; maybe your distro already includes one of them. And htop has a meter for it (besides zram's own zramctl tool), so you can see how effective the compression is.
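If your distro ships systemd's zram-generator, a minimal config sketch (file path and key names follow zram-generator's documentation; the values are just an example):

```ini
# /etc/systemd/zram-generator.conf
[zram0]
# device sized at half of physical RAM
zram-size = ram / 2
# zstd: good ratio/speed trade-off
compression-algorithm = zstd
```

After a reboot (or starting systemd-zram-setup@zram0.service), `zramctl` should show the device and its compressed size.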
Nah, I think the right way to do it is to go to some site (you can Google some) and download some RAM. They even make the link flash so it's easy to find. If you need more RAM, just download some more.
Finally, a way to use the loads of RAM I have other than Compiling and Blendering.
Well, I guess we also have RAM drives
Just reconfigured /etc/makepkg.conf to use extra cores and tmpfs.. I've been compiling on the SSD with one core for so long it's embarrassing.
While you're still in your makepkg.conf, don't forget to set march=native (and remove mtune) in your CFLAGS! (unless you're sharing your compiled packages with other systems)
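Put together, the relevant makepkg.conf lines might look like this (a sketch: the BUILDDIR path is an example, and remember native binaries only run on this CPU or a very similar one):

```shell
# /etc/makepkg.conf (relevant excerpts only)
# no -mtune: -march=native already tunes for this exact CPU
CFLAGS="-march=native -O2 -pipe -fno-plt"
CXXFLAGS="${CFLAGS}"
# use all cores
MAKEFLAGS="-j$(nproc)"
# build in tmpfs instead of on the SSD (/tmp is tmpfs on most systemd distros)
BUILDDIR=/tmp/makepkg
```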
Where's the difference between march=native and march=x86-64 in that case?
A ton of difference! The -march flag selects the machine architecture (instruction set) the compiler targets. "x86-64" is the baseline feature set, targeting the common x86_64 instructions found in early 64-bit CPUs, circa 2003. Since 2003 there have obviously been several advancements in CPUs and the x86_64 arch, and these have been classified into microarchitecture levels (or feature levels):
- x86-64-v2 (2008; includes the SSE3, SSE4 instructions and more)
- x86-64-v3 (2013; includes AVX, AVX2 and more)
- x86-64-v4 (2017; includes AVX512 mainly)
So if you're still on x86-64, you're missing out on some decent performance gains by not making use of all the newer instructions/optimisations made in the past two decades(!).
If you're on a recent CPU (2017+), ideally you'd want to be on at least x86-64-v3 (v4 has seemingly negligible gains, at least on Intel). There are also CPU-family-specific marches such as znver4 for AMD Zen 4 CPUs, which would be an even better choice than x86-64-v4.
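To get a rough idea of your own CPU's level, you can check its flags (a sketch: only a few representative flags per level are tested here, not the full level definitions):

```shell
# Roughly classify an x86-64 feature level from a CPU flag list
# (representative flags only; see the Wikipedia table for the full sets)
feature_level() {
    flags=" $1 "
    has() { case "$flags" in *" $1 "*) return 0 ;; *) return 1 ;; esac; }
    level=x86-64
    if has ssse3 && has sse4_2 && has popcnt; then level=x86-64-v2; fi
    if [ "$level" = x86-64-v2 ] && has avx2 && has fma && has bmi2; then
        level=x86-64-v3
    fi
    if [ "$level" = x86-64-v3 ] && has avx512f && has avx512bw; then
        level=x86-64-v4
    fi
    echo "$level"
}

# Apply it to this machine's first CPU entry
feature_level "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2-)"
```

On glibc 2.33+ you can also just run `/lib/ld-linux-x86-64.so.2 --help` and look for the "supported, searched" lines in the subdirectory listing.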
But the best march you'd want to use is of course native: this makes available all instructions and compiler optimisations specific to your particular CPU, for the best performance you can possibly get. The disadvantage of native is that any binaries compiled with it can run only on your CPU (or a very similar one), but that's only an issue for those who need to distribute binaries (like software developers), or if you're sharing your pkg cache with other machines.
Since the flags defined in makepkg.conf only affect AUR/manual source builds (and not the default core/extra packages), I'd recommend also reinstalling all your main packages from either the ALHP or CachyOS repos, in order to completely switch over to x86-64-v3 / v4.
Further reading on microarchitectures:
- https://www.androidauthority.com/what-is-x86-64-v3-3415395/
- https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels
Benchmarks:
cc: @luthis@lemmy.nz
Oh boy....
Total Download Size:    3390.65 MiB
Total Installed Size:  13052.08 MiB
Net Upgrade Size:        291.24 MiB
I wonder if I'm going to notice any better performance..
Can I also easily compile selected packages from the repositories fresh myself, e.g. Firefox? Or do I have to download their PKGBUILDs and run makepkg?
The repositories already contain pre-compiled packages. To install them, just add the repository before the Arch repos, and then simply reinstall the packages to install their optimised versions.
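For example, ALHP's entries in /etc/pacman.conf would look roughly like this (the repo names and mirrorlist path are assumptions from memory of ALHP's instructions - check the project README, and note you need their keyring and mirrorlist packages first):

```ini
# /etc/pacman.conf
# Optimised repos must be listed BEFORE the stock ones,
# so pacman prefers their packages.
[core-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist

[extra-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist

[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist
```

Then a full upgrade (pacman -Syuu) pulls in the optimised builds.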
How can I trust them? At least with Arch there's the "many eyes" principle.
Both CachyOS and ALHP are reasonably popular.
Never heard of them. I need to research a bit more before I activate what is basically another "dangerous" non-maintainer repository. Thanks a lot for your links and explanations!
holy shit!!! I'm definitely doing that!
Mount ~/.cache as tmpfs. Rarely needs a workaround for some offenders against the XDG spec, though.
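A minimal sketch for /etc/fstab (the username and size are placeholders):

```
# /etc/fstab -- keep ~/.cache in RAM; contents vanish at reboot,
# which is the point
tmpfs  /home/alice/.cache  tmpfs  noatime,nodev,nosuid,size=2G  0  0
```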
Btw the private browsing mode is also RAM-only which is a hard requirement for the Tor browser ("no disk policy")
thanks for reminding me. Didn't activate this on my new install since I got 64G of RAM :)
systemctl --user enable psd-resync.service
I think this is not needed, since psd.service has the following in it:
[Unit]
…
Wants=psd-resync.service