Why NixOS is the Future - YouTube (youtube.com)
However, you then don't have to remember every change you made when you eventually migrate to a new machine, or when you replicate your setup across your laptop and desktop while keeping them synchronized. On a normal distro it takes me a few hours to set everything up and verify it's how I need it, though that may be a byproduct of my system requirements: re-patching and packaging kernel modules on Debian for odd hardware is not fun, nor is manually fixing the same udev and firewall rules again and again.
This is what people don't fully understand. Last week I was setting up a new machine. All it took was one command, and less than ten minutes later it was in a state fully identical to my main machine. No manual dotfiles, no install scripts, no anything.
This is such an interesting use case, which I completely don't understand.
Every time I set up a new machine, it has different configurations. I'm not setting up postfix or Caddy on every server I stand up; I certainly don't want all of the software I install on my desktop to be installed on my servers, and my desktop has a wildly different configuration than my laptop (which is optimized for battery life). Even in corporate environments, "cloning" systems is an exception rather than a rule, IME.
I have an rsync config for the few $HOME things that get cloned, but most of those experience drift based on the demands of the system. Sure, `.gnupg` and `.ssh` are invariable, but `.zshrc` and even `.tmux.conf` are often customized for the machine. Other than that, there are only a handful of software packages I consistently install everywhere: yay, helix, zsh, mosh, tmux, ripgrep, fd, gnupg, Mercurial, and Go. Maybe a couple more, but no more than a dozen; I've never felt a need for an entire OS just to run a single `yay -S` command.
Firewalls differ on nearly every machine. Wireguard configs absolutely differ on every machine. The differences are more common than the similarities.
I completely believe that you find cloning useful; I struggle to imagine how, and where Puppet wouldn't work better. Can you clarify how your environment benefits from cloning like this? I feel as if I'm missing a key puzzle piece.
Let's say you're building a gaming desktop, and after a day of experimentation with Steam, Wine, and mods, you finally have everything running smoothly except for HDR and VRR. While you still remember all your changes, you commit your setup commands to a Puppet or Chef config file. Later you use Puppet to clone your setup onto your laptop, only to realize that installing gamescope and some other random packages was the source of the VRR issues, since your DE also handles HDR fine natively. So you remove them from the package list in the Puppet file, but then you have to express extra logic to opportunistically remove the conflicting packages wherever they are already installed, so that you don't have to manually fix every machine you apply your Puppet script to. Rinse and repeat for every other small paper cut.
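To illustrate the point above: in Puppet, removal is itself a state that has to be written down and shipped to every node — deleting the original `package` resource from your manifest is not enough. A minimal sketch (the gamescope example is taken from the scenario above; this is not anyone's actual manifest):

```puppet
# It is not enough to delete the original "ensure => installed" resource;
# nodes that already ran it keep the package. Removal must be declared
# explicitly and kept in the manifest until every node has converged.
package { 'gamescope':
  ensure => absent,
}
```

This is exactly the "complex logic" described above: the manifest accumulates tombstones for every past mistake.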
I find a declarative DSL easier to work with for managing system state than a sequence of instructions applied to arbitrary initial conditions: removing a package or module from the Nix config effectively reverts it from your system, which makes experimentation much simpler and free of unforeseen side effects. I don't even use Nix to manage my home directory or dotfiles yet; simply having a deterministic system install to build on top of has been helpful enough.
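For readers who haven't seen it, a rough sketch of what that declarative config looks like (a fragment in the shape of `/etc/nixos/configuration.nix`; the package choices are illustrative, not the commenter's actual setup):

```nix
{ config, pkgs, ... }:
{
  # The system's package set is a declared list. Deleting a line here
  # and rebuilding removes that package from the system path; no
  # separate "uninstall" step ever needs to be expressed.
  environment.systemPackages = with pkgs; [
    wine
    gamescope
  ];

  programs.steam.enable = true;
}
```

Experimentation becomes: add a line, rebuild, test; if it caused the problem, delete the line and rebuild.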
Interesting. I mostly handle this sort of stuff with a combination of snapper and Stow. I can see how you might prefer doing all of that work up front, though.
You have another misconception entirely misleading your understanding of what's possible here. Just because I said I've set up an exact clone doesn't mean that's the only way to set it up. My configuration manages six different machines, all with different options.
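The usual way to manage several distinct machines from one configuration is a flake with per-host modules layered over shared ones. A sketch under that assumption (the hostnames and file paths here are invented for illustration):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      # Each host composes shared modules with its own overrides,
      # so the machines are related but not identical clones.
      desktop = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./common.nix ./hosts/desktop.nix ];
      };
      laptop = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./common.nix ./hosts/laptop.nix ./modules/powersave.nix ];
      };
    };
  };
}
```

`nixos-rebuild switch --flake .#laptop` then builds that one host's composition.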
I was mostly joking, of course. I appreciate the use case. It's just that 99% of people spin up a new machine once a decade; a reproducible setup is of interest to a fairly narrow band of system managers.
I truly believe that for those who are spinning up new hardware every day and need an identical setup every time, a system image is far more practical, with much more robust tooling available. Reading the other replies, I notice that for all of them, using Universal Blue to package and deploy a system image would take a tiny fraction of the time it takes just to learn basic Nix syntax. It's so niche it seems almost not worth the effort to learn.
Sometimes it's also the updates: rolling back a failed update is much simpler with Nix, even if it took some elaborate setup. That may not be wildly useful, but it happens more often than spinning up a new machine entirely.
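For context, this is what rollback looks like in practice — a sketch of the standard NixOS commands (they only make sense on a NixOS system):

```shell
# Every nixos-rebuild creates a new system "generation"; list them:
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

# Switch the running system back to the previous generation:
sudo nixos-rebuild switch --rollback

# Older generations also show up as bootloader entries, so even an
# update that breaks booting can be backed out from the boot menu.
```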
I think the other 99% would appreciate having some deterministic config too, and not necessarily via Nix either.
I'm kind of perplexed that no other distro already supports something similar. Instead of necessitating filesystem-level disk snapshots: the OS already knows which packages the user has installed, which cron jobs and systemd services they've scheduled, and their desktop environment settings and XDG dotfiles, so any Debian- or Fedora-based distro could already export an archive tarball that encapsulates 99% of a system and would probably still fit on a floppy disk. Users could back that file up regularly with their photos and documents, simplifying system restoration if their laptop is stolen or their hard drive crashes.
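As a rough sketch of the idea, assuming a Debian-based system — the file names and manifest layout here are made up, and real tooling would track XDG config far more carefully:

```shell
#!/bin/sh
# Sketch: export a small "system manifest" capturing installed packages,
# scheduled jobs, and a few dotfiles. Illustrative only.
set -eu
out="${1:-system-manifest}"
mkdir -p "$out"

# Manually installed packages (falls back if apt-mark is unavailable)
apt-mark showmanual > "$out/packages.txt" 2>/dev/null \
  || dpkg --get-selections > "$out/packages.txt" 2>/dev/null \
  || echo "package manager not detected" > "$out/packages.txt"

# User cron jobs and enabled systemd units, where available
crontab -l > "$out/crontab.txt" 2>/dev/null || true
systemctl list-unit-files --state=enabled --no-legend \
  > "$out/enabled-units.txt" 2>/dev/null || true

# A few common dotfiles, if present
for f in .bashrc .profile .gitconfig; do
  [ -f "$HOME/$f" ] && cp "$HOME/$f" "$out/" || true
done

tar -czf "$out.tar.gz" "$out"
```

The resulting tarball is exactly the kind of artifact that could ride along with a user's regular document backups.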
I think the Apple and Android ecosystems already support this level of system restoration natively, and it'd be cool if Linux desktops in general implemented the same user ergonomics.
That would be super rad. But it's also the kind of thing that only a tiny group of people like us enjoys tinkering with. The average computer user has no interest whatsoever in being a sysadmin. If the service is offered and neatly packaged, they will use and enjoy it. But Nix manages to be even more user-hostile than old-style package management.
Same story. The SSD of my work machine crashed, and 30 minutes after the replacement I was ready for work with everything customized and configured.
A new node for my cluster arrives? 30 minutes later it's set up and integrated into my k8s home setup, reusing complete profiles combined with files for the hardware specifics.
I can even upgrade major versions fearlessly, and have had zero problems these last years.