Why NixOS is the Future - YouTube
(youtube.com)
…and also only once you've invested multiple weekends migrating your whole setup and config to a completely new syntax/concept, and invested the necessary time and brainpower to learn everything related.
That's not entirely true, unless you choose to nixify everything. You can just have a basic Nix configuration that installs whatever programs you need, then use stow (or whatever symlink manager you prefer) to manage the rest of your config.
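For illustration, a minimal sketch of that split, assuming NixOS (the package names are just examples; your list will differ):

    # /etc/nixos/configuration.nix -- Nix only installs the programs;
    # dotfiles stay in a plain git repo managed with stow
    { pkgs, ... }: {
      environment.systemPackages = with pkgs; [
        git
        stow
        neovim
        firefox
      ];
    }

Rebuild, then symlink your configs as usual, e.g. cd ~/dotfiles && stow nvim.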
You can't entirely forget that you're on NixOS because of FHS noncompliance, but even then, getting
nix-ld
to work doesn't require a lot of effort.

nix-ld has been really helpful. I wish there were some automated tool where you could feed it a binary, or a directory of binaries, and it would just return all of the nix package names you should include with nix-ld.
Also, if there were some additional flags to filter out packages made redundant by overlapping recursive dependencies, or to suggest a decent starter set of meta-packages for desktop environments, that'd be handy.
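For reference, enabling nix-ld itself is only a couple of lines; the library list below is just a guessed starter set, which you extend whenever a binary complains about a missing .so:

    # configuration.nix -- nix-ld with a small starter set of libraries
    { pkgs, ... }: {
      programs.nix-ld.enable = true;
      programs.nix-ld.libraries = with pkgs; [
        stdenv.cc.cc.lib   # libstdc++ and friends, needed by most prebuilt binaries
        zlib
        openssl
      ];
    }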
https://github.com/Lassulus/nix-autobahn and specifically its
nix-autobahn-find-libs
comes pretty close, at least? Were you aware of that already, and is there something missing?

Indeed, I was unaware of this project. The commit history looks inactive, but I'm guessing it's feature-complete? Looks like someone has rewritten it with an added TUI:
I'd say it's fairly feature-complete, but not super polished, as are so many one-person projects. I still find it very useful; the author is also still active in the community and pretty responsive :)
It's a steep learning curve, but because much of the community publishes their personal configs, I find it a lot simpler to browse public code repos with complete declarative examples to achieve a desired setup than it is to follow meandering tutorials that subtly gloss over steps or omit prerequisite assumptions and initial conditions.
There are also plenty of outcroppings and plateaus buttressing the learning cliff that one can comfortably camp at. Once you've got a working MVP to boot and play from, you can experiment and explore at your own pace, and just reach for escape hatches like dev containers, Flatpaks, or AppImages when you don't feel the juice is worth the squeeze just yet.
The community publishing its configs sometimes confuses things even more, because everyone does the same things differently, some approaches are deprecated, some are experimental, and I was lost more than once while trying to make sense of it.
I like Nix, and I use it on my Mac and in production for cross-compiling a service, but man, is it a pain to fix issues. And that's beside the fact that, for some reason, Nix behaves a bit differently on my machine than on my co-workers', when the only thing I wanted from it was to be absolutely reproducible.
Yep, with a Turing-complete DSL, there's never just one way to do something in Nix. I find the interaction between modules and overlays particularly quirky, and tricky to replicate from public configs that make advanced use of both.
That said, I do appreciate being able to git blame into public configs, as most will include insightful commit messages or references to ticketed issues that include more discussion with informed community members you can follow up with. Being able to peek at how others fixed something before and after helps give context, and with the commits being timestamped, it also helps gauge current relevancy or chronological order to correlate with upstream changelogs.
Are you using flakes with lock files, or npins to pin down the hashes of your nix channel inputs? I like pinning my machines to the same exact inputs so that my desktop can serve as a warm local cache when upgrading my laptop.
Personally I use flakes.
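A minimal flake gives you that pinning for free: the flake.lock written on first evaluation records the exact nixpkgs revision, and keeping it in the same repo means every machine resolves identical inputs. A sketch:

    # flake.nix -- the generated flake.lock pins nixpkgs to an exact revision
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
      outputs = { self, nixpkgs }: {
        nixosConfigurations.desktop = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [ ./configuration.nix ];
        };
      };
    }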
At work we use an abomination that creates a flake.lock but then parses it and uses it to pin versions. It took me a while to realise this is why setting a flake input to something local never seemed to have any effect, for instance.
I'm using flakes as well, so that abomination sounds terrifying...
I think it's based on an old
flake-compat
package or something. It's not inherently bad, but it displays what I dislike the most about Nix's design: it's very opaque and magical until you go out of your way to understand it.

The globals are another example of this. I know I can do
with something; [ other ]
but I am never sure whether other comes from something or not. And if it's a package parameter, the values also come seemingly out of nowhere.

Pretty accurate
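A tiny, contrived illustration of that ambiguity, reusing the names above: lexical bindings silently take priority over with, so the same identifier can resolve to two different places:

    let other = "a local value"; in
    with something; [ other ]
    # 'other' resolves to the let-binding, not to something.other --
    # 'with' loses to lexical scope, and nothing warns you.
    # The explicit form, [ something.other ], removes the guesswork.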
But it is worth it for me
Also, if people honestly try to help and share understandable configs, it is way easier. Some people escalate quite a bit and make a whole computer program out of their configs XD
codeberg.org/boredsquirrel/NixOS-Config
And your extraordinary result after all that is… exactly what you would've gotten in a few minutes downloading another distro.
However, you then don't have to remember every change you made when you eventually migrate to a new machine, or replicate your setup across your laptop and desktop while keeping them synchronized. It takes me a few hours to set up and verify that everything is how I need it on a normal distro, though that may be a byproduct of my system requirements. Re-patching and packaging kernel modules on Debian for odd hardware is not fun, nor is manually fixing udev and firewall rules for the same projects again and again.
This is what people don't fully understand. Last week I was setting up a new machine. All it took was one command, and not even 10 minutes later it was in a state fully identical to my main machine. No manual dotfiles, no install scripts, no anything.
This is such an interesting use case which I completely don't understand.
Every time I set up a new machine, it has a different configuration. I'm not setting up postfix or Caddy on every server I stand up; I certainly don't want all of the software I install on my desktop to be installed on my servers, and my desktop has a wildly different configuration than my laptop (which is optimized for battery life). Even in corporate environments, "cloning" systems is an exception rather than a rule, IME.
I have an rsync config for the few $HOME things that get cloned, but most of those experience drift based on the demands of the system. Sure, .gnupg and .ssh are invariable, but .zshrc and even .tmux.conf are often customized for the machine. Other than that, there are only a handful of software packages I consistently install everywhere: yay, helix, zsh, mosh, tmux, ripgrep, fd, gnupg, Mercurial, and Go. I mean, maybe a couple more, but no more than a dozen; I've never felt the need for an entire OS to run a single yay -S command.

Firewalls differ on nearly every machine. Wireguard configs absolutely differ on every machine. The differences are more common than the similarities.
I completely believe that you find cloning useful; I struggle to imagine how, and where Puppet wouldn't work better. Can you clarify how your environment benefits from cloning like this? I feel as if I'm missing a key puzzle piece.
Let's say you're building a gaming desktop, and after a day of experimentation with Steam, Wine, and mods, you finally have everything running smoothly except for HDR and VRR. While you still remember all your changes, you commit your setup commands to a Puppet or Chef config file. Later you use Puppet to clone your setup onto your laptop, only to realize that installing gamescope and some other random packages was the source of the VRR issues, as your DE also handles HDR fine natively. So you remove them from the package list in the Puppet file, but then you have to express some complex logic to opportunistically remove the set of conflicting packages where already installed, so that you don't have to manually fix every machine you apply your Puppet script to. Rinse and repeat for every other small paper cut.
I find a declarative DSL easier to work with for managing system state than a sequence of instructions run from arbitrary initial conditions: removing a package or module from your Nix config effectively reverts it from your system, making experimentation much simpler and free of unforeseen side effects. I don't even use Nix to manage my home or dotfiles yet; simply having a deterministic system install to build on top of has been helpful enough.
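Concretely, in the gaming-desktop example above, all the Puppet-style "remove if present" logic collapses to deleting a line (gamescope standing in for whichever package turned out to be the culprit):

    environment.systemPackages = with pkgs; [
      wineWowPackages.stable
      # gamescope   # removed: gone after the next nixos-rebuild switch,
                    # with no uninstall logic and no leftover state
    ];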
Interesting. I mostly handle this sort of stuff with a combination of snapper and Stow. I can see how you might prefer doing all of that work up front, though.
You have another misconception entirely misleading your understanding of what's possible here. Just because I said I've set up an exact clone, it doesn't mean that's the only way to set it up. My configuration manages 6 different machines, all with different options.
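For anyone picturing how that works: one flake can emit several nixosConfigurations that share common modules and diverge per host (the file names here are illustrative):

    outputs = { nixpkgs, ... }: {
      nixosConfigurations = {
        desktop = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [ ./common.nix ./hosts/desktop.nix ];
        };
        laptop = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [ ./common.nix ./hosts/laptop.nix ];  # battery tuning lives here
        };
      };
    };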
I was mostly joking, of course. I appreciate the use case. It's just that 99% of people spin up a new machine once every decade. Having a reproducible setup is of interest to a very narrow band of system managers.
I truly believe that for those who are spinning up new hardware every day and need an identical setup every time, a system image is far more practical, with much more robust tooling available. I've read the other replies, and for all of them I notice that using Universal Blue to package and deploy a system image would take a tiny fraction of the time it takes just to learn basic Nix syntax. It's so niche it seems almost not worth any of the effort to learn.
Sometimes it's also the updates: rolling back a failed update is much simpler with Nix, even if it took some elaborate setup. This might not be wildly useful, but it happens more often than spinning up a new machine entirely.
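For the curious, the rollback itself is a one-liner, since every rebuild keeps the previous system generation around:

    # list the kept system generations, then revert to the previous one
    sudo nix-env -p /nix/var/nix/profiles/system --list-generations
    sudo nixos-rebuild switch --rollback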
I think the other 99% would appreciate having some deterministic config, and not necessarily even using Nix either.
I'm kind of perplexed as to why no other distro has already supported something similar. Instead of necessitating filesystem-level disk snapshots, if the OS is already fully aware of which packages the user has installed, which cron jobs and systemd services they've scheduled, and their desktop environment settings and XDG dotfiles, any Debian- or Fedora-based distro could already export something like an archive tarball that encapsulates 99% of your system, and that could still probably fit on a floppy disk. Users could back that file up regularly with their other photos and documents, and simplify system restoration if their laptop is ever stolen or their hard drive crashes.
I think Apple and Android ecosystems already support this level of system restoration natively, and I think it'd be cool if Linux desktops in general could implement the same user ergonomics.
That would be super rad. But it is also the kind of thing that only a tiny group of people like us enjoys tinkering with. The average computer user has no interest whatsoever in being a sysadmin. If the service is offered and neatly packaged, they will use and enjoy it. But Nix manages to be even more user-hostile than old-style package management.
Same story. The SSD of my work machine crashed, and 30 minutes after the replacement I was ready for work with everything customized and configured.
A new node for my cluster arrives? 30 minutes later, the new one is set up and integrated into my k8s home setup, reusing complete profiles combined with files for hardware specifics.
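In case the pattern isn't obvious to non-Nix readers, such a per-host file can be little more than two imports plus a hostname (the file names here are made up):

    # hosts/node4.nix -- shared profile + machine-specific hardware file
    {
      imports = [
        ./node4-hardware-configuration.nix   # generated by nixos-generate-config
        ../profiles/k8s-worker.nix           # the reusable cluster profile
      ];
      networking.hostName = "node4";
    }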
I can even upgrade across major versions fearlessly, and I've had zero problems over the last few years.
This makes no sense