[-] nico@r.dcotta.eu 3 points 9 months ago

I recommend starting with the Zero to Nix docs and then moving on to nixos.wiki, but here is a minimal, working example that I could deploy to a Hetzner VPS that has only nix and ssh installed:

{ config, pkgs, ... }: {
  # auto-generated; partitions and the bootloader are set up in this separate file
  imports = [ ./hardware-configuration.nix ];
  zramSwap.enable = true;
  networking.hostName = "miki";
  # configures SSH daemon with a public key so we can ssh in again
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [ ''ssh-ed25519 AAAAC3NzaC1lNDI1NTE5AAAAIPJ7FM3wEuWoVuxRkWnh9PNEtG+HOcwcZIt6Qg/Y1jka'' ];
  # creates a timmy user with sudo access and wget installed
  users.users.timmy = {
    isNormalUser = true;
    extraGroups = [ "networkmanager" "wheel" ];  # "wheel" is what grants sudo on NixOS
    packages = with pkgs; [ wget ];
  };
  # open up the SSH and web ports
  networking.firewall.allowedTCPPorts = [ 22 80 443 ];
  # start nginx, assumes HTML is present at `/var/www`
  services.nginx = {
    enable = true;
    virtualHosts."default" = {
      forceSSL = true;            # Redirect HTTP clients to an HTTPS connection.
                                  # This needs a certificate: either set enableACME = true
                                  # (with a real serverName) or point sslCertificate and
                                  # sslCertificateKey at your own cert.
      default = true;             # Always use this host, no matter the host name
      root = "/var/www";          # Set the web root to serve content from
    };
  };
  system.stateVersion = "22.11";
}

This sets up a machine, configures the usual stuff like the ssh daemon, creates a user, and sets up an nginx server. To deploy it you would run nixos-rebuild --target-host root@10.0.0.1 switch. Other tools exist (I use colmena, but the idea is the same). Note how easy it was to set up nginx! If I were setting Nomad up, I would just do services.nomad.enable = true.
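
For the curious, a colmena deployment is just another Nix file describing each node; this is a minimal sketch (the IP and file paths are placeholders, not taken from my actual setup):

# hive.nix - deploy everything with `colmena apply`
{
  meta.nixpkgs = import <nixpkgs> { };

  # one attribute per machine; "miki" matches networking.hostName above
  miki = {
    deployment.targetHost = "10.0.0.1";   # placeholder address
    deployment.targetUser = "root";
    imports = [ ./configuration.nix ];    # the file shown above
  };
}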

As you can see, there are some things you will have to learn (the Nix language, what the config options are...), but I think it is worth it.

[-] nico@r.dcotta.eu 2 points 9 months ago

I struggled a bit to get it up and running well, but now I am happy with it. It's not too hard to deploy (at least easier than the alternatives), it has a CSI driver (which for me was a big deal), and it has erasure coding. The dev that maintains it (yes, the one dev) is very responsive.

It has trade-offs, so whether I recommend it depends on your needs. Backing store for stateful workloads like Postgres DBs? Absolutely not. Large S3 store (with an option for a filesystem mount) for storing lots of files? Yes! In that regard it's good for stuff like Lemmy's pictrs or Immich. I use it as my own Google Drive. You can easily replicate it within your own cluster, or back it up to an external cloud provider. You can mount it via FUSE on your personal machine too.

Feel free to browse through my setup - if you have specific questions I am happy to answer them.

[-] nico@r.dcotta.eu 4 points 9 months ago* (last edited 9 months ago)

I see no one else has mentioned my stack, so I suggest:

Nomad for managing containers if you want something high availability. Essentially the same as k8s but much much much simpler to deploy, learn, and maintain. Perfect for homelabs imo. Most of the concepts of Nomad translate well to k8s if you do want to learn it later. It integrates really well with Terraform too if you are also hoping to learn that, but it's not a requirement.
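
Under NixOS (next point), bringing up a Nomad node is only a few lines of config. A rough sketch, where the datacenter name and the three-server count are assumptions for a typical highly available cluster:

{
  services.nomad = {
    enable = true;
    enableDocker = true;            # let Nomad schedule Docker containers
    settings = {
      datacenter = "home";          # assumption: a single "home" datacenter
      server = {
        enabled = true;
        bootstrap_expect = 3;       # assumption: three servers for HA
      };
      client.enabled = true;
    };
  };
  # Nomad's default ports: HTTP API, server RPC, serf gossip
  networking.firewall.allowedTCPPorts = [ 4646 4647 4648 ];
}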

NixOS for managing the bare metal. It's a lot more work to learn than, say, Debian, but it is just as stable, and all configuration is defined as code, down to the bootloader config (no bash scripts!). This makes it super robust. You can also deploy it remotely. Once you grow beyond a handful of nodes it's important to use a config management tool, and Nix has been by far my favourite.
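
To give a flavour of "down to the bootloader": the bootloader is just more options in the same config (a sketch assuming a UEFI machine with systemd-boot):

{
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;
  # keep the last 10 generations in the boot menu so you can roll back
  boot.loader.systemd-boot.configurationLimit = 10;
}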

If you really want everything to be infra-as-code, you can manage cloud providers via Terraform too.

For networking I use WireGuard, and configure it with NixOS. Specifically, I have a mesh network where every node can reach every other node without extra hops. This avoids the single point of failure of a hub-and-spoke topology, where losing the hub disconnects your entire cluster.
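
One node of such a mesh looks roughly like this in NixOS (the addresses, key path, and the peer's key/endpoint are placeholders):

{
  networking.wireguard.interfaces.wg0 = {
    ips = [ "10.100.0.1/24" ];                # this node's address inside the mesh
    listenPort = 51820;
    privateKeyFile = "/etc/wireguard/private.key";
    # one peer block per other node, so every node reaches every node directly
    peers = [
      {
        publicKey = "<peer public key>";      # placeholder
        allowedIPs = [ "10.100.0.2/32" ];
        endpoint = "peer.example.org:51820";  # placeholder
        persistentKeepalive = 25;
      }
    ];
  };
  networking.firewall.allowedUDPPorts = [ 51820 ];
}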

Everything in my setup is defined 'as code', immutable, and multi-node (I have 7 machines), which seems to be what you want from what you say in your post. I'll leave my repo here, and I'm happy to answer questions!

--

My opinions on the alternatives:

Docker Compose is great but doesn't scale if you want high availability (i.e., having a container rescheduled on node failure). If you don't need that, anything more than Docker might be overkill.

Ansible and Puppet are alright but are super stateful and require scripting. If you want immutability, you will love Nix/NixOS.

k8s works (I use it at work) but is extremely hard to get right, even for well-resourced infra teams. Nomad achieves the same but with the learnings of having come afterwards, and without the historical baggage.

[-] nico@r.dcotta.eu 3 points 11 months ago* (last edited 11 months ago)

I have a similar use case where I also need my records to change dynamically.

Leng doesn't support nsupdate (feel free to make an issue!), but it does support changing the config file at runtime and having leng re-read it when you send it a SIGUSR1 signal. I have not documented this yet (I'll get to it today), but you can see the code here.

Alternatively, you can just reload the service like you do with pihole - I don't know how quick pihole is to start, but leng should be quick enough that you won't notice the interim period when it is restarting. This is what I used to do before I implemented signal reloading.

Edit: my personal recommendation is that you use templating to render the config file with your new records, then reload via SIGUSR1 or restart the service. nsupdate would make leng stateful, which is not something I want (I consider it an advantage that the config file specifies the server's behaviour exactly).
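
As a sketch of the reload route under NixOS, assuming leng runs as a systemd unit named "leng", you can make SIGUSR1 the unit's reload action:

{ pkgs, ... }: {
  systemd.services.leng.serviceConfig.ExecReload =
    "${pkgs.coreutils}/bin/kill -s SIGUSR1 $MAINPID";
  # now `systemctl reload leng` makes it re-read its config file without downtime
}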

[-] nico@r.dcotta.eu 12 points 11 months ago

I am working on adding a feature comparison to the docs. But in the meantime: leng has fewer features (no web UI, no DHCP server), which means it is lighter (50MB RAM vs 150MB for AdGuard, 512MB for Pi-hole) and easier to configure reproducibly because it is stateless (no web UI settings).

I believe Blocky and CoreDNS are better comparisons for leng than "tries to do it all" solutions like AdGuard, Pi-hole...

[-] nico@r.dcotta.eu 3 points 11 months ago

If it's helpful to you it's helpful in reality!

If you are having trouble installing or the documentation is not clear, feel free to point it out here or in the issues on github. Personally I think it is simplest to use docker :)

[-] nico@r.dcotta.eu 5 points 11 months ago

Thanks! I didn't know you could do that. I'll see how it compares to my current solution

[-] nico@r.dcotta.eu 2 points 11 months ago

Including SRV records? I found that some servers (blocky as well) only support very basic CNAME or A records, without being able to specify parameters like TTL, etc.

I also appreciate being able to define this in a file rather than a web UI
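
For reference, these are the kinds of entries I mean, in standard zone-file syntax with an explicit TTL (the names, addresses, and ports are placeholders):

_http._tcp.myservice.internal. 300 IN SRV 10 5 8080 node1.internal.
git.internal.                  300 IN A   10.100.0.2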

[-] nico@r.dcotta.eu 5 points 11 months ago

Ouch, thanks for catching that! Should be good now. Link here for the curious

[-] nico@r.dcotta.eu 3 points 11 months ago

Like chiisana@lemmy.chiisana.net said - I want to be able to add my own records (SRV, A, CNAME...) so that I can point to the services hosted in my VPN. CoreDNS is good for this but it doesn't also do adblocking. If PiHole can do this, I don't know how.

I also don't need a web UI, DHCP server, and so on: I just want a config file and some prometheus metrics

[-] nico@r.dcotta.eu 4 points 11 months ago

Yes (much simpler), and it also allows you to specify custom DNS records, which is very useful for more advanced self-hosted deployments - this is something Pi-hole is just not built to address.

157
submitted 11 months ago by nico@r.dcotta.eu to c/selfhosted@lemmy.world

A few months ago I went on a quest for a DNS server and was dissatisfied with the currently maintained projects. They were either good at adblocking (Blocky, grimd...) or good at specifying custom DNS (CoreDNS...).

So I forked grimd and embarked on rewriting a good chunk of it to address my needs - the result is leng.

  • it is fast
  • it is small
  • it is easy
  • you can specify blocklists and it will fetch them for you
  • you can specify custom DNS records with proper zone file syntax (SRV records, etc)
  • it supports DNS-over-HTTPS so you can stay private
  • it is well-documented
  • can be deployed on systemd, docker, or Nix

I have been running it as my nameserver in a Nomad cluster ever since! I plan to keep maintaining and improving it, so feel free to give it a try if it also fulfils your needs.

20
submitted 1 year ago by nico@r.dcotta.eu to c/selfhosted@lemmy.world
[-] nico@r.dcotta.eu 2 points 1 year ago

There are dozens of us!

  • nomad fmt was applied already - granted, it is not a small, easy-to-read job file; it might be easier to split it up into separate jobs
  • I will look into making this into a Pack - I have never built one because I have never shared my config like this before. I don't know how popular they are among selfhosters either!

I think an easy first step would be to contribute a sample job file like this to the Lemmy docs website. Then people can adapt it to their setups. I find there is a lot more to configure in Nomad than in Docker Compose, for example, because you stop assuming everything will be on a single box, which changes networking considerably. There is also the question of whether to use Consul, Vault, etc.

44
submitted 1 year ago by nico@r.dcotta.eu to c/selfhosted@lemmy.world

I am selfhosting Lemmy on a home Nomad cluster - I wrote the job files from scratch because I did not find anybody else who had attempted the same.

I thought I'd share them and maybe they will serve as a starting point for someone using a similar selfhosted infra!

Nomad brings a few benefits for Lemmy specifically over Ansible/Docker, most notably some horizontal scaling across more than one machine.

Feedback welcome!
