
Hello everyone,

I am about to renovate my self-hosting setup (software-wise), and I thought about how I could help my favourite Lemmy community become more active. Since I am still learning many things and am far from being a sysadmin, I don't (just) want to tell you my point of view, but instead thought about a series of posts:

Your favourite piece of selfhosting

I thought about asking every one of you for your favourite piece of software for a specific use case. But we have to start at the bottom:

Operating systems and/or type 1 hypervisors

You don't have to be an expert or a professional. You don't even have to be using it. Tell us your thoughts on one piece of software. Why would you want to try it out? Did you try it out already? What worked great? What didn't? Where are you stuck right now? What are your next steps? Why do you think it is the best tool for this job? Is it aimed at beginners or veterans?

I am eager to hear about your thoughts and stories in the comments!

And please also give me feedback on this idea in general.

[-] Humanius@lemmy.world 5 points 2 months ago* (last edited 2 months ago)

OS: Unraid

It's primarily NAS software, with a form of software raid functionality built in.
I like it mainly because it works well and the GUI makes it very easy to use and work with.

On top of that you can run VMs and docker containers, so it is very versatile as well.

I use it to host the following services on my network:

  • Nextcloud
  • Jellyfin
  • CUPS

It costs a bit of money up-front, but for me it was well-worth the investment.

[-] huquad@lemmy.ml 1 points 2 months ago

+1 for Unraid. Nice OS that lets me easily do what I want.

[-] LiveLM@lemmy.zip 3 points 2 months ago

A friend recommended openSUSE MicroOS to me, and it has been a great experience!
It's an atomic OS designed to be just enough to run containers, and it does that perfectly. It updates and reboots itself automatically, so I never have to worry about it.
IMO, perfect for a home environment, just wish the documentation was better.

[-] sugar_in_your_tea@sh.itjust.works 3 points 2 months ago

openSUSE MicroOS

I've only tried it out on a VPS, so I'm not completely sold on it yet, but I do think I'll be switching to it eventually. I'm currently on Leap, but since almost everything is containerized, I'm not getting much benefit from the slow release cycle.

For your questions:

Why would you want to try it out? Did you try it out already? What worked great? What didn’t

The main appeal is unattended, atomic updates using bleeding edge packages. You keep your apps as separate from the base system as possible (containerized), and the base handles itself.

My main issue is with the toolbox utility, which runs a container to hold userland utilities for debugging stuff. So far, it has been buggy with the underprivileged user I configured, and I'd really rather not log in as root. I've worked around it for now, but it leaves a lot to be desired.

Where are you stuck right now? What are your next steps?

Mostly figuring out how I want to handle my VPN (for exposing LAN services to the outside world) config. My options are:

  • containerize, and configure iptables rules to route traffic properly
  • install the needed tools to the base system and configure it on the host

The main sticking point is that I need HAProxy in front and route traffic to the given device, so the VPN and HAProxy need to talk. The easiest solution is to put both on the host, but that breaks the whole point of MicroOS. The ideal is to have both the VPN and HAProxy containerized, but I ran into some issues with podman.
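For what it's worth, the containerized route can work by putting both containers on a shared podman network so HAProxy can reach the VPN container by name. A rough sketch, where the network name, images, and mount paths are all my own placeholder choices, not anything from this thread:

```shell
# Sketch: shared podman network so HAProxy and a WireGuard container can talk.
# Network name, images, and mount paths are illustrative placeholders.
podman network create vpn-edge

# VPN container joins the shared network (NET_ADMIN is needed for wg interfaces).
podman run -d --name wireguard --network vpn-edge \
  --cap-add NET_ADMIN \
  -v ./wireguard:/config:Z \
  docker.io/linuxserver/wireguard

# HAProxy in front, publishing ports on the host, same network as the VPN.
podman run -d --name haproxy --network vpn-edge \
  -p 80:80 -p 443:443 \
  -v ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro,Z \
  docker.io/library/haproxy:lts
```

With both on one network, haproxy.cfg backends can reference the peer container by its name instead of juggling iptables rules on the host.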

Why do you think it is the best tool for this job? Is it aimed at beginners or veterans?

This is definitely a veteran system right now, but I think it's ideal because it means I can completely automate system updates and not worry about my apps breaking. It also means I can automate setting up a new server (say, if I move to a different VPS) or even new OS since I only need to deploy my containers and don't need anything special from the OS setup.

I'm also playing with Aeon on my laptop, but that's going a lot less smoothly than MicroOS on the server.

[-] xinayder@infosec.pub 3 points 2 months ago

I use openSUSE MicroOS as the container host, with podman. It was a bit tricky to install on my Hetzner VPS and to get used to how MicroOS handles system updates (it's an immutable system), but I am quite happy with it. I found it interesting and decided to try it out so I could learn how to use the system.

[-] harsh3466@lemmy.ml 3 points 2 months ago* (last edited 2 months ago)

I've been using Ubuntu Server on my server for close to a decade now, and it has been just rock solid.

I know Ubuntu gets (deserved) hate for things like snap shenanigans, but the LTS is pretty great. Not having to worry about a full OS upgrade for up to 10 years (5 years standard, 10 years if you go Ubuntu Pro, which is free for personal use) is great.

A couple of times I've considered switching my server to another distro, but honestly, I love how little I worry about the state of my server OS.

[-] node815@lemmy.world 3 points 2 months ago

I have been using Proxmox VE with Docker running on the host (not managed by Proxmox), plus Cockpit to manage NFS shares, with Home Assistant OS running in a VM. It's been pretty rock solid. That was until I updated to version 9 last night; since then it's been a nightmare getting the Docker socket to be available. I think Debian Trixie may have some extra layers of protection. I haven't investigated it too much, but my plan for this week is to migrate everything to Debian 12, as that's the tried and true OS for me, and I know it's quite stable with Cockpit, Docker, and so forth, with KVM for my Home Assistant installation.

One other OS for consideration, if you want to check it out, is XCP-ng. I played with it, and Home Assistant on it was blazing fast, but it doesn't allow NFS shares to be created, and using the existing data on my drives was not possible, so I would've had to format them.

[-] brygphilomena@lemmy.dbzer0.com 3 points 2 months ago

I used to really like ESXi, but Broadcom screwed us on that.

Hyper-V sucks to run and manage. It's also pretty bloated.

Proxmox is pretty awesome if you want full VMs. I'm gonna move everything I have onto it eventually.

For ease of use, if you have a Synology that can run containers, it's okay.

I also like and tend to use Unraid at my house, but that's more because of my insane storage requirements and how frequently I upgrade with dissimilar disks. (I'm just shy of 500 TB and my server holds 38 disks.)

[-] cheesemoo@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

Damn, 38 disks! How do you connect them all? Some kind of server hardware?

Curious because I'm currently using all 6 SATA ports on an old consumer motherboard and not sure how I'll be able to expand my storage capacity. The best option I've seen so far would probably be adding PCIe SATA controller(s), but I can't imagine having enough PCIe slots to reach 38 disks that way! Wondering if there's another option I haven't seen yet.

[-] randombullet@programming.dev 2 points 2 months ago

Anything that can run Proxmox is running Proxmox. Even if it's a single OS running on it, it's still running Proxmox.

[-] lepire@lemmy.world 2 points 2 months ago

Maybe crazy, but I've been running Flatcar lately. Automatic OS updates are nice, and I pretty much use my machines exclusively to run containers.

[-] savvywolf@pawb.social 2 points 2 months ago

I've been using NixOS on my server. Having all the server's config in one place gives me peace of mind that the server is running exactly what I tell it to and I can rebuild it from scratch in an afternoon.

I don't use it on my personal machine because the lack of an FHS feels like it'd be a problem, but when self-hosting, most things are popular enough to have a module already.
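For readers who haven't seen it, the "all config in one place" point looks roughly like this in a NixOS module. The service chosen here (Jellyfin) and the port are illustrative assumptions, not the commenter's actual setup:

```nix
# Sketch of a NixOS module enabling a self-hosted service declaratively.
# Service choice and port are illustrative, not from the comment above.
{ config, pkgs, ... }:
{
  # One line turns on the service and its systemd unit.
  services.jellyfin.enable = true;

  # Firewall state is part of the same declarative config.
  networking.firewall.allowedTCPPorts = [ 8096 ];
}
```

Running `nixos-rebuild switch` realizes exactly this state, which is what makes rebuilding a server from scratch in an afternoon plausible.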

[-] confusedpuppy@lemmy.dbzer0.com 2 points 2 months ago

I've been using Alpine Linux. I've always leaned towards minimalism in my personal life so Alpine seems like an appropriate fit for me.

Since everything installed is intentional, I am able to keep track of changes more accurately. I keep a document of the complete setup by hand, then reduce that to an install script so I can get back to the same state in a minimal amount of time if needed.
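As a sketch of what such a "document first, then reduce to a script" approach can look like on Alpine (the package and service names are illustrative guesses, not the commenter's actual list):

```shell
#!/bin/sh
# Hypothetical Alpine post-install script: restore a known-good state quickly.
# Package and service names are example values.
set -eu

apk update
apk add openssh curl tmux mandoc

# Alpine uses OpenRC, not systemd: register and start services explicitly.
rc-update add sshd default
service sshd start
```

Because Alpine packages are split aggressively (docs in `-doc`, extras in `-extra` subpackages), the script tends to grow as you discover what each tool actually needs.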

Since I only have a laptop and two Raspberry Pis with no intention of expanding or upgrading, this works for me as a personal hobby.

I've even gone as far as to use Alpine Sway as a desktop to keep everything similar as well.

I wouldn't recommend it for anyone who doesn't have the time to learn. It doesn't use systemd, and packages are often split, meaning you will have to figure out what additional packages you may need beyond the core package.

I appreciate the approach Alpine takes because, from a security point of view, fewer moving parts mean less surface area to exploit. In today's social climate, who knows how or when I'll become a target.

[-] Damage@feddit.it 2 points 2 months ago* (last edited 2 months ago)

No love for Open Media Vault? I run it virtualized under Proxmox and I'm quite happy with it, not very fancy but super stable.

I run about twenty containers on OMV, with four 8 TB drives in a ZFS RAIDZ1 setup. I love how users can be shared across services; for example, the same user may access SMB shares or connect via OpenVPN.

[-] rtxn@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

+1 for OMV. I use it at work all the time to serve Clonezilla images through an SMB share. It's extremely reliable. The Clonezilla PXE server is a separate VM, but the toolkit is available in the clonezilla package, and I could even integrate the two services if I felt particularly masochistic one day.

My first choice for that role was TrueNAS, but at the time I had to use an old-ass Dell server that only had hardware RAID, and TrueNAS couldn't use ZFS with it.

[-] overload@sopuli.xyz 2 points 2 months ago* (last edited 2 months ago)

I use TrueNAS SCALE at home on my NAS, and since they ditched Kubernetes (and TrueCharts, which was a happy little accident), it's been great.

It's free.

New hardware is incorporated into the kernel reasonably regularly IMO.

ZFS file system

Pretty easy to control with GUI exclusively

Docker is now very easy to use; images are mostly community-supported, but I've not had issues with Jellyfin, *arr, Pi-hole, a reverse proxy, etc.

[-] DrunkAnRoot@sh.itjust.works 1 points 2 months ago

Debian, very simple and classic, but I started using BSDs recently.

[-] jhdeval@lemmy.world 1 points 2 months ago

I use Debian as well for all my servers, whether they are a VM or a container. It is lightweight, well supported, and dead stable.

[-] fixmycode@feddit.cl 1 points 2 months ago

Debian on the servers, DietPi on the SBCs, all containerized.

[-] xavier666@lemmy.umucat.day 1 points 2 months ago* (last edited 2 months ago)

Stage 1: Ubuntu server

Stage 2: Ubuntu server + docker

Stage 3: Ansible/OpenTofu/Kubernetes

Stage 4: Proxmox

[-] Dran_Arcana@lemmy.world 1 points 2 months ago

Don't get me wrong, I use libvirt where it makes sense, but why would anyone go to Proxmox from a full IaC setup?

I do 2 at home, and 3 at work, coming from 4 at both and haven't looked back.

[-] azron@lemmy.ml 1 points 2 months ago* (last edited 2 months ago)

Kubernetes is overkill for most things, not just self-hosting. If you need to learn it, great; otherwise don't waste your time on it. It's extremely complicated given what it provides.

[-] PmMeFrogMemes@lemmy.world 1 points 2 months ago

fr, unless you're horizontally scaling something or managing hundreds of services what's the point

[-] PlutoniumAcid@lemmy.world 1 points 2 months ago

I agree with this thread, but to answer your question, I think the point is to tinker with it "just because". We're all in this for fun, not profit.

[-] Appoxo@lemmy.dbzer0.com 1 points 2 months ago

Hypervisor: Proxmox (fuck Hyper-V: It's good but soo annoying. Fuck ESXi cuz Broadcom).

General purpose OS (for servers): Debian (and OMV)

[-] napkin2020@sh.itjust.works 1 points 2 months ago

Rocky Linux. Been using Debian, but I like firewalld a bit more than ufw, and I don't trust myself enough to touch iptables directly.
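For comparison, the firewalld workflow stays at the level of zones and services rather than raw rules. The service and port below are example values, not anything from this comment:

```shell
# firewalld: zone/service-based rules instead of hand-written iptables.
# Service and port are example values.
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-port=8096/tcp
firewall-cmd --reload

# Inspect the effective configuration for the active zone.
firewall-cmd --list-all
```

Under the hood it still drives nftables/iptables, but you rarely have to look at the generated rules.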

[-] curbstickle@lemmy.dbzer0.com 1 points 2 months ago* (last edited 2 months ago)

Proxmox all day, every day.

Generally speaking, I start with Debian and install Proxmox on top rather than using their installer. This way I can configure things as I want them before getting Proxmox going, which I guess counts as a more advanced use case, though it isn't really complicated.
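For anyone wanting to try this route, the Debian-first install is roughly the following. This is a sketch of the Proxmox VE wiki flow for Debian 12 "bookworm"; verify the exact repository and key names there before running anything:

```shell
# Sketch: install Proxmox VE on top of an existing Debian 12 system.
# Repo/suite and key names depend on the release; check the PVE wiki first.
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list

wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi chrony
```

The appeal of this order is exactly what the comment describes: disk layout, networking, and users are all set up your way in plain Debian before Proxmox ever touches the box.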

Edit: and if it wasn't obvious, everything is Debian, even the machines not on Proxmox (which is just Debian anyway, and not much more, tbh).

[-] tehWrapper@lemmy.world 1 points 2 months ago

Been using Debian for 25 years.

[-] rtxn@lemmy.world 1 points 2 months ago

PVE running on a pile of e-waste. Most of the parts are leftovers from my parents' old PC that couldn't handle Win10. Proxmox loves it, even the 10 GB of mismatched DDR3 memory. The only full VM is OPNsense (formerly pfSense); everything else runs inside Debian containers. It only struggles when Jellyfin has to transcode something, because I don't have a spare GPU.

[-] tofu@lemmy.nocturnal.garden 1 points 2 months ago

Best type of homelab! Just use what's there

[-] cRazi_man@europe.pub 1 points 2 months ago* (last edited 2 months ago)

I'm new to all this.

Synology: I was using a Synology before and getting started with trying some Docker containers. The Synology was very underpowered and containers kept crashing or being shut down (from running out of resources, I guess), so I wanted to upgrade.

Comments seemed to suggest it is best to keep the Synology as purely a NAS and use a mini PC for compute, so that's what I went for. Got a 12th Gen Intel mini PC pretty cheap on eBay to play around with.

Debian - I've put Debian with KDE on the mini PC server. I was looking into TrueNAS or Unraid to consider what I should try learning. My brother (rightly) said there's no reason to overcomplicate things when I don't need the functions of those OSes and don't understand them. The one place the Linux community seems to be united is in recommending Debian for a server for being rock solid and stable. I've been very happy with it.

Spent my week off figuring out Docker, mounting NAS drives on the server PC, and troubleshooting the problems. Got a setup I'm really happy with, and I'm really glad I went with Debian.
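One common pattern for the "mount NAS drives for Docker" part is to let Docker mount the NFS export itself as a named volume, instead of fstab entries on the host. The NAS address and export path below are placeholders:

```yaml
# docker-compose sketch: NFS share mounted as a named volume.
# 192.168.1.50 and /volume1/media are placeholder values.
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - media:/media:ro

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,nfsvers=4,ro
      device: ":/volume1/media"
```

This keeps the mount's lifecycle tied to the container stack, so moving the compose file to a new box brings the share along with it.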

[-] credics@sh.itjust.works 1 points 2 months ago

I have pretty much the same setup. Works like a charm.

[-] cRazi_man@europe.pub 1 points 2 months ago

What are you running on your server? I'm looking for more ideas.

I've got loads of stuff up and running, but now it is all quietly functional and I'm missing the enjoyment of setting up something new. I've recently had to delete a couple of Docker apps which weren't really very useful for me, but I enjoyed setting them up and liked seeing a long list of healthy containers in Dockge.

[-] credics@sh.itjust.works 1 points 2 months ago

Immich, Paperless, Bitwarden, and a static website with recipes. I am very happy with all of them. Next projects are Forgejo, Obsidian live sync (via CouchDB), and budgeting software (not decided yet).

[-] cRazi_man@europe.pub 1 points 2 months ago

Notes app is a good idea. I might have a look at options.

Actual is working really well for me for budgeting.

[-] vegetaaaaaaa@lemmy.world 1 points 2 months ago
  • Hypervisor: Debian stable + libvirt or PVE if you need clustering/HA
  • VMs: Debian stable
  • podman if you need containerization below that
[-] BlueEther@no.lastname.nz 1 points 2 months ago

My setup is PVE on the bottom, with TrueNAS CORE as a VM for NAS functions (with a passed-through HBA).

[-] SidewaysHighways@lemmy.world 1 points 2 months ago

this has been pretty sweet! I just wish the HBA didn't take so long to boot

[-] Sammirr@aussie.zone 1 points 2 months ago

I've several Debian stable servers operating in my stack. Almost all of them host a range of VMs in addition to a plethora of containers. Some house large arrays, others focus on application gruntwork. I chose Debian because I know it, been using it since the early 00s. It's👌.

[-] lena@gregtech.eu 1 points 2 months ago

Ubuntu Server. It just works.

[-] wraith@lemdro.id 1 points 2 months ago

I think this is a great idea. With such a foundational deployment choice as the OS, there are so many options, and each can change the very core of one's self-hosted journey. And then expanding to different services and the different ways to manage everything could be a great discussion for every experience level.

I myself have been considering Proxmox with LXCs deployed via the Community Scripts repo versus bare metal running a declarative OS with Docker compose or direct packages versus a regular Ubuntu/Debian OS with Docker compose. I am hoping to create a self-documenting setup with versioning via the various config and compose files, but I don't know what would end up being the most effective for me.

I think my overarching deployment strategy is portability. If it's easy to take a replacement PC, get a base install loaded, then have a setup script configure the base software/user(s) and pull config/compose files and start services, and then be able to swap out the older box with minimal switchover or downtime, I think that's my goal. That may require several OS tools (Ansible, NixOS config, Docker compose, etc.) but I think once the tooling is set up it will make further service startups and full box swaps easier.
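That bootstrap flow could be sketched as something like the following, where the repo URL and paths are purely hypothetical:

```shell
#!/bin/sh
# Hypothetical bootstrap: fresh box -> running services.
# Repo URL and paths are placeholders.
set -eu

# 1. Pull the versioned config/compose files.
git clone https://example.com/homelab/config.git /opt/homelab

# 2. Base setup (users, packages) via a script kept in the same repo.
sh /opt/homelab/bootstrap/base.sh

# 3. Bring services up from the tracked compose files.
cd /opt/homelab/services
docker compose up -d
```

Keeping everything the script touches in one versioned repo is what makes the "swap out the older box with minimal downtime" goal realistic.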

Currently I have a single machine that I started spinning up services with Docker compose but without thought to those larger goals. And now if I need to fiddle with that box and need to reboot or take it offline then all my services go down. I think my next step is to come up with a deployment strategy that remains consistent, but I use that strategy to segment services across several physical machines so that critical services (router, DNS, etc.) wouldn't be affected if I was testing out a new service and accidentally crashed a machine.

I love seeing all the different ways folks deploy their setups because I can see what might work well for me. I'm hoping this series of discussions will help me flesh out my deployment strategy and get me started on that migration.

[-] nitrolife@rekabu.ru 1 points 2 months ago* (last edited 2 months ago)

archlinux + podman / libvirtd + nomad (libvirt and docker plugins) + ansible / terraform + vault / consul sometimes

UPD:

archlinux - base OS. You never need to change major versions, and that is great. I update the core systems every weekend.

podman / libvirtd - two types of core abstractions: podman for Docker container management, libvirtd for VM management.

nomad - HashiCorp orchestrator. You can run an exec task, a Java application, a container, or a virtual machine in one uniform way with it. It can integrate with podman and libvirtd.

ansible - VM configuration playbooks + core system updates

terraform - engine for deploying Nomad jobs (Docker containers, VMs, execs, or something else)

Vault - K/V storage. I keep secrets for containers and VMs here.

consul - service networking solution if you need a really advanced network layer.

As a result, I'm not really sure if it's a simple setup or a complex one, but it's very flexible and convenient for me.

UPD2: I ended up describing the application level, but in fact it is all one very thick server on an AMD Epyc running archlinux. XD By the way, the Lemmy node from which I write this is on it. =) And yes, it's still selfhosted.
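To make the nomad + podman combination concrete, a minimal job file might look like this. The image, port, and resource figures are illustrative placeholders, and it assumes the nomad-driver-podman plugin is installed:

```hcl
# Sketch of a Nomad job using the podman task driver.
# Image, port, and resources are placeholder values.
job "whoami" {
  datacenters = ["dc1"]

  group "web" {
    network {
      port "http" { static = 8080 }
    }

    task "whoami" {
      driver = "podman"

      config {
        image = "docker.io/traefik/whoami"
        ports = ["http"]
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }
  }
}
```

Swapping `driver` (e.g. to a VM driver) is what gives the "run a container or a virtual machine in one uniform way" property described above.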

[-] one_knight_scripting@lemmy.world 1 points 2 months ago

Hypervisor: Gotta say, I personally like a rather niche product. I love Apache CloudStack.

Apache CloudStack is actually meant for companies providing VMs and K8s clusters to other companies. However, I've set it up for myself in my lab, accessible only over VPN.

What I like best about it is that it is meant to be deployed via Terraform and cloud-init. Since I'm actively pushing myself into that area and seeking a role in DevOps, it fits me quite well.

Standing up a K8s cluster on it is incredibly easy. Basically it is all done with cloud-init, and that process is quite automated. In fact, it took me 15 minutes to stand up a 25-node cluster with 5 control nodes and 20 worker nodes.

Let's compare it to other hypervisors, though. CloudStack is meant to handle global operations. Typically, CloudStack is split into regions, then into zones, then into pods, then into clusters, and finally into hosts. Let's just say that it gets very, very large if you need it to. Only it's free. Basically, if you have your own hardware, it is more similar to Azure or AWS than to VMware. And none of that even costs any licensing.

Technically speaking, CloudStack Management is capable of handling a number of different hypervisors if you would like it to. I believe that includes VMware, KVM, Hyper-V, OVM, LXC, and XenServer. I think that's interesting because even if you prefer to use another hypervisor, it will still work. This is mostly meant as a transition path to KVM, but it should still work, though I haven't tested it.

I have, however, tested it with Ceph for storage, and it does work. Perhaps doing that is slightly more annoying than with Proxmox. But you can actually create a number of different tiers of storage if you wanted to take the cloud-provider route, e.g. HDD vs SSD.

Overall, I like it because it works well for IaaS. I have 2000 VLANs primed for use with its virtual networking. I have one host currently joined, and a second host in line for setup.

Here is the article I used to get it initially set up, though I will admit that I personally used a different VLAN for the management IP than for the public IP VLAN. http://rohityadav.cloud/blog/cloudstack-kvm/
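To illustrate the Terraform + cloud-init angle, a minimal instance definition with the CloudStack Terraform provider might look like this. Every value here is a placeholder, and the attribute names should be checked against the provider documentation:

```hcl
# Sketch: one VM via the CloudStack Terraform provider, configured by cloud-init.
# All names/values are placeholders.
resource "cloudstack_instance" "web" {
  name             = "web-1"
  service_offering = "Medium Instance"
  template         = "ubuntu-22.04"
  zone             = "zone-1"

  # cloud-init payload handed to the guest on first boot.
  user_data = file("cloud-init.yaml")
}
```

Because instances, networks, and templates are all plain Terraform resources, rebuilding the lab (or a 25-node K8s cluster) is a `terraform apply` rather than a pile of manual steps.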

[-] gravitywell@sh.itjust.works 1 points 2 months ago

I'm pretty happy with Debian as my server OS. I recently gave in to temptation and switched from stable to testing. On my home systems I run Arch because I like having the most up-to-date stuff, but with my servers that's a bit less important. Even so, Debian testing is usually pretty stable itself, so I'm not worried much about things breaking because of it.

[-] bluGill@fedia.io 1 points 2 months ago

TrueNAS CORE, because I'm a BSD guy at heart. With that all but dead, I'm trying to decide between bare FreeBSD and XigmaNAS.

I have an Arch Linux box for things that don't run on BSD.

this post was submitted on 06 Aug 2025

Selfhosted
