submitted 8 months ago* (last edited 8 months ago) by Guenther_Amanita@feddit.de to c/selfhosted@lemmy.world

That's a question I always asked myself.
Currently, I'm running Debian on both my servers, but I consider switching to Fedora Atomic Core (CoreOS), since I already use Fedora Atomic on my desktop and feel very comfortable with it.

There's a common mentality that a "stable" host OS is the better choice, for the following reasons:

  • Things not changing means less maintenance, and nothing will break compatibility all of a sudden.
  • Less chance to break.
  • Services are up to date anyway, since they are usually containerized (e.g. Docker).
  • And, for Debian especially, the availability of services and documentation is among the best, since it's THE server OS.

My question is: how many of these pro-arguments do I lose when I switch to something less stable (with more frequent updates), in my case Fedora Atomic?


My pro-arguments in general for it would be:

  • The host OS image is very minimal, and I think most core packages should run very reliably. And in the worst case, if something breaks, I can always roll back. Even the desktop OS (Silverblue), which is "bloated" compared to the server image, has been running extremely reliably and pretty much bug-free in the past.
  • I can always use Podman or Toolbx, for example, to run services that were made for Debian, and for everything else there's Docker and more. So software availability shouldn't be an issue (see the sketch after this list).
  • I feel relatively comfortable using containers, and the security benefits in particular sound promising.
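
For illustration, here's a minimal, hedged sketch of what that could look like; it assumes Podman and Toolbx are present, as they are on Fedora Atomic by default:

# One-off Debian userland for software that's only packaged for Debian
podman run -it --rm debian:bookworm bash

# Or a persistent Toolbx container for day-to-day tinkering
toolbox create
toolbox enter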

Cons:

  • I don't have much experience. Everything I do related to my servers, e.g. getting a new service running or troubleshooting, is hard for me.
  • Because of that, I often don't have "workarounds" in mind (e.g. using Toolbx instead of installing something directly on the host), due to the lack of experience.
  • Distros other than Debian (and a few others) aren't the standard, and therefore documentation and software availability aren't as good.
  • Containerization adds another layer of abstraction. For example, if my webcam doesn't work, is it because of a missing driver, Docker, the service, the cable not being plugged in, or something else entirely? Troubleshooting gets harder that way (sketched below).
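
To make that concrete with the webcam example: a container only sees devices that are explicitly passed through, so there is genuinely one more place to check. The image name below is a hypothetical placeholder:

# First check whether the host sees the webcam at all
ls -l /dev/video0

# Then make sure the container actually gets the device;
# without --device, the service inside can never see it
docker run -d --device /dev/video0:/dev/video0 example/webcam-service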

On my "proper" server I mainly use Nextcloud, installed as Docker image.
My Raspberry Pi, on the other hand, is only used as a print server, running OctoPrint for my 3D printer. I installed OctoPrint in the form of OctoPi, a Raspbian-based distro with OctoPrint pre-installed, which is the recommended way.
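
For the record, the Nextcloud-in-Docker setup boils down to something like this minimal sketch with the official image (the port and volume name are arbitrary choices):

# Single-container Nextcloud: official image, data kept in a named volume
docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud:/var/www/html \
  nextcloud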

With my "proper" server, I'm not really unhappy with Debian. It works and the server is running 24/7. I don't plan to change it for the time being.

Regarding the Raspi especially, it looks quite a bit different. I think I will just try it and see if I like it.

Why?

  • It is running only rarely. Most of the time, the device is powered off. I only power it on a few times per month when I want to print something. This is actually pretty good, since the OS needs to reboot to apply updates, and it updates itself automatically, so I don't have to SSH into it from time to time, reducing maintenance (see the config sketch after this list).
  • And, last but not least, I've lost my password. I can't log in anymore and therefore can't update anymore, so I have to reinstall anyway.
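
If the Pi does end up on Fedora CoreOS, auto-updates are handled by its agent, Zincati; here's a sketch of optionally pinning them to a maintenance window, with the file path from the docs and the days/times as arbitrary examples:

# /etc/zincati/config.d/55-updates-strategy.toml
[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Sat" ]
start_time = "10:00"
length_minutes = 60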

What is your opinion about that?

top 13 comments
[-] nottelling@lemmy.world 17 points 8 months ago* (last edited 8 months ago)

If you are in a position to ask this question, it means you have no actual uptime requirements, and the question is largely irrelevant. However, in the "real" world where seconds of downtime matter:

Things not changing means less maintenance, and nothing will break compatibility all of a sudden.

This is a bit of a misconception. You have just as many maintenance cycles (e.g. "Patch Tuesdays") because packages constantly need security updates. What it actually means is fewer, better-documented changes per maintenance cycle. This makes it easier and faster to determine what's likely to break before you even enter your testing cycle.

Less chance to break.

Sort of. Security changes frequently break running software, especially third-party software that just happened to depend on a certain security flaw or out-of-date library to function. The world has gotten much better about this, but it's still a huge headache.

Services are up to date anyway, since they are usually containerized (e.g. Docker).

Assuming that the containerized software doesn't need maintenance is a great way to run broken, insecure containers. Containerization helps to limit attack surfaces and outage impacts, but it isn't inherently more secure. The biggest benefit of containerization is the abstraction of software maintenance from OS maintenance. It's a lot of what makes Dev(Sec)Ops really valuable.

Edit since it's on my mind: Containers are great, but amateurs always seem to forget they're all sharing the host kernel. One container causing a kernel panic, or abusing misconfigured SHM settings, can take down the entire host. Virtual machines are much, much safer in this regard, but have their own downsides.
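
A hedged sketch of limiting that blast radius per container (the limits and image name are arbitrary); note that this caps resource abuse but does nothing against an actual kernel bug, where only a VM boundary helps:

# Cap memory, shared memory, and process count for one container
docker run -d \
  --memory 512m \
  --shm-size 64m \
  --pids-limit 200 \
  example/service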

And, for Debian especially, the availability of services and documentation is among the best, since it's THE server OS.

No, it isn't. THE server OS is the one that fits your specific use case best. For us self-hosted types, sure, we use Debian a lot. Maybe. For critical software applications, organizations want a vendor to support them, if for no other reason than to offload liability when something goes wrong.

It is running only rarely. Most of the time, the device is powered off. I only power it on a few times per month when I want to print something.

This isn't a server. It's a printing appliance. You're going to have a similar experience of needing updates with every power-on, but with CoreOS, you're going to have many more updates. When something breaks, you're going to have a much longer list of things to track down as the culprit.

And, last but not least, I’ve lost my password.

JFC, uptime and stability aren't your problem. You also very probably don't need to wipe the OS to recover a password.
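
For what it's worth, the usual no-reinstall recovery on a Raspberry Pi looks roughly like this; it assumes physical access to the SD card, and the username is a placeholder:

# On another machine, edit cmdline.txt on the SD card's boot partition
# and append: init=/bin/sh
# Then boot the Pi; it drops into a root shell:
mount -o remount,rw /
passwd pi    # placeholder username
sync
# Finally remove init=/bin/sh again and reboot normally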

My Raspberry Pi, on the other hand, is only used as a print server, running OctoPrint for my 3D printer. I installed OctoPrint in the form of OctoPi, a Raspbian-based distro with OctoPrint pre-installed, which is the recommended way.

That is the answer to your question. You're running this RPi as a "server" for your 3d printing. If you want your printing to work reliably, then do what Octoprint recommends.

What it sounds like is you're curious about CoreOS and how to run other distributions. Since breakage is basically a minor inconvenience for you, have at it. Unstable distros are great learning experiences and will keep you up to date on modern software better than "safer" things like Debian Stable. Once you get it doing what you want, it'll usually keep doing that. Until it doesn't, and then learning how to fix it is another great way to get smarter about running computers.

E: Reformatting

[-] False@lemmy.world 6 points 8 months ago* (last edited 8 months ago)

Sometimes I think this community should be called homelab instead of selfhosted, based on the kinds of questions.

[-] AtariDump@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

Maybe that’s the sub I should be subscribed to.

[-] fuzzy_feeling@programming.dev 1 points 8 months ago
[-] CommunityLinkFixer@lemmings.world 3 points 8 months ago

Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn't work well for people on different instances. Try fixing it like this: !homelab@lemmy.ml

[-] AtariDump@lemmy.world 0 points 8 months ago

Thanks! See you all later!

[-] False@lemmy.world 5 points 8 months ago

What's the cost and impact of downtime for you? If you're doing this for personal use, it's probably minimal for both, so it doesn't really matter. If you want to try the new thing and you're not afraid of the time investment or potential downtime, then go for it.

[-] herrcaptain@lemmy.ca 5 points 8 months ago

I don't have any experience with CoreOS so can't help you on that front. That said, it sounds like the server in question isn't mission-critical in the first place and you seem to have come up with a good argument for trying it out. Why not give it a go and see how it works out?

[-] Painfinity@lemmy.dbzer0.com 5 points 8 months ago

I don't know anything about what you just asked but man, if there's such a thing as a well formatted post, then this is it!

[-] avidamoeba@lemmy.ca 4 points 8 months ago* (last edited 8 months ago)

So you've listed some important cons, and I don't see the "why" outweighing those cons. If the why is "I really wanna play with this," then perhaps that outweighs the cons.

BTW, on production servers we often don't do updates at all, because updates can break things beyond what's expected. Instead, we apply updates to the base OS in a preproduction environment, build an image from it, test it, and ship that image to the data centers where our production servers live. We test it some more in a staging environment. Then the update becomes: spin up new VMs in the production environment from the new image and destroy the old VMs.
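
As a very rough sketch of that pipeline, where every tool and name is an illustrative stand-in rather than a specific stack:

# Pre-prod: patch the base OS and bake a new immutable image
packer build base-os.pkr.hcl                # any image builder works here

# Prod: roll by replacement, never by in-place patching
tofu apply -var image_id=base-os-2024-02    # new VMs from the new image
tofu apply -var old_vm_count=0              # then destroy the old VMs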

[-] nottelling@lemmy.world 4 points 8 months ago

Yup. Treating VMs similar to containers. The alternative, older-school method is cold snapshots of the VM: apply patches/updates (after pre-prod testing and validation), usually in an A/B or red/green phased rollout, and roll back the snapshots when things go tits up.
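
On a libvirt host, that older-school flow might look roughly like this (the VM and snapshot names are placeholders):

# Cold snapshot before patching
virsh shutdown app-vm
virsh snapshot-create-as app-vm pre-patch
virsh start app-vm

# ...apply updates, validate...

# Roll back when things go tits up
virsh snapshot-revert app-vm pre-patch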

[-] Pantherina@feddit.de 2 points 8 months ago

Podman runs without a daemon, which for some reason makes podman compose a somewhat tricky replacement for docker compose.
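
A common workaround is to skip podman compose entirely: expose Podman's Docker-compatible API socket and point the regular docker compose at it. A minimal rootless sketch:

# Enable Podman's Docker-compatible API socket
systemctl --user enable --now podman.socket

# Point docker compose at the Podman socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d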

But for a single purpose, why not just install Nextcloud as a system package via layering? I think that should be pretty secure thanks to SELinux, and it would be the easiest choice.
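
The layering itself is a one-liner, though whether a suitable nextcloud package exists in your enabled repos is an assumption on my part:

# Layer the package onto the immutable base (takes effect after a reboot)
sudo rpm-ostree install nextcloud
systemctl reboot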

Other problems with CoreOS:

  • Ignition files make monkey brain confusion
  • updates always require a reboot, unlike on Debian where only kernel updates need one (downtime is minimal and can be automated using a systemd timer)

It's not that hard:

# The [Timer] section can't live in a .service unit; write a oneshot service first
pkexec tee /etc/systemd/system/rpm-ostree-update.service <<EOF
[Unit]
Description=Update rpm-ostree and reboot
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/rpm-ostree update --reboot
EOF

# ...then the matching timer that triggers it daily
pkexec tee /etc/systemd/system/rpm-ostree-update.timer <<EOF
[Unit]
Description=Nightly rpm-ostree update and reboot

[Timer]
OnCalendar=daily
AccuracySec=1h
Persistent=true
Unit=rpm-ostree-update.service

[Install]
WantedBy=timers.target
EOF

pkexec systemctl enable --now rpm-ostree-update.timer

But I would honestly try it. Maybe give secureblue server a try; it should be more similar to your desktop than CoreOS (which seems to be made for wide deployments).

[-] fedorafan@iusearchlinux.fyi 1 points 8 months ago

Staying on top of updates is one of the most effective ways to keep your stuff secure, and it really should be done regardless of your setup. Updates have the downside of sometimes breaking systems and applications, so I think the real question is how frequently you want to update your applications.

I have been very happy with FCOS and really view it as building a declarative appliance. You can install it straight from an ISO and configure it manually, similar to Debian, but I really like the Butane/Ignition method for defining everything about it; it's like a more robust cloud-init on the Debian side. I typically define this in a ~~terraform~~ OpenTofu project and deploy the transpiled config to my hypervisor as a VM, so I can keep fine-tuning my config until I have it just right. I usually set weekly auto-updates and, for the most part, rarely touch FCOS VMs once they are working.
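
For anyone curious, a minimal Butane sketch of that approach; the user and key are placeholders, and the transpile step is shown as a comment:

# example.bu -- minimal Fedora CoreOS Butane config
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder

# Transpile to Ignition before first boot:
#   butane --pretty --strict example.bu > example.ign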
