submitted 5 months ago* (last edited 5 months ago) by tubbadu@lemmy.kde.social to c/selfhosted@lemmy.world

Hello! I was wondering if periodically running a script to automatically pull new images for all my containers is a good or a bad idea. I'd run it every day at 5:00 AM to avoid interruptions. Any tips?
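Roughly what I had in mind (just a sketch; the paths are examples):

```
#!/usr/bin/env bash
# Sketch of the update script; /opt/stacks is a placeholder for wherever
# the compose projects live. Run from cron, e.g.:
#   0 5 * * * /usr/local/bin/update-containers.sh
set -euo pipefail

for stack in /opt/stacks/*/; do
    cd "$stack"
    docker compose pull      # fetch newer images for the tags in the compose file
    docker compose up -d     # recreate only containers whose image changed
done

docker image prune -f        # clean up the old, now-unused images
```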

EDIT: Thanks to everyone for the help! I'll install Watchtower to manage the updates

[-] SteadyGoLucky@sh.itjust.works 28 points 5 months ago

Some apps have breaking changes. If you can restore a complete backup when that occurs, you can recover. Immich is famous for its breaking changes.

[-] peregus@lemmy.world 9 points 5 months ago

But between the moment the script updates and breaks something and the moment he realizes it, it may be too late for some applications.

For example, I host Traccar to track cars/vans, and in that case some tracks would be lost. Or take Syncthing: he may realize days or weeks later that a sync is not working, and if he was syncing his smartphone pictures to his server and the smartphone gets lost/broken/stolen, he may lose days, weeks or even months of pictures.

I wouldn't trust a script. Use Watchtower or What's up Docker.

@tubbadu@lemmy.kde.social

[-] tubbadu@lemmy.kde.social 1 points 5 months ago

I'll surely check them out, thank you very much!

[-] tritonium@midwest.social 1 points 5 months ago* (last edited 5 months ago)

That's why you bind mount all the important data and back it up with a proper backup solution like borg. And why you also have a monitoring and notification system that alerts you if a service goes down. I will get a Telegram message within 15 minutes of a service going down.
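For illustration, a minimal sketch of that kind of borg run (the repo path and data directory are placeholders):

```
#!/usr/bin/env bash
# Back up the bind-mounted data dirs with borg before anything touches
# the containers. Paths are examples only.
set -euo pipefail

export BORG_REPO=/mnt/backup/borg-repo

# One archive per day; /srv/appdata stands in for wherever the bind mounts live.
borg create --stats --compression zstd \
    ::'appdata-{now:%Y-%m-%d}' /srv/appdata

# Thin out old archives so the repo doesn't grow forever.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```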

[-] peregus@lemmy.world 1 points 5 months ago* (last edited 5 months ago)

I do bind mount the containers' data folders, I do backups, and I have a notification system that alerts me if a container is not up. But a container can be up and still have problems, and, most importantly, I (and I guess a lot of other people) don't always have time to solve them. When I have a few spare minutes I take a snapshot and update the containers; if something goes wrong and I have time, I troubleshoot it, otherwise I just roll back the snapshot and look at the problem when I have time.

[-] ShortN0te@lemmy.ml 5 points 5 months ago

Yes, because Immich is still not considered stable. Keep that in mind.

[-] DARbarian@kbin.run 17 points 5 months ago

Why not just let Watchtower do it for you?
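Something like this (a sketch; double-check the options against the Watchtower docs — the schedule here matches your 5 AM idea, using Watchtower's six-field cron format):

```
docker run -d \
    --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e WATCHTOWER_CLEANUP=true \
    -e WATCHTOWER_SCHEDULE="0 0 5 * * *" \
    containrrr/watchtower
```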

[-] tubbadu@lemmy.kde.social 14 points 5 months ago

Because I was today years old when I found out this beautiful piece of software exists :D

thank you very much!

[-] paris@lemmy.blahaj.zone 2 points 5 months ago

I use Watchtower and haven't had any major issues in the two(?) years I've been using it. Make sure you use persistent volumes for your containers and make sure you back up those volumes. If anything breaks, you can roll back to before the update.

If you don't use persistent volumes, you'll lose data when Watchtower takes down the image and replaces it with the newer one (which doesn't copy over ephemeral volumes).

For database containers, I also recommend using an image tag that won't pull in breaking changes. Don't use postgres:latest; use postgres:15.2 or something like that (whatever version the app using the database recommends).
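In other words, something like this (the names are just examples):

```
# Named volume so the data survives container recreation.
docker volume create pgdata

docker run -d \
    --name app-db \
    -v pgdata:/var/lib/postgresql/data \
    -e POSTGRES_PASSWORD=change-me \
    postgres:15.2    # pinned tag: no surprise major-version jumps
```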

[-] haui_lemmy@lemmy.giftedmc.com 3 points 5 months ago

Pretty solid advice.

One could argue, though, that a backup script could pull the new container images right after taking the backup, for maximum coverage.

If someone is already adept enough at scripting to rely on a script for automatic backups, they can very well pull the new images and clean up the old ones too.
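A sketch of that combined flow (the backup step and paths are placeholders):

```
#!/usr/bin/env bash
# Back up first, then pull, then clean up.
set -euo pipefail

/usr/local/bin/backup-appdata.sh   # placeholder for whatever backup you already run

cd /opt/stacks/myapp               # example path to a compose project
docker compose pull
docker compose up -d
docker image prune -f              # remove the superseded images
```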

I'm one of those who have a backup script and still use Watchtower.

[-] wjs018@lemmy.world 8 points 5 months ago

I used to have my Docker updates done automatically. However, as the services I used to run just for myself have started to be used by other people (family, friends), I am less tolerant of having things break. So, instead of something like Watchtower, I run diun these days. I have it set up to ping me in a Discord channel when an update is available; then I can actually perform the update when I have the time and attention to troubleshoot any issues that come up.
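A sketch of a diun setup along those lines (the webhook URL is a placeholder, and the variable names are from memory — check the diun docs):

```
docker run -d \
    --name diun \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e DIUN_WATCH_SCHEDULE="0 */6 * * *" \
    -e DIUN_PROVIDERS_DOCKER=true \
    -e DIUN_NOTIF_DISCORD_WEBHOOKURL="https://discord.com/api/webhooks/..." \
    crazymax/diun
```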

[-] earth_walker@lemmy.world 5 points 5 months ago

Agreed: if you are running containers on a casual or "just for fun" basis, then automatic updates are fine. But the more you or others depend on the service, the more it makes sense to perform updates manually, when you have time to troubleshoot any problems that may arise. Or even update a test setup first to identify issues, then update your production setup.

[-] atzanteol@sh.itjust.works 7 points 5 months ago

Depends on how you like to roll. If you enjoy waking up to a service not working then go for it.

But it very much depends on what containers you're using and what tags you're pulling.

[-] redxef@feddit.de 7 points 5 months ago

I get a summary once a week of all the updates. I then check the release notes and, if nothing needs any changes, just run the Ansible playbook that updates to those releases. I don't want to get up and first thing in the morning read alert emails because an update failed overnight, so I sit down for 10 minutes once a week.

[-] JASN_DE@lemmy.world 7 points 5 months ago

I run a mixed setup: many of the "less important" containers are on Watchtower auto-update, the rest on notification only (reverse proxy, Nextcloud, etc.).

But I also have many of them pinned to specific tags instead of "latest".

[-] ShortN0te@lemmy.ml 5 points 5 months ago

I recommend actively reading the release changelogs. For most services you can just put the GitHub releases page in an RSS reader to get a notification when a new release hits, so you can quickly check for any breaking changes; this will also give you info about new features.
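GitHub exposes a per-repo Atom feed for exactly this, so the subscription is just a URL of the form (Immich here is only an example):

```
https://github.com/<owner>/<repo>/releases.atom
e.g. https://github.com/immich-app/immich/releases.atom
```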

I have been using watchtower for a few years. No problems with auto updates so far, but keep your backup handy.

[-] BrightCandle@lemmy.world 5 points 5 months ago

It really depends on the project. Some take breaking changes seriously, avoid them, and auto-migrate; others will throw them out in "minor" releases, and there might be a lot of breaking changes but you only run into one that impacts you occasionally. I typically don't want containers that are going to be a lot of work to keep up to date, so I jettison projects that have unreliable releases for whatever reason, and if they put out a breaking change it's a good time to re-evaluate whether I want that container at all and look at alternatives.

So no, it's not safe, but depending on the project it actually can be.

[-] GravitySpoiled@lemmy.ml 4 points 5 months ago

That's what I do as well, even with Immich. It may break, but it's usually just a simple change in the env file.

[-] MangoPenguin@lemmy.blahaj.zone 4 points 5 months ago

No, but you should already have good backups in place (right??) so restoring if something breaks isn't too hard.

I have watchtower configured to update most, but not all containers.

It runs after the nightly backup of everything, so if something explodes I've got a recent backup to revert to. I also don't automatically update certain types of containers (databases, critical infrastructure, etc.), so that the blast radius of a bad update when I'm not there is limited.
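A sketch of how that exclusion can work, using Watchtower's label-enable mode (image names are examples):

```
# Run Watchtower so it only touches containers that opt in via label.
docker run -d \
    --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e WATCHTOWER_LABEL_ENABLE=true \
    containrrr/watchtower

# Opt a container in; databases etc. simply don't get the label.
docker run -d \
    --label com.centurylinklabs.watchtower.enable=true \
    --name some-app some/image
```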

In the last ~3 years I've had exactly zero instances of "oops, shit's fucked!", but I also don't run anything that's in a massive state of flux and constantly shipping breaking changes (see: Immich).

[-] Konraddo@lemmy.world 3 points 5 months ago

Depends on the application really. For example, I don't need to update Jellyfin and the arrs as soon as the new updates drop. They work just fine and I'm not waiting for any particular fixes.

[-] Oisteink@feddit.nl 3 points 5 months ago* (last edited 5 months ago)

Basically why I feel more comfortable with LXC than Docker for my home lab services. It feels more like managing a VM.

We run a good mix of Docker, VMs and bare metal at work; no containers are auto-updated.

[-] mhzawadi@lemmy.horwood.cloud 2 points 5 months ago

Only if you're happy that you could get a duff build and kill the service. I now watch with https://newreleases.io/ and update as needed.

[-] catloaf@lemm.ee 1 points 5 months ago

I've been doing it for a few years and haven't had any issues. The risk/reward decision is yours.

[-] anzo@programming.dev 1 points 5 months ago

I'm using github.com/mag37/dockcheck for this, with its "-d N" argument. There's a tradeoff between stability and security, and you need to decide for yourself; it will also depend on what services you're hosting. For example, Nextcloud and Immich would be disastrous under such a regime.
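For example, something like this in cron (the path is a placeholder):

```
# Weekly run; -d 7 only updates to images that have been out for at least 7 days.
0 5 * * 1  /opt/dockcheck/dockcheck.sh -d 7
```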
