this post was submitted on 14 Jul 2023
647 points (95.6% liked)
Fediverse
This is a concern, but luckily this isn't required. I set up hobbit.world to host my Tolkien-related communities. It only costs $6/month, plus $35/year for the domain name, to host a tiny instance like this. I don't need to depend on anyone but my hosting provider.
To be safe I should download backups once a month or so.
But the point is that for big communities that people put a lot of time into, there should be an instance for each one owned by one of the mods.
Edit: Meant to reply to the person concerned about the centralization of communities.
Please do it more often if you have users other than yourself. One backup on the same server is barely a backup at all.
Fair enough. I'll look into automating it using some sort of storage from another provider.
Backblaze is fairly cheap but can be slow to get data from.
Even just a cronjob or scheduled task to download the backups to a machine at another location would be a big improvement. Then you can do it far more often because it's automated.
But personally I like to have both a copy on a PC and a cloud backup, in addition to the server.
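For example, pulling the backups off-site can be a single crontab entry on a machine at another location (the host, user, and paths here are assumptions, not anyone's actual setup):

```shell
# On the off-site machine: pull the server's backup directory nightly at 03:30
# over ssh. -a preserves permissions/timestamps, -z compresses in transit.
30 3 * * * rsync -az user@hobbit.world:/home/user/lemmy-backups/ /srv/backups/hobbit.world/
```

Because rsync only transfers changed files, running this frequently costs very little after the first sync.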
I'm using the easy Lemmy script to run the Docker instance. How do I take a backup of a running Docker instance?
The backups I've done so far are full server backups, but I don't have a way to automate that.
The page here explains getting a database dump on a running instance (and how to restore): https://join-lemmy.org/docs/administration/backup_and_restore.html
Then just back up the other files in the volumes directory where Lemmy is installed (everything except postgres, which is what the database dump does).
The pictrs volume includes both the uploaded images and the image cache. I have no idea how to separate out the uploaded images so you don't have to back up the cache, I just back it all up.
This is the bash script I use to create backups. It produces very small zip files, so it's quite efficient. I set up a cron job to run it every 3 hours.
I figured out how to do this by running `docker exec` against the container directly, but that's not ideal for a script.
Using docker compose it just fails with: `Service "postgres" is not running container #1`
I can see lemmy-easy-deploy if I do `docker compose ls`.
The service name is `postgres` in the `docker-compose.yml` file. Any idea what the issue might be?
Where is this lemmy-easy-deploy? I haven't seen that before, maybe if I read how it works I can figure out what's wrong
https://github.com/ubergeek77/Lemmy-Easy-Deploy
I think you might just need to change the `cd`s to go into the correct directory where the active `docker-compose.yml` file is, which should be in the folder called `live`.
Sadly, no. That's already where I was running it.