16
submitted 1 week ago* (last edited 1 week ago) by NekoKoneko@lemmy.world to c/selfhosted@lemmy.world

I have a 56 TB local Unraid NAS that is parity protected against a single drive failure. I figure a single drive failing and being parity recovered covers data loss 95% of the time, but I'm always concerned about two drives failing at once, or a site- or system-wide disaster that takes out the whole NAS.

For other larger local hosters who are smarter and more prepared, what do you do? Do you sync it off site? How do you deal with cost and bandwidth needs if so? What other backup strategies do you use?

(Sorry if this standard scenario has been discussed - searching didn't turn up anything.)

top 29 comments
[-] dmention7@midwest.social 7 points 1 week ago

Personally I deal with it by prioritizing the data.

I have about the same total size Unraid NAS as you, but the vast majority is downloaded or ripped media that would be annoying to replace, but not disastrous.

My personal photos, videos and other documents which are irreplaceable only make up a few TB, which is pretty managable to maintain true local and cloud backups of.

Not sure if that helps at all in your situation.

[-] Burninator05@lemmy.world 1 points 1 week ago

I have the data I actually care about in a RAIDZ1 array with a hot standby, and it is synced to the cloud. The rest (the vast majority) is in a RAIDZ5. If I lose it, I "lose" it. It's recoverable if I decide I want it again.

[-] GenderNeutralBro@lemmy.sdf.org 4 points 1 week ago

You'll think I'm crazy, and you're not wrong, but: sneakernet.

Every time I run the numbers on cloud providers, I'm stuck with one conclusion: shit's expensive. Way more expensive than the cost of a few hard drives when calculated over the life expectancy of those drives.

So I use hard drives. I periodically copy everything to external, encrypted drives. Then I put those drives in a safe place off-site.
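In case it helps anyone copy the approach, a rough sketch of what one pass looks like, assuming a LUKS-encrypted external drive (the device name, mount point, and source path below are placeholders, not my actual setup):

#!/bin/bash
set -euo pipefail

# Placeholders for illustration; adjust to your drive and data paths.
DEVICE=/dev/sdX1
MOUNTPOINT=/mnt/offsite
SOURCE=/mnt/user

# Unlock and mount the LUKS-encrypted external drive.
cryptsetup open "$DEVICE" offsite_backup
mount /dev/mapper/offsite_backup "$MOUNTPOINT"

# Mirror everything; --delete keeps the copy an exact mirror of the source.
rsync -aHAX --delete "$SOURCE/" "$MOUNTPOINT/"

# Unmount and lock the drive before carrying it off-site.
umount "$MOUNTPOINT"
cryptsetup close offsite_backup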

On top of that, I run much leaner and more frequent backups of more dynamic and important data. I offload those smaller backups to cloud services. Over the years I've picked up a number of lifetime cloud storage subscriptions from not-too-shady companies, mostly from Black Friday sales. I've already gotten my money's worth out of most of them and it doesn't look like they're going to fold anytime soon. There are a lot of shady companies out there so you should be skeptical when you see "lifetime" sales, but every now and then a legit deal pops up.

I will also confess that a lot of my data is not truly backed up at all. If it's something I could realistically recreate or redownload, I don't bother spending much of my own time and money backing it up unless it's, like, really really important to me. Yes, it will be a pain in the ass when shit eventually hits the fan. It's a calculated risk.

I am watching this thread with great interest, hoping to be swayed into something more modern and robust.

[-] irmadlad@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

That is old-old-school. It works tho. You have to be a bit scheduled about it, so it covers current and future important data. IIRC AWS built a 100-petabyte unit and a truck to haul it around to do basically the same thing, just in much larger amounts.

[-] unit327@lemmy.zip 2 points 1 week ago* (last edited 1 week ago)

I use the AWS S3 Deep Archive storage class, $0.001 per GB per month. But your upload bandwidth really matters in this case; I only have a subset of the most important things backed up this way, otherwise it would take months just to upload a single backup. Using rclone sync instead of just uploading the whole thing each time helps, but you still have to get that first upload done somehow...

I have a complicated system where:

  • borgmatic backups happen daily, locally
  • those backups are stored on a btrfs subvolume
  • a python script will make a read-only snapshot of that volume once a week
  • the snapshot is synced to s3 using rclone with --checksum --no-update-modtime
  • once the upload is complete the btrfs snapshot is deleted

I've also set up encryption in rclone so that all the data is encrypted and unreadable by AWS.
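To make the weekly snapshot-and-sync step concrete, a simplified shell version of what the Python script does would look roughly like this (the paths, subvolume names, and rclone remote name below are placeholders):

#!/bin/bash
set -euo pipefail

# Placeholder paths and remote name for illustration.
BACKUP_SUBVOL=/mnt/backups/borgmatic        # btrfs subvolume holding the borgmatic backups
SNAPSHOT=/mnt/backups/.weekly-snapshot      # where the read-only snapshot goes
REMOTE=s3-crypt:deep-archive-bucket         # rclone crypt remote wrapping S3 Deep Archive

# Take a read-only btrfs snapshot so the upload sees a consistent view.
btrfs subvolume snapshot -r "$BACKUP_SUBVOL" "$SNAPSHOT"

# Sync the snapshot to the encrypted remote. --checksum avoids re-uploading
# unchanged files; --no-update-modtime skips pointless metadata rewrites.
rclone sync --checksum --no-update-modtime "$SNAPSHOT" "$REMOTE"

# Once the upload is complete, drop the snapshot.
btrfs subvolume delete "$SNAPSHOT"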

[-] quick_snail@feddit.nl 1 points 1 week ago

Don't do this. It's a god damn nightmare to delete

[-] Cyber@feddit.uk 2 points 1 week ago

What's your recovery needs?

It's ok to take 6 months to back up to a cloud provider, but do you need all your data to be recovered in a short period of time? If so, cloud isn't the solution; you'd need a duplicate set of drives nearby (but not close enough for the same flood, fire, etc.).

But, if you're ok waiting for the data to download again (and check the storage provider costs for that specific scenario), then your main factor is how much data changes after that initial 1st upload.

[-] billwashere@lemmy.world 2 points 1 week ago
[-] Cyber@feddit.uk 1 points 1 week ago

In a different location

[-] worhui@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

LTO tape. But I only have 15 TB.

It quickly becomes cost effective when you actually need the data to be safe. Far easier to have off-site backups. I have never had a problem, but I like to have an offline backup. Most of the time my data is static, so I am mostly just backing up project files and changes.

If you have 40+ TB of dynamic data I can't help there.

Edit: I buy used drives that are usually 2 generations old, so I got LTO-5 drives when LTO-7 was new. The used drives may be less reliable, but they can be 1/10th the price of the newest ones.
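If anyone's curious what using the drive actually looks like: on Linux it's usually just tar pointed at the tape device. A rough sketch (the device node and paths are placeholders, and newer LTO generations can alternatively be formatted with LTFS and mounted like a normal filesystem):

# Rewind the tape, then write a project directory to it.
mt -f /dev/nst0 rewind
tar -cvf /dev/nst0 /mnt/projects

# To restore later: rewind again and extract somewhere.
mt -f /dev/nst0 rewind
tar -xvf /dev/nst0 -C /mnt/restore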

[-] quick_snail@feddit.nl 2 points 1 week ago

Tape or backblaze

For me, I only back up data I can't replace, which is a small subset of the capacity of my NAS. Personal data like photos, password manager databases, personal documents, etc. get locally encrypted, then synced to a cloud storage provider. I have my encryption keys stored in a location that's automatically synced to various personal devices and one off-site location maintained by a trusted party. I have the backups and encryption key sync configured to keep n old versions of the files (where the value of n depends on how critical the file is).

Incremental synchronization really keeps the bandwidth and storage costs down, and the amount of data I'm backing up makes file-level backup a very reasonable option.
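As one example of how the versioning can work, rclone's crypt remote plus --backup-dir gives you locally encrypted uploads that keep old copies instead of overwriting them (the remote names and layout here are placeholders, not necessarily what I run):

# Sync personal documents to an encrypted remote; anything changed or deleted
# gets moved into a dated "old versions" folder rather than being lost.
rclone sync /mnt/user/documents crypt-remote:documents \
    --backup-dir "crypt-remote:documents-old/$(date +%F)"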

If I wanted to back up everything, I would set up a second system off-site and run backups over a secure tunnel.

[-] Shadow@lemmy.ca 2 points 1 week ago

I don't. Of my 120 TB, I only care about the 4 TB of personal data, and I push that to a cloud backup. The rest can just be downloaded again.

[-] NekoKoneko@lemmy.world 0 points 1 week ago

Do you have logs or software that keeps track of what you'd need to redownload? A big stress for me with that method is remembering or keeping track of what was lost when neither I nor the software can even see the filesystem anymore.

[-] Sibbo@sopuli.xyz 1 points 1 week ago

If you can't remember what you lost, did you really need it to begin with?

Unless it's personal memories of course.

[-] Onomatopoeia@lemmy.cafe 1 points 1 week ago* (last edited 1 week ago)

I can't remember the name of an Excel spreadsheet I created years ago, which has continually matured with lots of changes. I often have to search for it among the many I have for different purposes.

Trusting your memory is a naive, amateur approach.

[-] tal@lemmy.today 1 points 1 week ago* (last edited 1 week ago)

I don't know of a pre-wrapped utility to do that, but assuming that this is a Linux system, here's a simple bash script that'd do it.

#!/bin/bash

# Set this.  Path to a new, not-yet-existing directory that will retain a copy of a list
# of your files.  You probably don't actually want this in /tmp, or
# it'll be wiped on reboot.

file_list_location=/tmp/storage-history

# Set this.  Path to location with files that you want to monitor.

path_to_monitor=path-to-monitor

# If the file list location doesn't yet exist, create it and initialize a git
# repo with a "master" branch (newer git may otherwise default to "main").
if [[ ! -d "$file_list_location" ]]; then
    mkdir "$file_list_location"
    git -C "$file_list_location" init -b master
fi

# In case someone's checked out things at a different time, go back to master.
# (On the very first run this checkout fails harmlessly, since nothing has been
# committed yet; the script carries on and makes the first commit below.)
git -C "$file_list_location" checkout master
# Write a sorted list of every file under the monitored path.
find "$path_to_monitor" | sort > "$file_list_location/files.txt"
git -C "$file_list_location" add "$file_list_location/files.txt"
git -C "$file_list_location" commit -m "Updated file list for $(date)"

That'll drop a text file at /tmp/storage-history/files.txt with a list of the files at that location, and create a git repo at /tmp/storage-history that will contain a history of that file.

When your drive array kerplodes or something, your files.txt file will probably become empty if the mount goes away, but you'll have a git repository containing a full history of your list of files, so you can go back to a list of the files there as they existed at any historical date.

Run that script nightly out of your crontab or something ($ crontab -e to edit your crontab).
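For example, after saving the script somewhere like /usr/local/bin/track-files.sh (the name and path are just placeholders) and marking it executable, a nightly crontab entry could be:

# Run the file-list snapshot every night at 03:00
0 3 * * * /usr/local/bin/track-files.sh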

As the script says, you need to choose a file_list_location (not /tmp, since that'll be wiped on reboot), and set path_to_monitor to wherever the tree of files is that you want to keep track of (like, /mnt/file_array or whatever).

You could save a bit of space by adding a line at the end to remove the current files.txt after generating the current git commit if you want. The next run will just regenerate files.txt anyway, and you can use git to regenerate a copy of the file for any historical day you want. If you're not familiar with git, $ git log to find the hashref for a given day, then $ git checkout <hashref> to move to where things were on that day.

EDIT: Moved the git checkout up.

[-] BakedCatboy@lemmy.ml 1 points 1 week ago

My *arrstack DBs are part of my backed up portion, so they'll remember what I have downloaded in my non-backed up portion.

[-] PieMePlenty@lemmy.world 1 points 1 week ago

Not all data is equal. I back up the things I absolutely cannot lose and yolo everything else. My love for this hobby does not extend to buying racks of hard drives.

[-] INeedMana@piefed.zip 1 points 1 week ago

I've been following this post since the first comment.

And I have just put together my own RAID1 1 TB NAS. I didn't think 1 TB would serve me forever, more like "a good start".

But the numbers I've been seeing in here... you guys are nuts 😆

[-] trk@aussie.zone 1 points 1 week ago

I have a 120TB unraid server at home, and a 40TB unraid server at work. Both use 2 x parity disks.

The critical work stuff backs up to home, and the critical home stuff backs up to work.

The media is disposable.

Both servers then back up to CrashPlan on separate accounts - work uses the Australian server on a business account, home uses the US server on a personal account.

I figure I should be safe unless Australia and the US are nuked simultaneously.... At which point my data integrity is probably not the most pressing issue.

[-] sefra1@lemmy.zip 1 points 1 week ago

Well, first: while RAID is great, it's not a replacement for backups. RAID is mostly useful if uptime is imperative, but it does not protect against user errors, software errors, fs corruption, ransomware, or a power surge killing the entire array.

Since uptime isn't an issue on my home NAS, instead of parity I simply have cold backups which (supposedly) I plug in from time to time to scrub the filesystems.
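The scrub itself is just plugging the disk in, mounting it, and kicking off the filesystem's own scrub. For example with btrfs (the device and mount point are placeholders; with ZFS it would be zpool import plus zpool scrub instead):

# Mount the cold backup drive and scrub it; -B waits and reports errors at the end.
mount /dev/sdX1 /mnt/coldbackup
btrfs scrub start -B /mnt/coldbackup
umount /mnt/coldbackup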

If an online drive dies I can simply restore it from backup and accept the downtime. For my anime I have just one single backup, but for my most important files I have 2 backups just in case one fails. (Unfortunately both are on-site.)

On the other hand, for a client of mine's server where uptime is imperative, in addition to RAID I have 2 automatic daily backups (ideally one should be off-site but isn't; at least they are on different floors of the same building).

[-] Mister_Hangman@lemmy.world 1 points 1 week ago

Definitely following this


[-] randombullet@programming.dev 1 points 1 week ago

I have 3 main NASes

78TB (52TB usable) hot storage. ZFS1

160TB (120TB) warm storage ZFS2

48TB (24TB) off site. ZFS mirror

I rsync every day from hot to off site.

And once a month I turn on my warm storage and sync it.

Warm and hot storage is at the same location.

Off site storage is with a family friend who I trust. Data isn't encrypted aside from in transit. That's something else I'd like to mess with later.

Core vital data, about 10 TB of it, is sprinkled around different continents. I have 2 nodes in 2 countries for vital data. These are with family.

I think I have 5 total servers.

Cost is a lot obviously, but pieced together over several years.

The world will end before my data gets destroyed.

[-] danielquinn@lemmy.ca 0 points 1 week ago

Honestly, I'd buy 6 external 20 TB drives and make 2 copies of your data on them (3 drives each), then leave them somewhere safe but not at home. If you have friends or family able to store them, that'd do, but a safety deposit box is also good.

If you want to make frequent updates to your backups, you could patch them into a Raspberry Pi and put it on Tailscale, then just rsync changes regularly. Of course that means wherever you're storing the backup needs room for such a setup.
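As a sketch of that ongoing sync (the hostnames and paths are made up; it assumes both ends are on the same tailnet so the Pi is reachable by its Tailscale name):

# Push only the changes to the Pi holding the off-site drives, over Tailscale.
rsync -aH --delete /mnt/user/ backup@offsite-pi:/mnt/backup/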

I often wonder why there isn't a sort of collective backup sharing thing going on amongst self hosters. A sort of "I'll host your backups if you host mine" sort of thing. Better than paying a cloud provider at any rate.

[-] Joelk111@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

That NAS software company Linus (of Linus Tech Tips) funded has a feature like this planned, I think.

An open-source standalone implementation would be dope as hell. Sure, it'd mean you'd need to double your NAS capacity (as you'd have to provide as much storage as you use), but that's way easier than building a second NAS and storing/maintaining it somewhere else, or constantly paying for and managing a cloud backup.

[-] kaotic@lemmy.world 0 points 1 week ago

Backblaze offers unlimited data on a single computer, $99/year.

There might be some fine print that excludes your setup, but it might be worth investigating.

https://www.backblaze.com/cloud-backup/pricing
