I'm installing 3x 2TB HDDs into my desktop PC. The drives are like-new.
They will replace an ancient 2TB drive that is failing. The primary purpose is data storage: media, torrents, and some installed games. Losing the drives to failure would not be catastrophic, just annoying.
So now I'm faced with how to set up these drives. I think I'd like to set up RAID to present the drives as one big volume. Here are my thoughts, and hopefully someone can help me make the right choice:
- RAID0: I'd have been fine with the risk on 2 drives, but striping 3 drives seems like tempting fate. Then again, it might be fine.
- RAID1: Lose half the capacity, but the setup is dead simple. Left wondering why I'd pick this over RAID10.
- RAID10: Lose half the capacity... left wondering why I'd pick this over RAID1.
- RAID5: Write hole problem in the event of a sudden shutoff, but I'm not running a data center that needs high reliability. I should probably buy a UPS to mitigate power outages anyway. Would the parity calculations make this option slow?
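To put rough numbers on the capacity trade-offs above: with btrfs, raid1 means two copies of every chunk spread across whatever devices exist, so three 2TB drives still yield about 3TB usable. And raid10 turns out not to be on the table at all here, since mkfs.btrfs requires at least four devices for that profile. A quick back-of-envelope (drive count and size are just the ones from my setup):

```shell
# Back-of-envelope usable capacity (in TB) for 3 x 2 TB drives.
# raid10 is omitted: btrfs refuses to create it with fewer than 4 devices.
drives=3
size=2
raid0=$((drives * size))        # pure striping, no redundancy
raid1=$((drives * size / 2))    # two copies of every chunk
raid5=$(((drives - 1) * size))  # one drive's worth of parity
echo "raid0=${raid0}TB raid1=${raid1}TB raid5=${raid5}TB"
# prints: raid0=6TB raid1=3TB raid5=4TB
```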
I've also ruled out things like ZFS or mdadm, because I don't want to complicate my setup. Straight btrfs is straightforward.
I found this page where the author analyzed the performance of different HDD RAID levels, though not with btrfs: https://larryjordan.com/articles/real-world-speed-tests-for-different-hdd-raid-levels/ (there's a PDF link with harder numbers in the post). So I'm not sure whether his analysis is even applicable to me.
If anyone has thoughts on what RAID level is appropriate given my use-case, I'd love to hear it! Particularly if anyone knows about RAID1 vs RAID10 on btrfs.
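For concreteness, the mkfs invocations I'd be choosing between look something like this (a sketch only; the device names are placeholders, so anyone copying this should check lsblk first, and note the btrfs docs still flag the raid5/6 profiles as unstable):

```shell
# Striped, ~6 TB usable, no redundancy (metadata still mirrored):
mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Mirrored: every data chunk stored twice, ~3 TB usable across 3 drives:
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Parity: ~4 TB usable, but subject to the write-hole caveat above:
mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# After mounting, check real allocation and usable space:
btrfs filesystem usage /mnt/storage
```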
It kinda seems like you don't need to be using btrfs. If the possibility of kernel updates breaking things scared you off ZFS, a system that has been in very widespread use for (just going off the dome) twice as long as btrfs has been around, why do you think using btrfs is somehow okay?
I’m not trying to suggest you use zfs either.
E: I went and looked, and ZFS has been in widespread use for around twice as long as btrfs has been marked as stable. It's worth mentioning too that, since being marked as stable, btrfs has suffered from a silent data corruption bug.
I'm not sure what you are suggesting as the alternative, and I don't know what silent btrfs corruption bug you're referring to either. Btrfs has been widely deployed in enterprise and personal environments for years, and I can't find evidence of data loss caused by the filesystem itself.
Here's the Debian mailing list thread about it. It made a splash a little while back. It might be down to a combination of specific kernels and btrfs, but nonetheless that's exactly the situation you describe yourself trying to avoid.
Not a slag against btrfs or a lauding of zfs, just trying to point out that you might be barking up the wrong tree if you’re looking for stability and simplicity.
I use ext4 volumes in mergerfs with nightly snapraid parity snapshots for my own data that doesn’t matter. Migrated to that system from zfs looking for simplicity, stability and straightforward recovery and gained increased drive life and lowered power consumption as well. Your mileage may vary.
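For anyone curious, a minimal sketch of that kind of layout (the mount points, drive names, and paths here are illustrative assumptions, not my actual config):

```shell
# Pool two ext4 data drives into one view; new files go to the drive
# with the most free space (mfs create policy):
mergerfs -o cache.files=off,category.create=mfs \
    /mnt/disk1:/mnt/disk2 /mnt/pool

# /etc/snapraid.conf: one dedicated parity drive protects the data drives
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
EOF

# Nightly parity update, e.g. from /etc/crontab:
# 0 3 * * * root snapraid sync
```

The nice property is that every drive remains a plain ext4 filesystem, so if one dies you only lose (and only have to restore) what was on that drive.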