If there's an offline backup, they could create a degraded RAIDz1 with the 2 12T disks, copy the data from the 6Ts over, create the 12T linear volume out of the 6Ts, add it to the degraded RAIDz1 and wait for it to resilver. If no hardware fails and they don't punch in a wrong keystroke, it should work.
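Since RAIDz1 needs at least three members, the usual trick for starting it degraded is a sparse placeholder file as the third vdev, offlined right away. A rough sketch of the whole migration, with hypothetical pool/device names and assuming two 6T drives behind the linear volume:

```
# Sparse placeholder "disk" so the pool can be created with 3 members (takes no real space)
truncate -s 12T /root/placeholder-12T.img

# Create the pool with the two 12T disks plus the placeholder, then offline the
# placeholder so the pool runs degraded and nothing is ever written to the file
zpool create tank raidz1 /dev/disk/by-id/ata-12T-A /dev/disk/by-id/ata-12T-B /root/placeholder-12T.img
zpool offline tank /root/placeholder-12T.img

# ... copy the data from the 6T drives into the degraded pool (rsync, zfs send, etc.) ...

# Build a ~12T linear LVM volume out of the now-empty 6T drives
pvcreate /dev/sdc /dev/sdd
vgcreate vg6t /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n linear12t vg6t

# Swap the placeholder for the linear volume and let ZFS resilver onto it
zpool replace tank /root/placeholder-12T.img /dev/vg6t/linear12t
zpool status tank    # watch the resilver
```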
This is true. I'm using 8 external USB drives in two RAIDz1s and I had to make sure their controllers don't overheat. For example, I had 4 WD Elements standing vertically, stacked next to each other. The inner two's controllers would overheat during the initial data transfer and disconnect. Spacing them apart resolved this for my ambient environment. In the other pool, I had a new WD Elements overheat on its own, without picking up heat from its neighbours. I resolved that by adhering a small heatsink to the SATA-USB controller in the enclosure, and I drilled a hole in the enclosure immediately above the heatsink for better ventilation. I later applied this mod to the rest of the drives of the same model.
Crucially, however, between the issues above and accidental cable unplugging, ZFS hasn't lost any data or caused any undue headache. If anything, getting back to a working state has been easier on some occasions, as it would automatically detect that a missing drive is back, resilver if needed and go on its merry way.

The headache I've observed most of the time has been of this sort: a message that the zpool is not healthy, a drive has shown errors and/or gone missing; resolve the drive issues if any, reconnect the drive; no affected applications, no downtime. The much rarer issue, probably twice over the last 5 years, has been of this sort: applications are down, the zpool isn't reading/writing or is missing, more than one drive is disconnected due to a cable snafu; shut down, reconnect the drives, boot; ZFS detects the drives and proceeds as if nothing happened.

All in, the number of occasions on which I've had to manipulate ZFS over the last 5 years is around 5, most of them during the initial data transfer. The previous LVM + mdraid setup I had required more work to get back in shape after a drive was kicked out for one reason or another. So yes, USB can definitely present issues that you wouldn't see in an internal application, especially if some of your USB enclosure controllers are shit, but in my anecdotal experience ZFS is very capable of handling those gracefully and with less manual intervention than the standard Linux solutions. If anything, ZFS has been less sensitive to hardware problems.
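For the common case, "recovery" is mostly just reading zpool status and maybe clearing the error counters once the drive is back. Something like this, with hypothetical pool/device names:

```
zpool status -x                         # shows which pool is unhealthy and which vdev is FAULTED/REMOVED
# ...fix the cable / spacing / power issue and reconnect the drive...
zpool online tank usb-WD_Elements_XYZ   # only needed if the vdev was marked OFFLINE
zpool clear tank                        # reset the error counters once the drive is back
zpool status tank                       # a resilver starts automatically if any writes were missed
```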
I feel like what you're saying here, in effect, is "USB connected drives in a RAID are a bad idea, but if you're going to do it, ZFS is the way to go."
Hahaha. Good one!
Well, not quite. More like "USB-connected drives in RAID could be less reliable than internal ones, and software can deal with it. ZFS makes that easier than LVM+mdraid." The downside of LVM+mdraid in my experience is that it needs more commands typed in to repair an array if something's gone wrong. It probably doesn't break much more often than ZFS would under the same hardware conditions, and it probably can recover from the same conditions ZFS could. USB drives can present more failure modes than internal ones, but one of the points of RAID is to mitigate hardware failures. So I'm treating USB drives as just shittier drives whose shittiness the software should be able to hide. So far that has been borne out in my anecdata: I've used both LVMRAID (LVM + built-in mdraid) and ZFS with questionable USB drives, and both have handled them without data loss and with rare downtime, less than once a year. ZFS simply requires less attention.

With all of that said, ZFS does of course provide data integrity checking and correction, which is a significant plus over LVM+mdraid. It's already saved me from data corruption due to RAM I had no idea had a problem: RAM that passed Memtest86+'s first pass. Little did I know that it fails on subsequent passes... Yes, the first and subsequent passes are different. So I'd use ZFS with USB or internal disks whenever I have the choice. 😂
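To make "more commands" concrete, this is roughly the difference I mean (array/pool/device names hypothetical): with mdraid a dropped member usually has to be re-added by hand before the rebuild, while ZFS typically only needs its error log cleared, and a periodic scrub is what catches the silent corruption mdraid won't notice.

```
# LVM+mdraid: a kicked-out USB drive normally needs a manual re-add, then a rebuild
mdadm --manage /dev/md0 --re-add /dev/sdX1   # or --add if the write-intent bitmap can't catch it up
cat /proc/mdstat                             # watch the rebuild

# ZFS: after reconnecting the same drive there's often nothing to type;
# at most, clear the logged errors and let it resilver on its own
zpool clear tank
zpool status tank

# The integrity part: a scrub reads every block, verifies checksums and
# repairs from redundancy where possible
zpool scrub tank
```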