Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Quite the opposite. Use drives from as many different manufacturers as you can, especially when buying them at the same time. You want to avoid similar lifecycles and similar potential fabrication defects as much as possible, because those things increase the likelihood that the drives will fail close together in time - particularly under the stress of rebuilding after the first one fails.
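To put a rough number on that intuition: if drives from the same batch can share a latent defect, the chance of two of them failing together can be far higher than the independent case would suggest. A toy sketch with made-up probabilities (both `p_ind` and `p_batch` are assumptions for illustration, not measured rates):

```python
# Toy model: a drive fails during the rebuild window either
# "on its own" (probability p_ind), or because the whole batch
# shares a latent defect that bites every drive in it
# (probability p_batch that such a defect exists).
p_ind = 0.02    # assumed independent per-drive failure chance
p_batch = 0.01  # assumed chance of a shared batch-wide defect

def both_fail(same_batch: bool) -> float:
    """Probability that two specific drives both fail."""
    if same_batch:
        # Fail together via the shared defect, or, if there is
        # no defect, fail independently anyway.
        return p_batch + (1 - p_batch) * p_ind**2
    return p_ind**2  # different batches: independent failures

print(f"{both_fail(False):.6f}")  # different batches: 0.000400
print(f"{both_fail(True):.6f}")   # same batch:        0.010396
```

With these assumed numbers, two same-batch drives are roughly 25x more likely to die together than two unrelated ones - the shared-defect term dominates.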
To the best of my knowledge, this "drives from the same batch fail at around the same time" folk wisdom has never been demonstrated in statistical studies. But, I mean, mixing drive models is certainly not going to do any harm.
It may, performance-wise, but usually not enough to matter for a small self-hosting server.
I wouldn't mix 5400 rpm drives with 7200 rpm drives, but if the rpm & sizes are the same, there won't be any measurable performance loss.
If everything went fine during production you're probably right. But there have definitely been batches of hard disks with production flaws which caused all drives from that batch to fail in a similar way.
I know it's only what I've experienced, but I've been through 2 weeks of hell from EMC drives failing at the same time because Dell didn't change up serials. Had 20 RAID drives all start failing within a few days of each other, and all were consecutive serial numbers.
If I had a dollar for every time rebuilding a RAID array after one failed drive caused a second drive failure in the array in less than 24 hours.... I'd probably buy groceries for a week.
When using drives from the same model and batch?
Yup. Same age, same design, same failures... and array rebuilds are super intense workloads that often force a lot of random reads and run the drive at 100% load for many hours.
I've heard just in general. The resilvering process is hard on all the remaining drives for an extended period of time.
So you're saying I should be running RAIDz2 instead of RAIDz1? You're probably right. 😂
I made that switch a few years ago for that reason.
That said, as the saying goes, RAID is not a backup; it should never be the only thing standing between you and losing all your data. RAID is effectively just one really dependable hard drive, but it's still a single point of failure.
So you're saying I should be running JBOD with backups instead of RAIDz1? You're probably right. 🤭
As long as you're ok with it being way less dependable, and having to rebuild it from scratch more often 😉.
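The RAIDz1-vs-RAIDz2 trade-off above can be sketched as a back-of-the-envelope calculation. Assuming (purely for illustration) that each surviving drive fails independently with some probability during the rebuild window, the array is lost if more failures occur than there is parity left:

```python
from math import comb

def loss_prob(n_remaining: int, parity_left: int, p: float) -> float:
    """Probability of losing the array during a rebuild: it's lost
    if more than `parity_left` of the remaining drives fail before
    the rebuild finishes, with each failing independently with
    probability p (a binomial tail)."""
    return sum(
        comb(n_remaining, k) * p**k * (1 - p)**(n_remaining - k)
        for k in range(parity_left + 1, n_remaining + 1)
    )

# 6-drive pool, one drive already dead, and an assumed 2% chance
# that each remaining drive dies during the stressful rebuild.
p = 0.02
raidz1 = loss_prob(5, 0, p)  # no parity left: any failure kills it
raidz2 = loss_prob(5, 1, p)  # one parity left: need 2+ failures

print(f"RAIDz1 loss risk: {raidz1:.4f}")  # ~0.0961
print(f"RAIDz2 loss risk: {raidz2:.4f}")  # ~0.0038
```

Even with independent failures, the extra parity drive cuts the rebuild-window risk by more than an order of magnitude here - and the same-batch correlation discussed above only widens that gap, since correlated failures push the real risk for single-parity setups higher than this model shows.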
I don't know if you're talking about the sample of cases you've personally witnessed, or the population of all NASes in the world. If the former, that sounds significant. If the latter, it sounds like it's probably not something to worry about.