Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
No matter how you go about it, getting these drives set up to be reliable isn't going to be cheap. If you want to run without an enclosure, at the very least (and assuming you are running Linux) you are going to want something like an LSI SAS card with external ports, preferably a 4-port card (around $50-$100; each port will run four drives) that you can flash into IT mode. You will need matching splitter cables (3x $25 each), and most importantly a VERY solid power supply, preferably something with redundancy (probably $100 or more). These prices are based on used hardware from eBay, except for the cables, and you'll have to do considerable research to learn which SAS cards can be flashed and how to flash them.
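For reference, the flashing procedure with Broadcom/LSI's sas2flash tool looks roughly like the sketch below. This is only an outline: the controller number, erase option, and firmware image name (2118it.bin is the 9211-8i IT image, used here as an example) all depend on your exact card and firmware release, so check the flashing guide for your specific model before running anything.

```
# List detected LSI controllers with their current firmware version and mode
sas2flash -listall

# Erase the existing (IR-mode) firmware on controller 0
# -- do NOT reboot or power off between the erase and the flash
sas2flash -o -c 0 -e 6

# Flash the IT-mode firmware image (example filename for a 9211-8i)
sas2flash -o -c 0 -f 2118it.bin
```

Skipping the boot ROM during the flash is a common choice for pure HBA duty, since you generally don't boot from these drives anyway.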
Of course this is very bare-bones: you won't have a case to mount the drives in, and splitter cables from the power supply can be finicky, but with time and experience it can be made to work very well. My current NAS is capable of handling up to 32 external and 8 internal drives, and I'm using 3D-printed drive cages with some cheap SATA2 backplanes to finally get a rock-solid setup. It takes a lot of work and experience to do things cheaply.
Why the immediate jump to IT mode? Sure, ZFS is great, but running ZFS takes a decent chunk of RAM for cache.
What do you consider a fair amount? My current server has 64 GB of RAM, but arc_summary says ZFS is only using 6.35 GB on a system with three ZFS pools totaling over 105 TB of storage under pretty much constant usage.
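For anyone who wants to check or cap this themselves: on OpenZFS the ARC ceiling is the `zfs_arc_max` module parameter, which takes a value in bytes (0 means the default, roughly half of RAM on Linux). A minimal sketch, assuming an 8 GiB cap as the example target:

```shell
# Compute an 8 GiB ARC cap in bytes (zfs_arc_max takes bytes; 0 = default)
arc_max_bytes=$((8 * 1024 * 1024 * 1024))

# This is the line you would put in /etc/modprobe.d/zfs.conf to persist it
echo "options zfs zfs_arc_max=${arc_max_bytes}"
# -> options zfs zfs_arc_max=8589934592
```

After editing the modprobe config, regenerate the initramfs so the limit applies at boot; the current ARC size is visible in the "size" row of /proc/spl/kstat/zfs/arcstats, or via arc_summary as mentioned above.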