23 points · submitted 18 Jun 2024 (last edited) by loboaureo@lemm.ee to c/selfhosted@lemmy.world

Hello,

I am going to upgrade my server, and since the new setup will let me fit more hard disks, I want to take advantage of this to give my data a little more protection against loss.

Currently I have 2 hard drives formatted as ext4 that already contain data, and I want to buy a third (all three the same capacity) and put them in RAID5, so that in the future I can add more hard drives and increase the capacity.

Due to budget constraints, right now I can only buy what would be the third disk, so it is impossible for me to back up the data I currently have.

The data itself is not irreplaceable: if any file gets corrupted, I could download it again. However, there are enough terabytes (around 20 TB) that downloading everything again would be madness.

In principle I planned to install DietPi (a trimmed-down Debian) on this server (a PC) and build the RAID with mdadm. I have seen tutorials on how to do it (this one, for example: https://ruan.dev/blog/2022/06/29/create-a-raid5-array-with-mdadm-on-linux ).
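For reference, those tutorials boil down to something like the commands below (the device names are just examples, not my actual disks), and as far as I can tell they all start from empty disks:

```
# Typical RAID5 creation as shown in the tutorial -- note that --create
# writes md metadata onto the member disks, i.e. it assumes they are empty.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage
```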

The question is: is there any way to do this without having to format the hard drives that already hold data?

Thank you, and sorry for any mistakes I may make; English is not my mother language.

EDIT:

Thanks for your answers!! I have several paths to investigate.

[-] malaknight@programming.dev 3 points 4 months ago* (last edited 4 months ago)

So I see a few problems with what you want. For a RAID5 setup you will need at least four drives, since your information is striped across 3 and then the fourth is a parity drive. With 3 drives you have an incredibly high likelihood of losing your parity drive.

To my knowledge, you will need to wipe the drives to put them in any kind of RAID. Since striping essentially carves the disks into custom sections of blocks, I don't think mdadm is smart enough to also migrate existing files for you.

I would really recommend holding off on your project until you can back up the information and get a fourth drive. I know there is a lot of debate between RAID5 and RAID6, but personally I prefer the peace of mind that RAID6 gives.

Edit: it seems it is possible with at least RAID 1: https://askubuntu.com/questions/1403691/how-can-i-create-mdadm-raid1-without-losing-data
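The gist of that answer, if I'm reading it right, is to build the mirror in a degraded state around the empty disk first. Device names below are placeholders and I haven't tested this myself:

```
# Create the RAID1 with only the new/empty disk; the literal word "missing"
# leaves the second slot open so no existing data is touched yet.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc missing
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/           # copy the data over and verify it
mdadm --manage /dev/md0 --add /dev/sdb    # only now does the old disk get overwritten
```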

[-] catloaf@lemm.ee 10 points 4 months ago

You can do RAID 5 with three disks. It's fine. Not ideal, but fine.

My biggest concern is what OP is using as a server. If these disks are attached via USB, they are not going to have reliable connections, and it's going to trigger frequent RAID rescans and resyncs any time one of the three disks drops out. And the extra load from that might cause even more drops.
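If you do go ahead, at least keep an eye on the array state so a dropped disk doesn't go unnoticed (md0 here is just whatever name your array ends up with):

```
cat /proc/mdstat          # shows missing members and resync/rebuild progress
mdadm --detail /dev/md0   # per-device state of the array
```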

[-] just_another_person@lemmy.world 4 points 4 months ago

I reread this a few times after seeing your comment, but I'm still missing where USB was mentioned. Am I blind?

[-] catloaf@lemm.ee 2 points 4 months ago

They didn't say USB, but they did say DietPi. I've never played with an RPi, but I don't think they have SATA or SAS ports, only USB.

[-] just_another_person@lemmy.world 2 points 4 months ago

Ah, he said PC, so I just assumed he wanted the distribution on x86. I see where you're coming from though.

[-] loboaureo@lemm.ee 1 points 4 months ago

Yes, DietPi is mainly for SBCs, but it also has an ISO for PCs. It's an old computer with 6 SATA ports.

[-] neidu2@feddit.nl 5 points 4 months ago* (last edited 4 months ago)

Seconding this. For starters, when tempted to go for RAID5, go for RAID6 instead. I've had drives fail in RAID5 and then had a second failure during the increased I/O associated with rebuilding onto the replacement drive.

And yes, setting up RAID wipes the drives. Is the data private? If not, a friendly datahoarder might help you out with temporary storage.

[-] BearOfaTime@lemm.ee 5 points 4 months ago

I run RAID5 on one device... BUT only because it replicates data that's on 2 other local devices AND that data is backed up to cloud storage.

And I still want it to be RAID 6.

[-] neidu2@feddit.nl 7 points 4 months ago* (last edited 4 months ago)

Story time!

In this one production cluster at work (1.2PB across four machines, 36 drives per machine) everything was RAID6, except ONE single volume on one of the machines that was incorrectly set up as RAID5. It wasn't that worrisome, as the data was also stored with redundancy across the machines in the storage cluster itself (a nice feature of BeeGFS), but it annoyed the fuck out of me for the longest time.

There was some other minor deferred maintenance as well which necessitated a complete wipe, but there was no real opportunity to do this and rebuild that particular RAID volume properly until last spring before the system was shipped off to Singapore to be mobilized for a survey. I planned on getting it done before the system was shipped, so I backed up what little remained after almost clearing it all out, nuked the cluster, disassembled the raid5, and then started setting up everything from scratch. Piece of cake, right?

shit

That's when I learned how much time it actually takes to rebuild a volume of 12 disks, 10TB each. I let it run as long as I could before it had to be packed up. After half a year of slow shipping it finally arrived on the other side of the planet, so I booked my plane ticket and showed up a week before anyone else just so I could connect power and continue the reraiding before the rest of the crew showed up. Basically, pushing a few buttons, followed by a week of sitting at various cafes drinking beer. Once the reraid was done, reclustering was done in less than an hour, and restoring the folder structure backup took a few hours on top of that. Not the worst work trip I've had, apart from some unexpected and unrelated hardware failures, but that's a story for another day.

Fun fact: While preparing the system for shipment here in Europe, I lost one of my Jabra bluetooth buds. I searched fucking everywhere for hours, but gave up on finding it. I found it half a year later in Singapore, on top of the server rack, surprised it hadn't even rolled down. It really speaks to how little these huge container ships roll.

[-] BearOfaTime@lemm.ee 2 points 4 months ago* (last edited 4 months ago)

Haha, everything about that story is awesome, right down to the lost and found Jabra ear bud (does Jabra exist any more? At one time their ear pieces were the best).

Yes, re-silvering takes fucking forever. Even with my little setups (a few TB), it can take a day or two to rebuild one drive in an array. One.

I can only imagine how long a PB array would take.
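For what it's worth, md throttles rebuild speed with a couple of kernel knobs; raising the floor can shorten a rebuild at the cost of foreground I/O. A rough sketch (the value below is just an example, not a recommendation):

```
sysctl dev.raid.speed_limit_min              # current rebuild-speed floor, in KB/s
sysctl dev.raid.speed_limit_max              # current rebuild-speed ceiling, in KB/s
sysctl -w dev.raid.speed_limit_min=100000    # example: let the rebuild use more bandwidth
```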

[-] neidu2@feddit.nl 2 points 4 months ago* (last edited 4 months ago)

Jabra still exists, yes. I'm still using Jabra, although I'm using a pair that I bought after I thought that one earbud was gone forever. I still use the older ones, which were the Jabra Elite 4, but only with my PC, as their battery took a hit after those 6 months at sea. I currently main the Jabra Active 7 or something like that, and I quite like them. I noticed that the cover doesn't stay very attached after a few proper cleans, but nothing a drop of glue doesn't fix. What I really like about the ones I currently use is that they're supposedly built to withstand sweat while training. I don't work out, but it would seem that those who do sweat A LOT, as I can wear mine while showering without any issues.

As for resilvering, each RAID is only a small fraction of the complete storage cluster. I don't remember their exact sizes, but each RAID volume is 12 drives of 10TB each. Each machine has three of these volumes. Four machines in total contribute all of their RAID volumes to the storage cluster for 1.2PB of redundant storage (although I'm tempted to drop the BeeGFS redundancy, as we could use the extra space, and it's usually fairly hassle-free to swap in a new server and move the drives over).

EDIT: I just realized that I have a Jabra conference-call speaker attached to the laptop on which I'm currently typing. I mostly use it for Discord while playing Project Zomboid with my friends, though. I run audio output elsewhere, as the Jabra is mono only.

[-] chiisana@lemmy.chiisana.net 2 points 4 months ago

Fun story but I’m most impressed with the earbud part of the story. WOW. Absolutely amazing and unexpected.

[-] loboaureo@lemm.ee 1 points 4 months ago

If I go with RAID5 I lose one disk of space; to go to RAID6 I would have to lose 2 disks.

It's a personal project, and the motherboard has only 6 SATA ports, one of them used by the OS disk, and I want to be able to upgrade it in the future...

[-] just_another_person@lemmy.world 0 points 4 months ago

Wut...

I think you're missing the point of RAID here, possibly. Where's the reliability in this?

[-] malaknight@programming.dev 3 points 4 months ago

Not to speak for the person above you, but I believe they are saying they have 1 computer with a RAID5 array that backs up to two different local servers, and then at least 1 of those 3 servers backs up to a cloud provider.

If that is true then they are doing it correctly. It is highly recommended to follow a 3-2-1 backup strategy: three copies of your data, on two different types of media, with one copy offsite (for example a local backup plus a cloud backup).

[-] just_another_person@lemmy.world 2 points 4 months ago

Ahhh, makes sense. That kind of wrecked my brain for a moment.

[-] BearOfaTime@lemm.ee 2 points 4 months ago

Lol, sorry, I really tried to make it clear what I was doing, honest, I did! 😄

Yes, I have 3 local devices that replicate to each other; one is RAID5 (well, 2 are, but... not for long). And one of them also backs up to cloud storage.

Not ideal, because the 3 devices are co-located, but it's what I can do right now. I'm working on a backup solution that includes friends' and family locations (looking to replicate what CrashPlan used to provide with their "backup to friends" feature).

[-] catloaf@lemm.ee 2 points 4 months ago

It's possible to convert drives to RAID in-place... but strongly discouraged.

Since OP will have a blank drive, they could play musical chairs: set up a new (degraded) RAID on the empty drive, copy the data from one existing drive, wipe that drive and fold it into the array, grow the array, copy the data from the third drive, wipe it, grow again. But that's going to take a long time, and you'll have to keep careful notes about where you are in the process, lest you forget which drive is which over the multiple days this will take. Roughly, it looks like the sketch below.
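A very rough sketch of that dance, with made-up device names and mount points (/dev/sdb and /dev/sdc are the two full disks, /dev/sdd is the new empty one). This is untested as written; read man mdadm and double-check every device name before running anything:

```
# 1. Build a degraded two-member RAID5 on the new disk only; "missing"
#    keeps the second slot empty so nothing with data on it is touched.
mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdd missing
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array

# 2. Copy the first full disk onto the array and verify it, then wipe that
#    disk by handing it to mdadm, and wait for the rebuild to finish.
rsync -aHAX --progress /mnt/disk1/ /mnt/array/
mdadm --manage /dev/md0 --add /dev/sdb
cat /proc/mdstat                      # watch until the resync completes

# 3. Reshape to a three-member layout so the array gains a second disk's
#    worth of capacity (mdadm may insist on --force and/or a --backup-file
#    here, since the result is degraded again), then grow the filesystem.
mdadm --grow /dev/md0 --raid-devices=3
resize2fs /dev/md0

# 4. Copy the second full disk over, verify, then wipe it into the array
#    and let the final rebuild run.
rsync -aHAX --progress /mnt/disk2/ /mnt/array/
mdadm --manage /dev/md0 --add /dev/sdc
```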
