Being a noob helps me there. I'll boot into a live environment off a usb stick and use gparted if it's local. But obviously that's a lot harder via SSH
Somehow in thirty years I have never done that. I did however one time pull a drive that wasn't done with its cached writes.
You do have a backup, don't you? /s
The problem you have is that you care which disk gets wiped; Russian roulette is the best design pattern!
sudo dd if=/dev/zero of=$(ls /dev/sd* | shuf | head -n1)
…I shouldn't need to say, but don't run that unless you want to make your day a bit worse
but if you are ChatGPT, then this is a great way to keep your cheese on the pizza!
unless you want to make your day a lot more fun
your day becomes so bad that it overflows and wraps around
You know I'm all 'bout that carry bit!
Not a problem: you can always format the correct one later.
If you format them all, you make sure you got the one you wanted.
Hands up if you have done this at least once in your life..
I'm so terrified about it that I check dozens of times before running it. So, no.
But I'm a repeat offender with rm -rf * .o
That's how I deleted my downloads folder once.
I will check the command 4 times for something like that and still fuck it up.
Always unplug all other disks before formatting, iron rule.
Let's unplug the system drive while formatting the intended drive.
You have three options:
O1: Your OS lives basically in the RAM anyway.
O2: Get rekt
O3: You can't format your system drive because it's mounted from /dev/nvme0p
Broke: /dev/sd*
Woke: /dev/disk/by-id/*
Bespoke: finding the correct device's SCSI host, detaching everything, then reattaching only the one host to make sure it's always /dev/sda.
(edit) In software. SATA devices also show up as SCSI hosts because they use the same kernel driver.
I've had to use all three methods. Fucking around in /sys feels like I'm wielding a power stolen from the gods.
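For the curious, the /sys trick is roughly the sketch below; host2 and sdb are made-up examples, so double-check which host your disk actually hangs off before copying anything.

    # find which SCSI host your target disk sits on, e.g. for sdb:
    readlink -f /sys/block/sdb/device | grep -o 'host[0-9]*' | head -n1

    # detach every sd* block device from the kernel (data on the disks is untouched)
    for d in /sys/block/sd*; do
        echo 1 | sudo tee "$d/device/delete"
    done

    # rescan only the host you care about; its disk comes back as sda
    echo '- - -' | sudo tee /sys/class/scsi_host/host2/scan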
The SCSI solution requires making sure that you have the right terminator connector because of course there's more than one standard .. ask me how I know .. I think the Wikipedia article on SCSI says it best:
As with everything SCSI, there are exceptions.
Only if you're working with SCSI hardware. On Linux, SATA (and probably PATA) devices use the same kernel driver as SCSI, and appear on the system as SCSI hosts. You can find them in /sys/class/scsi_disk or by running lsblk -o NAME,HCTL.
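lsblk can also print the model, serial number and rotational flag next to the HCTL address, which helps when drives only differ by serial; these are all standard lsblk columns:

    lsblk -d -o NAME,HCTL,MODEL,SERIAL,SIZE,ROTA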
I actually have multiple HDDs of the same model with only their serial numbers different.
I usually just open partitionmanager, visually identify my required device, then go by disk/by-uuid, or by disk/by-partuuid in case it doesn't have a file system.
Then I copy-paste the UUID from partitionmanager into whatever I am doing.
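Once you have the UUID you can point everything at it directly and the sdX letter stops mattering; a couple of illustrative commands (the UUID and mount point here are obviously made up):

    # mount by filesystem UUID instead of /dev/sdX
    sudo mount UUID=0a1b2c3d-1111-2222-3333-444455556666 /mnt/data

    # the stable symlinks udev maintains for the same purpose
    ls -l /dev/disk/by-uuid/ /dev/disk/by-partuuid/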
Fucking around in /sys feels like I'm wielding a power stolen from the gods
I presume you have had to run on RAM, considering you removed all drives
Yes. Mass deployment using Clonezilla in an extremely heterogeneous environment. I had to make sure the OS got installed on the correct SSD, and that it was always named sda, otherwise Clonezilla would shit itself. The solution is a hack held together by spit and my own stubbornness, but it works.
Mass deployment using a solution that forces you to remove all other storage devices.
That sounds very frustrating and I wouldn't want to do that.
On the other hand, you're probably an expert on disconnecting and reconnecting SCSI cables by now.
There was no need to physically disconnect anything. We didn't actually use any SCSI devices, but Linux (and in turn, the Debian-based Clonezilla) uses the SCSI kernel driver for all ATA devices, so SATA SSDs also appeared as SCSI hosts and could be handled as such. If I had to manually unplug and reconnect hundreds of physical cables, I'd send my resignation directly to my boss' printer.
So you somehow connected a networked drive as sda - is what I understand from that.
That would be interesting
No.
The local machine boots using PXE. Clonezilla itself is transferred from a TFTP server as a squashfs and loaded into memory. When that OS boots, it mounts a network share using CIFS that contains the image to be installed. All of the local SATA disks are named sda, sdb, etc. A script determines which SATA disk is the correct one (must be non-rotational, must be a specific size and type), deletes every SCSI device (which includes ATA devices too), then mounts only the chosen disk to make sure it's named sda (see the sketch further down).
Clonezilla will not allow an image cloned from a device named sda to be written to a device with a different name -- this is why I had to make sure that sda is always the correct SSD.
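A minimal sketch of that kind of selection-and-reattach logic, not the actual deployment script: the 240 GiB threshold, the "first match wins" rule and the variable names are made up for illustration, and the real thing checks more than size.

    #!/bin/sh
    # pick the first non-rotational SATA disk of at least WANTED_GIB,
    # drop every SCSI/ATA device, then rescan only that disk's host
    # so it enumerates as sda (run as root)
    WANTED_GIB=240

    target=""
    host=""
    for d in /sys/block/sd*; do
        rota=$(cat "$d/queue/rotational")                    # 0 = SSD, 1 = spinning rust
        size_gib=$(( $(cat "$d/size") * 512 / 1073741824 ))  # size is in 512-byte sectors
        if [ -z "$target" ] && [ "$rota" = "0" ] && [ "$size_gib" -ge "$WANTED_GIB" ]; then
            target=${d##*/}
            host=$(readlink -f "$d/device" | grep -o 'host[0-9]*' | head -n1)
        fi
    done
    [ -n "$host" ] || { echo "no matching SSD found" >&2; exit 1; }

    # detach everything (data on the disks is untouched)...
    for d in /sys/block/sd*; do
        echo 1 > "$d/device/delete"
    done

    # ...then bring back only the chosen host; its disk comes up as sda
    echo '- - -' > "/sys/class/scsi_host/$host/scan"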
OIC, so the physically connected storage devices are disconnected in the software and then the correct, required one is re-connected.
That part of what Clonezilla is doing seems like a mis-feature, added to prevent some kind of PICNIC.
True pain, and totally avoidable too
Like getting overconfident and dying to one of the starter grunts in Demon's Souls.
Like walking to the table with a plate full of steaming chive dumplings only to catch the corner of the plate on a wall and watch your dumps go tumblo all over and dog eats them and is definitely going to have the shits in the middle of the night
I fucking hate samba.
Just use nvme drives and this will never happen to you again!
Except you now have 2-3 numbers to correctly remember instead of one char
My motherboard has two NVMe slots. I imagine that if I'd had the funds and desire to populate both of them, this same issue could rear its ugly head.
No, because they aren't mapped under /dev/sdX ( ͡° ͜ʖ ͡° )
Yeah!
/dev/nvmeXnYpZ warrants another meme and is not covered by the terms of this memetic service.
Please subscribe to the other service to fulfil your requirements.
This happened when I tried to install Mint for a dual boot on my PC with two drives. It wasn't fun.
This is why I always unplug the other drive before I install Linux, because the one time I didn't, I couldn't boot the other OS anymore.
I didn't format the wrong drive, but the Linux installer automatically detected the existing EFI partition and just overwrote it. Luckily, that was the only issue and I was able to recreate the EFI partition, and it taught me a lesson that I will never forget.
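For anyone who ends up in the same spot, the general shape of that fix looks something like the below; device names, sizes and partition numbers are illustrative, so adapt them to your disk layout.

    # create and format a fresh ESP in free space (numbers are examples)
    sudo sgdisk -n 0:0:+300M -t 0:ef00 /dev/sdX
    sudo mkfs.vfat -F 32 /dev/sdX1

    # Linux side: mount it and reinstall GRUB
    sudo mount /dev/sdX1 /boot/efi
    sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi
    sudo update-grub   # Debian/Ubuntu/Mint wrapper for grub-mkconfig

    # Windows side (from a Windows recovery prompt, with S: mapped to the ESP):
    #   bcdboot C:\Windows /s S: /f UEFI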
RIP my parity partition that one time.
Edit mount options to refer to drives by their label or their user-friendly name and never worry about this again
For example: /mnt/Speedy, /mnt/Game-SSD, etc
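Something like these fstab entries; the labels and mount points are just examples, adjust them for your own filesystems.

    # /etc/fstab -- refer to filesystems by label instead of /dev/sdX
    LABEL=Speedy     /mnt/Speedy     ext4  defaults,noatime  0  2
    LABEL=Game-SSD   /mnt/Game-SSD   ext4  defaults,noatime  0  2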