submitted 3 weeks ago by qyron@sopuli.xyz to c/linux@lemmy.ml

While trying to move my computer to Debian, after allowing the installer to do its task, my machine will not boot.

Instead, I get a long string of text, as follows:

Could not retrieve perf counters (-19)
ACPI Warning: SystemIO range 0x0000000000000B00-0x0000000000000B08 conflicts with OpRegion 0x0000000000000B00-0x0000000000000B0F (\GSA1.SMBI) (20250404/utaddress-204)
usb: port power management may be unreliable
sd 10:0:0:0: [sdc] No Caching mode page found
sd 10:0:0:0: [sdc] Assuming drive cache: write through
amdgpu 0000:08:00.0 amdgpu: [drm] Failed to setup vendor infoframe on connector HDMI-A-1: -22

And the system eventually drops into a shell that I do not know how to use. It prints:

Gave up waiting for root file system device. Common problems:
- Boot args (cat /proc/cmdline)
  - Check rootdelay= (did the system wait long enough?)
- Missing modules (cat /proc/modules; ls /dev)

Alert! /dev/sdb2 does not exist. Dropping to a shell!

The system has two disks mounted:

- an SSD, with the EFI, root, var, tmp and swap partitions, to speed up the overall system
- an HDD, for /home

I had the system running on Mint until recently, so I know the hardware is sound, unless the SSD stopped working, but then it is reasonable to expect it would not accept partitioning. Under Debian, it booted once and then stopped booting altogether.

The installation I made was from a daily image, as I am/was aiming to put my machine on the testing branch, in order to have some sort of rolling distro.

If anyone can offer some advice, it would be very much appreciated.

[-] okwhateverdude@lemmy.world 34 points 3 weeks ago

Sounds like your /etc/fstab is wrong. You should be using UUID-based mounting rather than /dev/sdXY. Very likely you'll need to boot from a USB stick with a rescue image (the installer image should work) and fix up /etc/fstab using blkid.
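A minimal sketch of that fix, assuming the root partition really is the old /dev/sdb2 (device names here are examples, substitute whatever blkid shows you):

# from the rescue/live environment, as root
blkid                        # list every partition with its UUID
mount /dev/sdb2 /mnt         # mount the installed root filesystem (example device)
nano /mnt/etc/fstab          # replace /dev/sdXY device names with UUID=... entries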

[-] qyron@sopuli.xyz 11 points 3 weeks ago

You made me think that perhaps the BIOS/EFI is fudging something up. I checked and I had four separate entries pointing towards the SSD.

[-] okwhateverdude@lemmy.world 20 points 3 weeks ago

When you do fix it, the internet would appreciate a follow-up comment on what you did to fix the problem.

[-] qyron@sopuli.xyz 14 points 3 weeks ago

I will. Don't know when, but I will.

[-] kumi@feddit.online 1 points 3 weeks ago

This gives a little bit of credence to the theory of an old installation taking precedence.

  • Are there other EFI partitions around? Try booting explicitly from each one and see if you get different results.

  • Are there old bootloaders or entries from installations that no longer exist lingering on your EFI drive? Move them to a backup from a live environment, or just delete them if you are confident.

  • How about NVRAM? It's a way for the OS to write boot entries straight to your mobo, separate from any disks attached. It doesn't look like it to me, but perhaps your mobo is still trying to load a stale OS from NVRAM config and your newest installation didn't touch it? Manually overriding boot in the BIOS like above should rule out this possibility (see the efibootmgr sketch below).
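A hedged sketch for inspecting and pruning those NVRAM entries (the entry number is an example only; read your own -v output first):

efibootmgr -v          # list NVRAM boot entries and the disk/loader each one points at
efibootmgr -b 0003 -B  # delete stale entry Boot0003 (example number)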

[-] qyron@sopuli.xyz 1 points 3 weeks ago

I developed the habit of formatting my disks before a new install, so I'm going to push that hypothesis aside for now.

Before installing Debian I tried Sparky, and I noticed it had set up a /boot_EFI and a /boot partition, which sounded off to me, so I wiped the SSD clean and manually partitioned it, leaving only a 1GB /boot, configured for EFI.

NVRAM is not completely off the board, but I find it odd for it to flare up as an issue only now, under Debian, after having no problems under Mint or Sparky.

[-] kumi@feddit.online 1 points 3 weeks ago* (last edited 3 weeks ago)

@qyron Just in case you didn't see my other reply which I think might be more relevant for you: https://feddit.online/post/1342935#comment_6604739

[-] GNUmer@sopuli.xyz 12 points 3 weeks ago

Can you run lsblk within the emergency shell? Sounds a bit like the HDD has taken the place of /dev/sdb, upon which there's no second partition nor a root filesystem -> root device not found.
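If lsblk is present in that shell, an invocation along these lines would show whether the names have shifted (the column list is just a suggestion):

lsblk -o NAME,SIZE,TYPE,FSTYPE,UUID,MOUNTPOINT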

[-] qyron@sopuli.xyz 4 points 3 weeks ago* (last edited 3 weeks ago)

Perhaps? It dropped into a BusyBox shell. How can I do what you are requesting?

[-] just_another_person@lemmy.world 11 points 3 weeks ago
  1. Boot into a LiveUSB of the same version of the distro you tried to install
  2. View the drive mappings to see what they are detected as
  3. Ensure your newly created partitions can mount correctly
  4. Check /etc/fstab on your root drive (not the LiveUSB filesystem) to ensure the entries match what was detected while in the LiveUSB (a rough sketch follows this list)
  5. Try rebooting
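Roughly, with illustrative device names (yours may differ):

# from the LiveUSB, as root
lsblk -f                 # step 2: detected names, filesystems and UUIDs
mount /dev/sda2 /mnt     # step 3: mount what should be the root partition (example device)
cat /mnt/etc/fstab       # step 4: compare its entries against lsblk's output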

Report changes here.

[-] doodoo_wizard@lemmy.ml 8 points 3 weeks ago

Since you don't know what's happening you don't need to be fucking around with busybox. Boot back into your USB install environment (was it the live system or netinst?) and see how fstab looks. Pasting it would be silly, but I bet you can take a picture with your phone and post it itt.

What you’re looking for is drives mounted by dynamic device identifiers as opposed to UUIDs.

Like the other user said, you never know how quickly a drive will report itself to the UEFI, and drives with a big cache like SSDs can have hundreds of operations in their queue before they “say hi to the nice motherboard”.

If it turns out that your fstab is all fucked up, use ls -al /dev/disk/by-uuid to show you what the UUIDs are, fix your fstab on the system, then reboot.
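The kind of change you'd make, sketched with a made-up UUID:

ls -al /dev/disk/by-uuid   # symlinks from each UUID to its current /dev/sdXY name
# fstab before:  /dev/sdb2                                  /  ext4  errors=remount-ro  0  1
# fstab after:   UUID=1b2c3d4e-0000-0000-0000-000000000000  /  ext4  errors=remount-ro  0  1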

[-] JamesBoeing737MAX@sopuli.xyz 6 points 3 weeks ago
[-] qyron@sopuli.xyz 7 points 3 weeks ago

Not exactly the acknowledgement I was aiming for but definitely the one I needed.

[-] pinball_wizard@lemmy.zip 4 points 3 weeks ago

Sorry for your headaches. The door prize is you get to tell this story - to the un-envy of peers - in the future.

[-] qyron@sopuli.xyz 4 points 3 weeks ago

Bragging rights of the bad kind.

[-] IsoKiero@sopuli.xyz 6 points 3 weeks ago

Do you happen to have any USB (or other) drives attached? An optical drive maybe? In the first text block the kernel suggests it found an 'sdc' device which, assuming you only have the SSD and HDD plugged in and haven't used other drives in the system, should not exist. It's likely your fstab is broken somehow, maybe a bug in the daily image, but hard to tell for sure. The other possibility is that you still have remnants of Mint on the EFI/whatever and it's causing issues, but assuming you wiped the drives during installation, that's unlikely.

Busybox is pretty limited, so it might be better to start the system with a live image on a USB and verify your /etc/fstab file. It should look something like this (yours will have more lines; this is from a single-drive, single-partition host in my garage):

# / was on /dev/sda1 during installation
UUID=e93ec6c1-8326-470a-956c-468565c35af9 /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=19f7f728-962f-413c-a637-2929450fbb09 none            swap    sw              0       0

If your fstab has things like /dev/sda1 instead of UUIDs the format is still fine, but those entries are likely pointing to the wrong devices. My current drive is /dev/sde even though the comments in my fstab mention /dev/sda. With the live image running you can get all the drives in the system by running 'lsblk', and from there (or by running 'fdisk -l /dev/sdX' as root, replacing sdX with the actual device) you can find out which partition should be mounted where. Then run 'blkid /dev/sdXN' (again, replace sdXN with sda1 or whatever you have) and you'll get the UUID of that partition. Then edit fstab accordingly and reboot.
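Put together, the workflow looks roughly like this (sdX and sdXN are placeholders for your actual devices):

lsblk                 # list all drives and partitions the live system sees
fdisk -l /dev/sdX     # as root: inspect one drive's partition table
blkid /dev/sdXN       # print the UUID of the partition you want mounted
# put that UUID= into the matching fstab line, save, reboot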

[-] kumi@feddit.online 1 points 3 weeks ago* (last edited 3 weeks ago)

Changing /etc/fstab alone won't change anything if / cannot be mounted. How would it pick up those changes? I think you are on the right track but missing the part about updating the initramfs.

https://feddit.online/post/1342935#comment_6604739

[-] IsoKiero@sopuli.xyz 1 points 3 weeks ago

The rootfs location is passed via a kernel parameter; for example my grub.cfg has "set root='hd4,msdos1'". That's used by the kernel and initramfs to locate the root filesystem, and once the 'actual' init process starts it already has access to the rootfs and thus to fstab. An initramfs update doesn't affect this case, however verifying the kernel boot parameters might be a good idea.
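Verifying those is quick; a sketch, assuming grub's config lives at the usual /boot/grub/grub.cfg:

cat /proc/cmdline                      # the root= parameter the running kernel was booted with
grep -m1 'root=' /boot/grub/grub.cfg   # what grub will pass on the next boot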

[-] wickedrando@lemmy.ml 6 points 3 weeks ago

Can you reinstall? If possible, use the whole disk (no dual booting or extra bootloaders to deal with).

[-] qyron@sopuli.xyz 5 points 3 weeks ago

I can; I already did before coming here, and I suspect I'm going to do it again, because people are telling me to do this and that and I'm feeling way in over my head.

But not in the mood to quit. Yet.

I'm running a clean machine. No secondary OS. The only "unusual" thing I am doing is partitioning so different parts of the system exist separately, and putting /home on a disk all to itself.

[-] wickedrando@lemmy.ml 4 points 3 weeks ago* (last edited 3 weeks ago)

Ah, yes I saw all the comment suggestions and was hoping a fresh reinstall would work for you.

Did you format before the reinstall? It definitely seems like an fstab issue dropping you into the initramfs, which would need some manual intervention.

A format and fresh install should totally resolve this (assuming installation options and selections are sound).

What does ‘ls /dev/sd*’ run from the shell show you?

[-] IsoKiero@sopuli.xyz 3 points 3 weeks ago

Just in case you end up with a reinstallation, I'd suggest using the stable release for installation. Then, if you want, you can upgrade that to testing (and have all the fun that comes with it) pretty easily. But if you want something more like a rolling release, Debian testing isn't really it, as it updates in cycles just like the stable releases; it just has a bit newer (and potentially broken) versions until the current testing is frozen and eventually released as the new stable, and the cycle starts again. Sid (unstable) is more like a rolling release, but that comes with even more fun quirks than testing.

I've used all of them (stable/testing/unstable) as a daily driver at some point, but today I don't care about rolling releases or bleeding-edge versions of packages; I don't have the time or interest anymore to tinker with my computers just for the sake of it. Things just need to work and stay out of my way, and thus I'm running either Debian stable or Mint Debian Edition. My gaming rig has Bazzite on it and it's been fine so far, but it's a pretty fresh installation so I can't really tell how it works in the long run.

[-] pinball_wizard@lemmy.zip 2 points 3 weeks ago

One time when I had two bad installs in a row, it was due to my install media.

Many install-media tools have an image-checking (checksum) step, which is meant to prevent this.

But corrupt downloads and corrupt writes to the USB key can happen.

In my case, I think it turned out that my USB key was slowly dying.

If I recall, I got very unlucky that it behaved during the checksums, but didn't behave during the installs. (Or maybe I foolishly skipped a checksum step - I have been known to get impatient.)

I got a new USB key and then I was back on track.

[-] qyron@sopuli.xyz 4 points 3 weeks ago

I'm fairly confident at this point that the worst of my problems is to be found between the chair and the keyboard.

[-] siha@feddit.uk 1 points 3 weeks ago
[-] moonpiedumplings@programming.dev 5 points 3 weeks ago

unless the SSD stopped working, but then it is reasonable to expect it would not accept partitioning

This happened to me. It still showed up in KDE's partition manager (when I plugged the SSD into another computer), with the drive named as an error code.

[-] Eggymatrix@sh.itjust.works 4 points 3 weeks ago

Congrats, you found the only Debian that breaks regularly: testing.

You can file a bug report and then install something that does not require you to debug early boot issues, like Debian 13, or, if you really want a rolling release, Arch or Tumbleweed.

[-] angband@lemmy.world 4 points 3 weeks ago
[-] Telorand@reddthat.com 4 points 3 weeks ago

I think everyone here has offered good advice, so I have nothing to add in that regard, but for the record, I fucked up a Debian bookworm install by doing a basic apt update && apt upgrade. The only "weird" software it had was Remmina, so I could remote into work; nothing particularly wild.

I recognize that Debian is supposed to be bulletproof, but I can offer commiseration that it can be just as fallible as any other base distro.

[-] qyron@sopuli.xyz 8 points 3 weeks ago

Debian is well known for its stability, but it is also known for being tricky to handle when moving onto the testing branch, and I did just that by wanting a somewhat rolling distro with Debian.

I'm no power user. I know how to install my computer (which is a good deal more than most people), do some configurations and tinker a bit but situations like this throw me into uncharted territory. I'm willing to learn but it is tempting to just drop everything and go back to a more automated distro, I'll admit.

Debian is not to blame here. Nor Linux. Nor anyone. We're talking about free software in all the understandings of the word. Somewhere, somehow, an error is bound to happen. Something will fail, break or go wrong.

At least in Linux we know we can ask for help and eventually someone will lend a pointer, like here.

[-] IcyToes@sh.itjust.works 2 points 3 weeks ago

OpenSuse Tumbleweed is a great balance between stability and fresh updates (rolling release). Worth considering if Debian doesn't work out.

[-] LeFantome@programming.dev 3 points 3 weeks ago

Nothing that uses apt is remotely bullet-proof. It has gotten better but it is hardly difficult to break.

pacman is hard to break. APK 3 is even harder. The new moss package manager is designed to be hard to break but time will tell. APK is the best at the moment IMHO. In my view, apt is one of the most fragile.

[-] data1701d@startrek.website 1 points 3 weeks ago

Eh, I disagree with you on pacman. It's possible I was doing something stupid, but I've had Arch VMs that I didn't open for three months, and when I tried to update them I got a colossally messed-up install.

I just made a new VM, as I really only need it when I need to make sure a package has the correct dependencies on Arch.

[-] LeFantome@programming.dev 1 points 3 weeks ago

I can almost guarantee that the problem you encountered was an outdated archlinux-keyring, which meant you did not have the GPG keys to validate the packages you were trying to install. It is an annoying problem that happens way too often on Arch. Things are not actually screwed up, but it really looks that way if you do not know what you are looking at. One-line fix if you know what to do.
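For the record, the usual one-liner, as root (assuming the mirrors are reachable and nothing else is wedged):

pacman -Sy archlinux-keyring && pacman -Su   # refresh the keyring first, then finish the full upgrade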

It was my biggest gripe when I used Arch. I did not run into it much as I updated often but it always struck me as a really major flaw.

[-] FooBarrington@lemmy.world 3 points 3 weeks ago* (last edited 3 weeks ago)

And that's why I immediately fell in love with immutable distros. While such problems are rare, they can and do happen. Immutable distros completely prevent them from happening.

[-] Telorand@reddthat.com 1 points 3 weeks ago

I love them, too. Ironically, I'm not currently running one, but that's mostly because I need a VPN client that I haven't been able to get working on immutable distros. I'd use one if that were solved.

[-] FooBarrington@lemmy.world 1 points 3 weeks ago

Out of interest, which client is that?

[-] qyron@sopuli.xyz 4 points 2 weeks ago* (last edited 2 weeks ago)

@mvirts@lemmy.world @kumi@feddit.online @wickedrando@lemmy.ml @IsoKiero@sopuli.xyz @angband@lemmy.world @doodoo_wizard@lemmy.ml

Update - 2026.01.12

After trying to follow all the advice I was given and failing miserably, I caved in and reinstalled the entire system, this time using a Debian Stable live image.

The drives were there - sda and sdb - the SSD and the HDD, respectively. sda was partitioned from 1 through 5, while sdb had one single partition, as I had set during the installation. No error here.

However, when trying to look into /etc/fstab, the file listed exactly nothing. Somehow, the file was never written. I could list the devices through ls /dev/sd*, but when trying to mount any one of them, it returned that the location was not listed under /etc/fstab. And even when I tried to update the file manually, the non-existence of the drives persisted.

Yes, as I write this from the freshly installed Debian, I am morbidly curious to go read the file now and see how much has changed.

Because at this point I understood I wouldn't be getting anywhere with my attempts, I opted for a full reinstall. And it was as I was, again, manually partitioning the disk to what I wanted that I found the previous installation had created a strange thing.

While all partitions had a simple sd* indicator, the partition that should have been / was instead named "Debian Forky" and was not configured as it should be. It had no root flag. It was just a named partition on the disk.

I may be reading too much into this, but most probably this simple quirk botched the entire installation. The system could not run what simply wasn't there, and it could not find an sda2 if that sda2 was named something completely different.

Lessons learned

I understood I wasn't clear enough about how experienced with Debian I was. I ran Debian for several years and, although not a power user, I gained a lot of knowledge about managing my own system by tinkering in Debian - something I lost when I moved towards more up-to-date, more user-friendly distros that are less powerful learning tools. And after this, I recognized I need that "demand" from the system in order to learn. So, I am glad I am back on Debian.

Thank you for all the help, and I can only hope I can return it some day.

[-] IsoKiero@sopuli.xyz 1 points 2 weeks ago

It wasn't for nothing; you got some learning out of the experience and a story to tell. Good luck with the new system, and maybe hold off on upgrading it to testing for a while; there's plenty to break and fix even without the extra quirks of a non-stable distribution :)

Have fun and feel free to ask for help again, I and others will be around to share what we've learned on our journeys.

[-] LeFantome@programming.dev 2 points 3 weeks ago* (last edited 3 weeks ago)

It could be that /dev/sdb2 really does not exist. Or it could be mapped to another name. It is more reliable to use UUID, as others have said.

What filesystem, though? Another possibility is that the required kernel module is not being loaded and the drive cannot be mounted.

[-] qyron@sopuli.xyz 4 points 3 weeks ago

Ext4 on all partitions, except for swap space and the EFI partition, which autoconfigures the moment I set it as such.

At the moment, I'm tempted to just go back and do another reinstallation.

I haven't played around with manually doing anything besides setting the size of the partitions. Maybe I left some flag unset or something. I don't know how to set a disk identification scheme. Or I do, and just don't realize it.

Human error is the largest probability at this point.

[-] kumi@feddit.online 2 points 3 weeks ago* (last edited 3 weeks ago)

OP, in case you still haven't given up, I think I can fill in the gaps. You got a lot of advice pointing somewhat in the right direction, but I think no one has told you how to actually sort it out.

It's likely that your /dev/sdb2 is now either missing (bad drive or cable?) or showing up with a different name.

You want to update your fstab to refer to your root (and /boot and others) by UUID= instead of /dev/sdbX. It looks like you are not using full-disk encryption, but if you are, there is /etc/crypttab for that.

First off, you actually have two /etc/fstabs to consider: one on your root filesystem and one embedded into the initramfs on your boot partition. It is the latter you need to update here, since it is used earlier in the boot process and is needed to mount the rootfs. It should be a copy of your rootfs /etc/fstab and gets automatically copied/synced when you update the initramfs, either manually or on a kernel installation/upgrade.

So what you need to do to fix this:

  1. Identify the partition UUIDs
  2. Update /etc/fstab
  3. Update the initramfs (update-initramfs -u -k all, or reinstall the kernel package)

You need to do this every time you make changes in fstab that need to be picked up in the earlier stages of the boot process. For mounting application or user-data volumes it's usually not necessary, since the rootfs fstab also gets processed after the rootfs has been successfully mounted.

That step 3 is a conundrum when you can't boot!

Your two main options are a) boot from a live image, chroot into your system, fix and update the initramfs inside the chroot, or b) from inside the rescue shell, mount the drive manually to boot into your normal system and then sort it out so you don't have to do this on every reboot.

For a), I think the Debian wiki instructions are OK.
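A rough sketch of that maneuver, with example device names (adjust for your actual layout):

# from the live image, as root
mount /dev/sdb2 /mnt                # the installed root filesystem (example device)
mount /dev/sdb1 /mnt/boot/efi       # the EFI partition (example device)
for d in dev proc sys run; do mount --bind /$d /mnt/$d; done
chroot /mnt
update-initramfs -u -k all          # step 3 from above, now inside the installed system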

For b), from the busybox rescue shell, I believe you probably won't have the lsblk or blkid that another person suggested. But hopefully you can ls -la /dev/disk/by-uuid /dev/sd* to see what your drives are currently named and then mount /dev/XXXX /newroot from there.

In your case I think b) might be the most straightforward, but the live-chroot maneuver is a very useful tool that might come in handy in other situations, and it will always work since you are not limited to what's available in the minimal rescue shell.

Good luck!
