I hate all three. I understand some of the decisions, but others are frustrating.
Let me explain what I used to do: take advantage of the fact that Firefox profiles are completely separate instances of Firefox, each with their own settings and extensions. I would run my personal profile with highly aggressive and experimental settings, because I was OK with it crashing if it meant I learned interesting things. Meanwhile, the profiles for schoolwork and other more important tasks stayed at defaults, so they were much more stable. I no longer consider this a necessary feature, but it was fun to play with.
The other big reason I relied on the old profiles is that they have separate cookies and the like, which is useful when I want a separate account per profile. Google happily lets you sign into multiple accounts from the same browser, but Microsoft, Discord, and many other apps do not, and force you to sign out before signing in again.
But this is painful. Links never open in the profile I want by default, which is annoying. In theory (and I am considering doing this), the fix is to create an app menu shortcut for each profile, and then pick among those shortcuts whenever I open a link or file (with no default profile/app set, so I just select every time). Something like the sketch below.
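Roughly what I have in mind, assuming a standard Linux desktop; the profile name and paths are just examples:

    # One launcher per Firefox profile, so each shows up in the app menu
    # and in the "open with" picker for links. "school" is a made-up profile name.
    cat > ~/.local/share/applications/firefox-school.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Firefox (school)
    Exec=firefox -P school --new-window %u
    MimeType=x-scheme-handler/http;x-scheme-handler/https;
    Categories=Network;WebBrowser;
    EOF
    update-desktop-database ~/.local/share/applications/

With no default browser set, the desktop should then ask which launcher to use every time a link is opened, which is exactly the behavior I want.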
In addition to that, each profile had to have its own Mozilla account for syncing, which was annoying.
Containers seemed like a nice in-between. I could use a single Mozilla account for sync, but have separate Microsoft or other accounts on the same browser instance.
Except nope, they actually suck and don't work like that. I can't dedicate a window to a container, so that all tabs from site xyz open in that container and give me that account. Instead it constantly prompts me, and the UX for what I'm trying to do is miserable.
Containers seem designed for isolating cookies between two different sites, rather than isolating multiple sessions of the same site from each other. The original version was a "Facebook Container" that hid your Facebook cookies from other sites, but that's not what I want. I want to be able to log into multiple Facebook accounts (hypothetically; I don't actually have even a single Facebook account, but you get the idea).
The new profiles, if you've heard of them, somehow manage to combine the worst of both worlds. Firstly, they are an entirely separate system and can't be managed through the old profile manager. But they exist within a single one of the old profiles, meaning I can't do tricks with desktop shortcuts to make apps open in one profile or the other. And yet, despite living inside one profile, they each require a separate Mozilla account for sync.
I am very frustrated, but I'm also setting my system back up, so I'm considering what to do. I will probably stick with profiles, but add app menu shortcuts for them.
Any better ideas?

The XZ backdoor affected far fewer machines than you might think.
The malicious code never made it into RHEL or Debian. Both of those distros have a model of freezing packages at a specific version. They then only push manually reviewed security updates, ignoring feature updates or bugfixes to the programs they package. This ensures maximum stability for enterprise use cases, but because the changes are small and reviewable, it also lets them dodge supply chain attacks like xz (it also enables these distros to offer stable auto update features, which I will get to later). And those distros make up a HUGE family of enterprise Linux machines that were simply untouched by this supply chain attack.
As for Linux distros that don't integrate sshd with systemd, or non-systemd distros: the malicious code did make it there, but it never activated. I wonder whether that was sloppiness on the malware author's part, or intentional, having it activate less frequently as a way of avoiding detection.
Regardless, comparing the XZ backdoor to the recent NPM (and other language-specific package manager) supply chain attacks is a huge false analogy. They aren't comparable at all. Enterprise Linux distros have excellent supply chain security, whereas programming language package managers have basically none. To copy from another comment of mine about them:
Debian, and many other Linux distros, have extensive measures to protect their supply chain. Packages are signed and verified by multiple developers before being built reproducibly (I can build and verify an identical binary/package myself). The build system has layers, such that if only a single layer is compromised, nothing happens and nobody flinches.
Programming-language-specific package repos have no such protections. A single developer holds their key/token/account and can push packages, which are often built on their own devices. There are no reproducible builds to ensure the binaries come from the published source code, and no multi-party signing to ensure that multiple devs would need to be compromised in order to compromise the package.
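To make the distro side concrete, here's a rough sketch of what "rebuild and verify" looks like on Debian. This is just the idea, not the full tooling; proper reproducible-builds verification also recreates the exact build environment recorded in the package's .buildinfo file:

    # Fetch the exact pinned source the archive built from (needs deb-src lines).
    apt-get source xz-utils
    cd xz-utils-*/
    # Rebuild locally, unsigned...
    dpkg-buildpackage -us -uc
    # ...then compare checksums against the binaries Debian actually ships.
    sha256sum ../*.deb

No equivalent workflow exists for the typical npm/PyPI package, because there is no reproducible build to check the published artifact against.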
So what probably happened is that some developer got phished or hacked and gave up their API key. The package they maintained was popular and frequently ran unsandboxed on devs' personal devices, so when other developers downloaded the latest version of that package, they got hacked too. The attackers then used those devices to push more malicious packages to the repo, and the cycle repeats.
And that’s why supply chain attacks are now a daily occurrence.
And then this:
This also drives me insane. It's a form of survivorship bias: people only notice when automatic upgrades cause problems, but they completely ignore the many issues that automatic security upgrades prevent. Nobody cares about some organization NOT getting ransomwared because its webserver was automatically patched. That doesn't make the news the way auto upgrades breaking things does. To copy from yet another comment of mine:
If your software breaks from updates between stable releases, the root cause is the vendor rather than auto updating itself. Many projects manage to auto update without causing problems. For example, Debian doesn't even ship feature updates or bugfixes to stable; it only applies security patches, for maximum compatibility.
CrowdStrike's auto updates also caused problems on Linux, even before the big Windows BSOD incident:
https://www.neowin.net/news/crowdstrike-broke-debian-and-rocky-linux-months-ago-but-no-one-noticed/
It's not the fault of the auto update process, but rather the lack of QA at CrowdStrike. And it's the responsibility of system administrators to vet their software vendors and ensure the update models in use don't cause issues like this. Thousands of orgs were happily using Debian/Rocky/RHEL with auto updates, because those distros have a model of minimal feature updates/bugfixes and only security patches, ensuring no-fuss security auto updates for around a decade per stable release, on software that had already been extensively tested. Stories of those breaking are few and far between.
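For what it's worth, opting into that model is trivial. On Debian the stock tool is unattended-upgrades, which out of the box restricts itself to the security suite (a sketch; check your release's docs for the details):

    # Install and enable Debian's built-in security-only auto updates.
    apt install unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades   # turns on the periodic apt timer
    # The stock /etc/apt/apt.conf.d/50unattended-upgrades already limits upgrades
    # to origins like "origin=Debian,codename=${distro_codename}-security".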
I would rather pay attention to the success stories than the failures. Because in a world without automatic security updates, millions of lazy organizations would be running vulnerable software unknowingly. This already happens, because not all software auto updates. But some is better than none, and a world where all software is vulnerable by default until a human manually touches it to update it is simply a nightmare to me.