this post was submitted on 28 Dec 2023
417 points (99.1% liked)
PC Gaming
PC gaming on Microsoft Windows is Xbox gaming. It's baked into the OS, and we're a generation away from MS charging if you want a "secure" OS.
Linux + Valve means PC gaming won't be behind a paywall anytime soon.
"Linux" already charges for a "secure" OS. RHEL is the quintessential example, and Canonical have their enterprise-oriented Ubuntu variant. And smaller orgs have other offerings. Likely, we would see the same happen with Windows... and already sort of do with the professional versus home SKUs that nobody understands.
PC gaming is highly unlikely to be "behind a paywall" basically ever, because there is too much money in it. But, to speculate, Valve's increasingly strong push toward Linux is a mix of three things.
I like Valve and love Steam. But it is important to remember that they are "a company" first and foremost.
There’s nothing in RHEL or enterprise Ubuntu that’s inherently more secure than any other distro.
I am more familiar with RHEL than Ubuntu (I still can't grok what the hell they advertise when you try to update home Ubuntu...). But you are generally paying for a more curated selection of packages in the default repositories, as well as active support for the more "bleeding edge" stuff.
Which DOES provide "security". Both in the sense of having more vetted third-party packages (rather than doing your own research on which solution to use, you use the one that the people you threw money at decided on for you), and in response time. Because if someone manages to sneak malware into a popular package, you don't just have people on call to roll that back and implement mitigations/recoveries immediately. They are also on call to call you to say "Yo, GIMP is gonna shove bitcoin-mining goatse into every single picture you make. We suggest you do the following..." at 2 am.
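The "roll that back" scenario can be sketched in a few lines. This is a toy illustration, not any vendor's actual tooling: the package names, versions, and the `needs_rollback` helper are all invented for the example.

```python
# Hypothetical data: versions currently installed, and (package, version)
# pairs known to be compromised after malware snuck into an upstream release.
installed = {
    "gimp": "2.10.36-1",
    "curl": "8.5.0-2",
    "left-pad": "1.3.0-1",
}

compromised = {
    ("gimp", "2.10.36-1"),
    ("left-pad", "1.3.0-1"),
}

def needs_rollback(installed, compromised):
    """Return the packages whose installed version is on the bad list."""
    return sorted(name for name, ver in installed.items()
                  if (name, ver) in compromised)

print(needs_rollback(installed, compromised))  # ['gimp', 'left-pad']
```

The point of paying a vendor is that someone else maintains the `compromised` list and calls you before you'd ever think to run a check like this.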
Let's be honest, you're paying for enterprise support. It ticks the boxes in your report and makes management happy - and there's nothing wrong with that, as it will save your ass sometimes too.
You'd get the same experience with any RHEL clone otherwise (old CentOS, Rocky) or even with an entirely different distro like Debian.
I'll be honest I'm not that familiar with Ubuntu either. I do have pretty extensive experience with RHEL (though mostly through CentOS back when it was effectively a RHEL clone) and even more with Debian (upon which Ubuntu is based).
You seem to be implying that having fewer packages in the default repos somehow increases security. I don't buy that. Packages that are not installed on the base system are fully optional (and even some that are, if you're willing to do some cleanup!). Not having them installed doesn't decrease your attack vectors. Having them in the repos means they're going through the distro's security process, patching, etc.
Should the user choose to install that piece of software (otherwise it doesn't matter), that process should mean increased security vs. the alternative - installing those packages either from upstream or from a third party. Either solution may have security practices on par with the distro's, but more likely worse. Furthermore, upgrades could become more perilous for essentially 2 reasons:

- Packages from outside the repos aren't covered by routine security updates (`apt upgrade` or similar).
- They also fall outside a release like Debian stable or Ubuntu LTS, where security fixes keep coming for years.

Surely the mass of independent security researchers are more likely to find and file CVEs than the limited staff at Red Hat, who probably have better things to worry about. On top of that, whatever CVEs RH do find, they will likely submit to the CVE database, so it doesn't matter.
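For what it's worth, the CVE-triage question ("do any of these filings actually affect what I run, and how badly?") is mechanical. Here's a toy sketch over locally cached records - the CVE IDs, scores, and the `urgent` helper are invented for illustration; real feeds from the CVE/NVD databases are far richer:

```python
# Toy CVE records (fields and values invented for illustration).
cves = [
    {"id": "CVE-2023-0001", "package": "openssl", "cvss": 9.8},
    {"id": "CVE-2023-0002", "package": "gimp", "cvss": 5.3},
    {"id": "CVE-2023-0003", "package": "openssl", "cvss": 4.0},
]

def urgent(cves, installed_packages, threshold=7.0):
    """CVEs that affect something we actually run AND score above threshold."""
    return [c["id"] for c in cves
            if c["package"] in installed_packages and c["cvss"] >= threshold]

print(urgent(cves, {"openssl", "curl"}))  # ['CVE-2023-0001']
```

Doing this well at scale - keeping the package inventory accurate, tuning the threshold, chasing the false negatives - is exactly the full-time work people pay a vendor for.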
That sounds like a nightmare scenario (almost literally!). Please don't wake me up unless we're bleeding money, reputation, or potential revenue. Everything else can wait until next morning. My sleep can't.
Not having a package installed DOES decrease your potential attack vectors. But it is more about decreasing the burden of picking a solution. For example, let's say you are setting up a Kubernetes install and need to pick an ingress controller. You can read through the documentation and maybe even check various message boards to figure out which are good options. But you need to sift through the FUD, and you often end up needing an expert to make an informed decision.
Or you can rely on the company you are paying to have already done that and likely have already contracted this out to an expert to figure out which solutions are well maintained and have solid update policies.
Because, getting back to a CVE: some software has a policy of backporting security fixes to the current LTS (or even a few of the previous ones). Others will just tell you to upgrade to the latest version... which can be a huge problem if you were holding at 3.9 until 4.x became stable enough to support the massive API changes. A "properly" curated package repository not only prioritizes the former but does so at every level, so that you don't find out you were dependent on some random piece of software by a kid who decided he was going to delete everything and fuck over half the internet (good times).
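The 3.9-vs-4.x bind boils down to one question: did any release carrying the fix land on your series, or is the only fixed version across a breaking major jump? A minimal sketch (the version strings and the `fix_on_my_series` helper are hypothetical, and real version schemes are messier than "first dotted component"):

```python
def fix_on_my_series(installed, fixed_in):
    """True if some fixed release shares the installed major version,
    i.e. the fix was backported rather than "just upgrade to latest"."""
    major = installed.split(".")[0]
    return any(v.split(".")[0] == major for v in fixed_in)

# Upstream only fixed it in 4.1: stuck on 3.9, you must eat the API break.
print(fix_on_my_series("3.9", ["4.1"]))           # False
# A curated repo backported the fix to 3.9.2 as well: patch and move on.
print(fix_on_my_series("3.9", ["4.1", "3.9.2"]))  # True
```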
And yes, you can go a long way by reading the bulletins by the various security researchers. But that is increasingly a full time job that requires a very specialized background.
Given infinite money and infinite time? Sure, hire your own team of specialists in every capacity you need. Given the reality, you look for a "secure"/"enterprise" OS where you can outsource that and pay a fraction of the price.
As for the 2 am wake-up call: if you have global customers, then "wait until next morning" might mean a full work day where they are completely vulnerable, getting hammered, and deciding that every single loss is your fault because you couldn't maintain a piece of software. Or, if you have sensitive enough customers/data, a sufficiently bad breach is the end of the company itself (plus an investigation to see who is at fault).
Which all gets back to why this is a non-issue for consumers. Enterprise OSes already exist and are not some evil scheme MS is working toward. And even the vast majority of companies don't need them (but really should run them, and consider paying for the support package on top...). So there is absolutely zero reason the "home" version would ever be locked away behind one.
This is what you get when you pay.
Security backports to old versions of software that have fallen out of support.
Or, you know… you can get it for free with Debian, which circles back to my initial argument.
Well, no.
You can get it for free with Debian, or even Ubuntu on an LTS version. Just not forever.
The reason enterprises want to pay money for extended long-term support is so they don't have to keep jumping major versions (with the possibility of breaking whatever unique environment they had going) every couple of years.
Even the Linux kernel itself scaled back how long it's willing to support old releases, leaving long-term users with the work of sourcing backports or constantly testing out new features.
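The support-window question the comments above keep circling is just a date comparison, which is why enterprises will pay to push the end date out. A toy sketch - the release names and EOL dates are invented; check your distro's actual support calendar:

```python
from datetime import date

# Hypothetical end-of-support dates (illustrative only).
eol = {
    "mydistro 10": date(2024, 6, 30),
    "mydistro 12": date(2028, 6, 30),
}

def still_supported(release, today):
    """Whether a release still receives security fixes on a given day."""
    return today <= eol[release]

print(still_supported("mydistro 10", date(2025, 1, 1)))  # False
print(still_supported("mydistro 12", date(2025, 1, 1)))  # True
```

Extended/enterprise support is effectively selling a later value in that `eol` table, so the "unique environment" can stay put for most of a decade.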
I'm very comfortable running Sid at home, but there the annoyance is limited to one person if I have to spend a couple hours combing through git diffs.
Ten years between OS refreshes is money.