It's common practice for PC games today to launch with Denuvo, a form of DRM designed to stop the spread of pirated copies of games, and it's also common practice for developers to remove Denuvo several months after launch as interest (and the risk of piracy) dwindles. Less common is a developer publicly announcing it's removing Denuvo from a game before it's even out, but that's the surprise Starbreeze pulled this Friday.
"Hello heisters, we want to inform you that Denuvo is no longer in Payday 3," the developer wrote in a post on Steam on Friday. That's pretty much the whole message—short and to the point, and seemingly a win on the goodwill front, with the post racking up 524 thumbs up on Steam so far and another 10,000 or so likes on Twitter.
Payday 3 is less than a week away from its September 21 release, and Starbreeze is clearly looking to roll into the launch with an excited community behind it. Two months ago a thread on the r/paydaytheheist subreddit called out the inclusion of Denuvo and the responses were characteristically negative. This afternoon, one of the game's developers responded to that thread to highlight that Denuvo has been removed.
Denuvo has long had a reputation for hindering performance in games and bloating their executables, though the company behind it, Irdeto, insists that isn't the case. This summer it announced a plan to provide media outlets with two versions of games, one with Denuvo included and one without, to prove it has no impact on performance.
RAM is actually the one resource I run out of in my day-to-day work as a software developer, and I get close on my gaming PC. I have a really fast SSD in both my work computer (a MacBook Pro) and my Linux gaming PC (some fast NVMe drive), and both grind to a halt when I start swapping (Linux seems to handle it better, imo). So no, I don't think SSDs are enough, by any stretch of the imagination.
If anything, our need for high-performance RAM is higher today than ever! My SIL just started a graphics program (graphic design or UI/UX or something), so I advised her to prioritize a large amount of RAM over a high number of CPU/GPU cores, because that's how important RAM is to the user experience when deadlines approach.
Large CPU caches are great, but I don't think you can really compensate for low system memory with large caches and a fast SSD. What is obvious, though, is that memory latency and bandwidth are an issue, so I could see more Apple-style soldered DRAM next to the CPU in coming board revisions, which isn't great for DIY systems. DRAM modules are just so much cheaper to manufacture than CPU cache, and they're also sensitive to heat, so I don't think embedding them on the CPU die is a great long-term solution. I would prefer to see GPU-style memory modules either around or behind the CPU, soldered onto the board, before we see on-die caches with multiple-GB capacity.
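To put the latency gap in rough numbers: even a fast NVMe drive is around three orders of magnitude slower than DRAM per access, which is why swapping feels like hitting a wall and why a bigger cache alone can't paper over missing RAM. A quick back-of-envelope sketch, using typical order-of-magnitude figures (assumed values that vary a lot by hardware, not specs for any particular part):

```python
# Rough, assumed access latencies in nanoseconds.
# Real numbers vary widely by CPU, DRAM generation, and drive.
latency_ns = {
    "L1 cache": 1,        # ~1 ns
    "L3 cache": 10,       # ~10 ns
    "DRAM": 100,          # ~100 ns
    "NVMe SSD": 100_000,  # ~100 us per random access
}

# Compare each tier to DRAM, the thing swap is standing in for.
dram = latency_ns["DRAM"]
for tier, ns in latency_ns.items():
    print(f"{tier}: {ns} ns ({ns / dram:g}x DRAM)")
```

Under these assumptions the swap penalty is ~1000x per access, so once a working set spills out of RAM, no realistic cache size wins it back.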
Well, you're right that it's not practical now. By "soon" I was thinking 10+ years from now. And as I said, it would likely start in systems that aren't used for those applications anyway (aside from web browsers, which use far more RAM than necessary). By the time it takes over the applications you listed, we'll have caches as big as our current RAM. And I'm using a loose definition of cache: I really just mean on-package memory of some kind. We'll probably see that GPU-style memory before it's fully integrated.
It's already sort of a thing in embedded processors, such as ARM SoCs where the RAM is stacked on top of the CPU package (I think the OG Raspberry Pi did that). But current PC chips run way too hot for that to work, so the RAM stays separate.
I could maybe see it being a thing in kiosks and other limited-purpose devices (smart devices, smart watches, etc.), but not in PCs, servers, or even smartphones, where we expect much higher memory loads and more multitasking.