If you store the textures in a format without hardware decoding support, though, then I guess you would only get the bus-speed advantage if you decode in a compute shader. Is that what people do? Or is it the storage bus that people care about, not PCIe? Even if you decoded on the CPU you'd still get faster retrieval from storage; the CPU would introduce a bit of latency, but it might be less than what the disk would have added.
In a video game, you can walk up close enough to a wall that only 5% of the texture fills your entire screen. That means the texture is very zoomed in, and you can clearly see the lack of detail even in a 4K texture, even on a 1080p monitor.
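As a rough sketch of why that matters (the 4096x4096 texture size, 1920-pixel screen width, and square visible patch are my assumptions, not numbers from the comment above): if 5% of a texture's area fills the screen, you can work out how many texels actually cover each screen pixel.

```python
import math

# Back-of-envelope sketch with assumed numbers: how many texels land on each
# screen pixel when a small patch of a texture fills a 1080p display.

def texels_per_pixel(tex_size, visible_fraction, screen_width):
    # Side length (in texels) of the visible square patch of the texture.
    visible_texels = math.sqrt(visible_fraction) * tex_size
    return visible_texels / screen_width

# 5% of a 4096x4096 texture stretched across a 1920-pixel-wide screen:
print(round(texels_per_pixel(4096, 0.05, 1920), 2))  # ~0.48 texels per pixel
```

Anything under roughly 1 texel per pixel means the sampler is magnifying and blurring, so even a "4K" texture runs out of detail on a 1080p screen here.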
You're also not "pushing 4x the pixels through the game". The only possible performance downside is the VRAM usage (and probably slightly less efficient sampling).
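For the VRAM side, a quick sketch of the cost (assuming a BC7-compressed color texture, which is 1 byte per texel, with a full mip chain adding roughly a third on top; the specific sizes are my own example):

```python
def texture_vram_bytes(size, bytes_per_texel=1.0, mipmapped=True):
    # BC7 compression stores 128 bits per 4x4 block = 1 byte per texel.
    # A full mip chain adds 1/4 + 1/16 + ... ~= an extra third (factor 4/3).
    base = size * size * bytes_per_texel
    return base * 4 / 3 if mipmapped else base

print(round(texture_vram_bytes(2048) / 2**20, 1))  # ~5.3 MiB for a 2K texture
print(round(texture_vram_bytes(4096) / 2**20, 1))  # ~21.3 MiB for a 4K texture
```

So each doubling of resolution quadruples the VRAM for that texture, but it doesn't multiply the per-frame shading work the way rendering at 4x the screen resolution would.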
Is there any way an additional decompression step can be done without increasing load times and latency?
Once GPU hardware becomes good enough that even low-end computers can support real-time ray tracing at usable speeds, game developers will be able to remove the lightmaps, AO maps, etc. that usually comprise a very significant fraction of a game's total file size. The problem with lightmaps is that even re-used textures still need unique lightmaps, and you also need an additional 3D grid of baked light probes to light dynamic objects in the scene.
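To get a feel for the sizes involved, here's a toy estimate; the surface area, texel density, and probe-grid dimensions are made-up illustrative numbers, and the 48 bytes per probe assumes L1 spherical-harmonic RGB coefficients stored as floats:

```python
# Toy estimate with made-up numbers: baked lighting needs UNIQUE texels for
# every surface (repeating the albedo texture doesn't help), plus a probe grid.

def lightmap_bytes(surface_area_m2, texels_per_meter, bytes_per_texel=4):
    # Unique lightmap coverage for all level geometry.
    return surface_area_m2 * texels_per_meter**2 * bytes_per_texel

def probe_grid_bytes(nx, ny, nz, bytes_per_probe=48):
    # 3D grid of baked light probes for dynamic objects
    # (48 bytes ~= L1 SH in RGB as 12 floats; an assumption).
    return nx * ny * nz * bytes_per_probe

print(round(lightmap_bytes(20_000, 16) / 2**20, 1))    # MiB of lightmap data
print(round(probe_grid_bytes(128, 16, 128) / 2**20, 1))  # MiB of probe data
```

Even at these modest densities the baked data is tens of MiB per level, and unlike albedo textures none of it can be shared between levels or surfaces.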
Activision has a very interesting lighting technique that allows some fairly good fidelity from slightly lower resolution lightmaps (allowing normal maps and some geometry detail to work over a single lightmap texel) in combination with surface probes and volume probes, but it's still a fairly significant amount of space. It also requires nine different channels afaik instead of the three that a normal lightmap would have. (https://advances.realtimerendering.com/s2024/content/Roughton/SIGGRAPH%20Advances%202024%20-%20Hemispheres%20Presentation%20Notes.pdf)
4k textures do not become magically useless when you have a 1080p monitor. The thing about video games is that the player can generally move their head anywhere they want, including going very close to any texture.
The problem is, if you used normal compression formats, you would have to decompress them and then recompress them into the GPU-supported formats every time you wanted to load an asset. That would either increase load times by a lot, or make streaming in new assets in real time much harder.
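To make that tradeoff concrete, here's a toy load-path model; every number in it is an illustrative assumption, not a benchmark. Shipping GPU-ready BCn data is just disk read then upload, while shipping PNG/JPEG adds a CPU decode and BCn re-encode stage:

```python
# Toy model; all throughput numbers are illustrative assumptions.
# GPU-ready BCn path:  disk read -> upload.
# PNG/JPEG path:       disk read -> CPU decode + BCn re-encode -> upload.

def load_ms(size_mb, disk_mb_per_s, cpu_mb_per_s=None):
    ms = size_mb / disk_mb_per_s * 1000       # time to read from disk
    if cpu_mb_per_s is not None:              # decode + re-encode stage
        ms += size_mb / cpu_mb_per_s * 1000
    return ms

bcn_ms = load_ms(16, disk_mb_per_s=3000)                   # 16 MB BC7 texture
png_ms = load_ms(8, disk_mb_per_s=3000, cpu_mb_per_s=200)  # smaller file, slow transcode
print(round(bcn_ms, 1), round(png_ms, 1))  # 5.3 42.7
```

Under these assumed numbers the transcode dominates even though the conventionally compressed file is half the size, which is why engines precompress to BCn offline instead.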
Blender was also used a bit in Everything Everywhere All At Once
Microsoft had relatively interesting ideas concerning 3D and VR content, then delivered an extremely mediocre execution, simultaneously dumbing everything down while also making it hard to use, and then discontinued the software after barely touching it for seven years.
I have a Reverb G2 (Windows Mixed Reality headset). It's genuinely a good headset and is still competitive with the Quest 3 in several areas for PC use. The WMR software itself isn't that bad, and with more care and attention I think it could genuinely have been great: better home options, user-created homes, more customization, the ability to fix things in place so you don't accidentally move them, (even just user-created) minigames and dynamic objects that persist in the world, and, most importantly, the ability to actually invite other people into the space to play with you and launch into other games together.

They're Microsoft; they were large enough and early enough that I'm sure they could have gotten game developers on board with some protocol that automatically brings the people you're playing with into a multiplayer session of whatever game you start. I think they were onto something with their home system and could have fleshed the software out into something much better than even the modern competition.

Of course it's all discontinued now, and the latest version of Windows doesn't even support it. I plan to keep using the old version until it stops getting security patches in 2026 and then switch to Linux, where hopefully the open source community will finally fully support the controllers.
shocking: users of open-source reddit alternative like open-source things
not really a joke article, since the guy did make it, but it also isn't a product; it was just an 'art project'
They massively changed the UI in 2019, in version 2.8. Hasn't changed much since then though.
If you remember Blender having a bad-looking light grey UI and no support for multiple workspaces, that's the old version.
I think the only actual performance downside, once everything is already loaded into VRAM, should be worse memory locality of the sampled values (which shouldn't matter at all if mipmaps are being used)
What other step could decrease performance?