Why is Gnome fractional scaling 1.7518248558044434 instead of 1.75?
(unix.stackexchange.com)
Floating point error? Yeaahhh no. No. Just... no. That is NEVER as big as 0.01 unless the number is also insanely massive.
The error scales with the magnitude of the number. It doesn't magically become a significant fraction of the value.
TBF the error can become that big if you do a bunch of unstable operations (i.e. operations that continue to increase the relative error), though that's probably not what is happening here.
To get to 0.01 error, you'd need to add up trillions of trillions of floating point errors. It will not happen solely because of floating point unless you're doing such crazy math that you shouldn't be using primitives in the first place.
That's why I said unstable operations. Addition is considered a stable operation (for values with the same sign).
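A tiny illustration of that distinction (my own sketch, not from the thread): subtracting nearly equal values amplifies relative error, while repeated same-sign addition does not.

```python
# Cancellation vs. same-sign addition in double precision.
a = 1.0 + 1e-15      # stored as the nearest double, about half an ulp off the true value
b = 1.0

# Subtracting nearly equal numbers: that tiny representation error becomes a
# ~10% relative error in the result (prints 1.1102230246251565e-15, not 1e-15).
print(a - b)

# Same-sign addition stays stable: the sum of ten 0.1s is within one part in
# 10^16 of the exact answer (prints 0.9999999999999999).
print(sum([0.1] * 10))
```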
0.001, but still
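For reference, a quick comparison (my sketch) of the offset in the title against the representation error you'd actually expect near 1.75 in double or single precision:

```python
import math

stored = 1.7518248558044434     # value from the post title
print(stored - 1.75)            # ~0.0018: the offset being discussed
print(math.ulp(1.75))           # ~2.2e-16: double-precision spacing at 1.75
print(2.0 ** -23)               # ~1.2e-7: single-precision spacing for values in [1, 2)
```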
As the answer in the link explains, it's the scaling factor being adjusted so the logical size lands on a whole number of pixels, plus a loss of precision from rounding between single- and double-precision floating point values.
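A rough sketch of those two steps, assuming (my assumption, it isn't stated in the thread) a 3840x2160 panel at a requested 175% scale. Searching for a nearby scale at which both logical dimensions are whole pixels, then round-tripping it through a 32-bit float, happens to reproduce the exact value in the title; the search below is a simplification, not mutter's actual code:

```python
import struct

width, height, requested = 3840, 2160, 1.75   # assumed monitor and requested scale

# Find the scale closest to the request at which both logical dimensions
# come out as whole numbers of pixels.
best = None
for logical_h in range(int(height / requested) - 10, int(height / requested) + 11):
    scale = height / logical_h
    logical_w = width / scale
    if abs(logical_w - round(logical_w)) < 1e-9:          # width also integral
        if best is None or abs(scale - requested) < abs(best - requested):
            best = scale

print(best)  # ~1.751824817518..., the pixel-aligned scale (2160/1233)

# The setting is stored as a 32-bit float; reading it back into a double
# adds the long tail of digits seen in the config file.
as_float32 = struct.unpack('f', struct.pack('f', best))[0]
print(as_float32)  # 1.7518248558044434
```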
So I'm not really sure of the point of this post. It's not a question, as the link quite effectively answers it. It's more just "here's why your scaling factor looks weird in your gnome config file", and it's primarily the first reason - rounding to whole pixels.