[-] wischi@programming.dev 1 points 3 weeks ago* (last edited 3 weeks ago)

Not only for audio, but for everything that doesn't need an exact base 10 representation (the way money does). Anything that represents something "analog" or "measured" is perfectly fine to store in a float: temperature, humidity, wind speed, car velocity, rocket acceleration, etc. Calculations with floats are perfectly accurate and, given the same bit length, as accurate as with decimal types. The only thing floats can't do is exactly(!) represent base 10 decimals, but for a very large number of applications that doesn't matter.

[-] wischi@programming.dev 1 points 3 weeks ago

That's not really true, and it depends on what you mean. If your decimal datatype has the same number of bits, it's not more accurate than a base 2 float. This is often hidden because many decimal implementations aren't 64 bit but 128 bit or more. What a decimal type can do is exactly represent base 10 numbers, which is not a requirement for a lot of applications.

You can use floats everywhere you don't need numbers to be base 10. With base 2 floats, the operations couldn't be more accurate given the limit of 64 bits. But if you write f64 x = 0.1; and assume that the computer somehow stored 0.1 inside x, you've already made a wrong assumption. 0.1 can't be converted into a float exactly, because it's periodic in base 2. A very, very pedantic compiler wouldn't even let you compile that and would force you to pick a value that actually can be represented.
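
To see what actually ends up in x, you can print the stored value with more digits than the default formatting shows. A minimal Rust sketch (the f64 x = 0.1; above is pseudocode; Rust spells it let x: f64 = 0.1;):

```rust
fn main() {
    let x: f64 = 0.1;
    // Print 20 decimal places: what's stored is the closest
    // representable double to 0.1, not 0.1 itself.
    println!("{:.20}", x); // 0.10000000000000000555
}
```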

Down the rabbit hole: https://zeta.one/floats-are-not-inaccurate/

[-] wischi@programming.dev 1 points 3 weeks ago

But that's not because floats are inaccurate. A very, very pedantic compiler wouldn't even let you write f64 x = 0.1; because 0.1 (and also 0.2 and 0.3) can't be converted to a float exactly. (Note that 0.5, 0.25, 0.125, etc. can be stored exactly!)

The moment you write f64 x = 0.1; and expect the computer to store that inside a float, you've already made a wrong assumption. What the computer actually stores is the float value that is as close as possible to 0.1 - not because floats are inaccurate, but because floats are base 2. Note that floating point types in general don't have to be base 2 - they can be any base (decimal types, for example, are base 10) - but IEEE 754 floats are base 2 because that allows for simpler hardware implementations.

An even more pedantic compiler would only let you write floating point literals in binary, like 10.10110001b, and make you do the conversion yourself, because that would make it blatantly obvious that most base 10 decimals can't even be converted without information loss. So the "inaccuracy" is not(!) because float calculations are inaccurate, but because many people wrongly assume that the base 10 literal they wrote can be stored inside a float.
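
For illustration, a minimal Rust sketch of such a conversion (the parser function is made up for this example): it reads a binary float literal digit by digit. A short binary fraction like 10.10110001b fits into a double exactly, so nothing is lost in this direction - the loss only happens when converting most base 10 decimals to base 2.

```rust
// Hypothetical helper: parse a binary fraction literal into an f64.
fn parse_binary_float(s: &str) -> f64 {
    let (int_part, frac_part) = s.split_once('.').unwrap_or((s, ""));
    let mut value = 0.0;
    for c in int_part.chars() {
        value = value * 2.0 + (c as u8 - b'0') as f64; // shift left, add bit
    }
    let mut weight = 0.5;
    for c in frac_part.chars() {
        value += (c as u8 - b'0') as f64 * weight; // add 1/2, 1/4, 1/8, ...
        weight /= 2.0;
    }
    value
}

fn main() {
    // 10.10110001b = 2 + 1/2 + 1/8 + 1/16 + 1/256 = 2.69140625, exactly
    println!("{}", parse_binary_float("10.10110001")); // 2.69140625
}
```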

Floats are actually really accurate (ignoring some Intel FPU hardware bugs). I skipped a lot of details which you can find here: https://zeta.one/floats-are-not-inaccurate/

Equipped with that knowledge, the calculation 0.1 + 0.2 != 0.3 can simply be translated into: "the closest float to 0.1" + "the closest float to 0.2" is not equal to "the closest float to 0.3". Keep in mind that the addition itself is correctly rounded(!) on every IEEE 754 conforming implementation: it returns exactly the representable value closest to the true sum of its two operands.
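
That translation is easy to check. A minimal Rust sketch, printing the actual stored operands:

```rust
fn main() {
    let a: f64 = 0.1; // stores the closest double to 0.1
    let b: f64 = 0.2; // stores the closest double to 0.2
    let c: f64 = 0.3; // stores the closest double to 0.3
    println!("{}", a + b == c);  // false
    println!("{:.20}", a + b);   // 0.30000000000000004441
    println!("{:.20}", c);       // 0.29999999999999998890
}
```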

[-] wischi@programming.dev 1 points 8 months ago

True, but it's still not what I meant, unless they kill those humans. The employees that did that work before still need their 100 W. It might be that they can now do something else (or just be unemployed), but the net energy usage is not going down.

[-] wischi@programming.dev 1 points 11 months ago

Nice. Hadn't thought about that 🤣

[-] wischi@programming.dev 1 points 11 months ago

It doesn't matter whether you divide ln(2) or x by three; it's the same thing.
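
(Assuming the quantity in question is a product of the form x · ln(2), this is just associativity: (x/3) · ln(2) = x · (ln(2)/3) = x · ln(2)/3.)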

[-] wischi@programming.dev 1 points 11 months ago

They won't open source it because the Rust code is very likely a joke. They are proud of using just two dependencies, don't know that their "statically generated" stuff is actually called server-side rendering, and are hosting this stuff on a fuckin laptop.

It's probably a project that will teach them a lot. But in practice their implementation is worthless to everybody else because they are obviously completely inexperienced.

That said, the project is likely not worthless to them, because they will probably learn a ton about why it's hard to build a search engine.

[-] wischi@programming.dev 1 points 1 year ago

Probably a quick flash and it's gone 🤣

[-] wischi@programming.dev 1 points 1 year ago* (last edited 1 year ago)

There is a benefit in using 1000: it's consistent with all the other 1000-based conversions - kg to gram, km to meter, etc. - and you can do it in your head, because we use a base 10 number system.

36826639 bytes are 36.826639 MB. But how many MiB is that? I couldn't tell you without a calculator.
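
For illustration, a minimal Rust sketch of both conversions:

```rust
fn main() {
    let bytes = 36_826_639_u64;
    // Decimal prefix: just shifts the decimal point.
    println!("{} MB", bytes as f64 / (1000.0 * 1000.0));  // 36.826639 MB
    // Binary prefix: needs a real division by 1024².
    println!("{} MiB", bytes as f64 / (1024.0 * 1024.0)); // ~35.1206 MiB
}
```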

[-] wischi@programming.dev 1 points 1 year ago* (last edited 1 year ago)

The underlying chips certainly come in exact powers of two, but the drive size you get as a consumer is practically never an exact power of two, which is why it doesn't really make sense to divide by 1024.

The size you provided would be 500107862016 / 1024 / 1024 / 1024 = 465.76174163818359375 GiB

Divided by 1000³ it would be 500.107862016 GB, so neither number is "pretty" and both would have to be rounded. That's why there is no benefit in using 1024 for storage devices, not even for SSDs.

The situation is a bit different with RAM. 16 "gig" modules are exactly 17179869184 bytes. https://www.wolframalpha.com/input?i=prime+factors+of+17179869184

So you could say 17.179869184 GB or 16 GiB. Note that those 16 GiB are not rounded - that's the exact number of bytes in the module. So for memory like caches, RAM, etc. it definitely makes sense to use binary prefixes with the 1024 conversion, but for storage devices it makes no difference, because you'd have to round either way.
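
A minimal Rust sketch of all three conversions:

```rust
fn main() {
    let gib = 1024.0 * 1024.0 * 1024.0; // 1024³ bytes per GiB

    let ssd_bytes = 500_107_862_016_u64;
    // SSD: neither conversion gives a round number, both need rounding.
    println!("{} GB", ssd_bytes as f64 / 1e9);   // 500.107862016 GB
    println!("{} GiB", ssd_bytes as f64 / gib);  // ~465.7617416381836 GiB

    let ram_bytes = 17_179_869_184_u64; // a 16 "gig" RAM module, 2^34 bytes
    // RAM: the binary prefix gives an exact round number.
    println!("{} GiB", ram_bytes as f64 / gib);  // exactly 16 GiB
}
```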

[-] wischi@programming.dev 1 points 1 year ago

Not even SSDs are. Do you have an SSD? Look up the exact drive size in bytes - it's very likely not an exact power of two.

[-] wischi@programming.dev 1 points 2 years ago

It's not unheard of, no, but if you have to rule out two for some reason, it's because of some other arbitrary choice. In the first instance (I haven't looked at the second and third ones yet) it has to do with the fact that a sum of "two" was chosen arbitrarily. You can come up with other constraints that require you to exclude primes up to five.
