[-] over_clox@lemmy.world 12 points 3 days ago* (last edited 3 days ago)

From a programmer and optimizer perspective, I always prefer the original binary definitions for memory sizes.

Like, I prefer the speed and convenience of being able to perform bit shifts within a binary system to quickly multiply and divide by powers of 2, without the headache of having to think in decimal.
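
Something like this, just a rough sketch (values and names made up):

```c
#include <stdio.h>

int main(void) {
    unsigned int pages = 37;            /* arbitrary example value */

    /* Multiply by 1024 (2^10) with a left shift: pages * 1024 == pages << 10 */
    unsigned int bytes = pages << 10;

    /* Divide by 4096 (2^12) with a right shift: bytes / 4096 == bytes >> 12 */
    unsigned int blocks = bytes >> 12;

    printf("%u pages = %u bytes = %u 4096-byte blocks\n", pages, bytes, blocks);
    return 0;
}
```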

The whole base 10 thing is more meant for the average consumer and marketing, not those that actually understand the binary nature of the machine.

[-] CombatWombatEsq@lemmy.world 4 points 3 days ago

For sure. I do think, though, that most people expect the base 10 versions, so even though I prefer to work in KiB, I always quote kB in user-facing documentation.

[-] brygphilomena@lemmy.dbzer0.com 3 points 3 days ago

I still use powers of 2 for everything. Even though I can make an HDD any size I want with virtualization, I still size it as a power of 2.

For anything consumer-facing, like marketing, sure, it's 1000. But it just makes sense to keep programming with 1024.

[-] LodeMike@lemmy.today 1 points 3 days ago

Okay, so you can take 0.1 seconds to write a lowercase "i".

[-] over_clox@lemmy.world 4 points 3 days ago

No, I learned programming back when programmers actually worked in binary and sexadecimal (OK, IBM fanboys, they call it hexadecimal now, since IBM doesn't like sex).

I still use the old measurement system, save for the rare occasions I gotta convert into layman's terms for the average person.

It tells you a lot really quickly when talking to someone if they don't understand why 2^10 (1024) is the underlying standard that the CPU likes.

Oh wait, there's a 10 in (2^10)...

Wonder where that came from?.. 🤔

I dunno, but bit-shift multiplications and divisions are super fast in the integer realm, and get dogshit slow when you do the same scaling in the decimal realm.
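
Roughly what I mean, as a sketch (made-up values):

```c
#include <stdio.h>

int main(void) {
    unsigned long long bytes = 5ULL * 1024 * 1024;   /* made-up value: 5 MiB */

    /* Binary scaling is just shifts: >>10 for KiB, >>20 for MiB. */
    unsigned long long kib = bytes >> 10;
    unsigned long long mib = bytes >> 20;

    /* Decimal scaling needs full integer divisions by 1000 and 1000000. */
    unsigned long long kb = bytes / 1000ULL;
    unsigned long long mb = bytes / 1000000ULL;

    printf("%llu bytes = %llu KiB = %llu MiB\n", bytes, kib, mib);
    printf("%llu bytes = %llu kB  = %llu MB\n", bytes, kb, mb);
    return 0;
}
```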

[-] LodeMike@lemmy.today 2 points 3 days ago

I'm not denying any of that. You can just be precise, is all.

[-] over_clox@lemmy.world 3 points 3 days ago

But if you fall into the folly of decimal on a device inherently meant to process binary, then you might allocate an array of 1000 items rather than the natural binary 1024, and if the rest of the code assumes 1024, you've got a good chance of a memory overflow...

Like, sell by the 1000, but program by the 1024.
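
The kind of mismatch I mean looks something like this (contrived, hypothetical names, deliberately buggy):

```c
#include <string.h>

#define DECIMAL_K 1000   /* the "sell by 1000" size */
#define BINARY_K  1024   /* the size the rest of the code assumes */

static char buffer[DECIMAL_K];

/* Deliberately broken: copies BINARY_K bytes into a DECIMAL_K buffer,
 * writing 24 bytes past the end of the array. */
void fill_one_k(const char *src)
{
    memcpy(buffer, src, BINARY_K);
}
```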

[-] LodeMike@lemmy.today 1 points 2 days ago

All the more reason to be precise

[-] Xavienth@lemmygrad.ml 1 points 3 days ago

It's less for the consumer and more of an SI/IEC/BIPM thing. If the prefix k can mean either 1000 or 1024 depending on context, that can cause all sorts of problems. They maintain that k means strictly 1000, for good reason.
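
For instance (rough sketch, using a made-up "1 TB" drive as the example):

```c
#include <stdio.h>

int main(void) {
    /* A "1 TB" drive as the vendor counts it: 10^12 bytes. */
    unsigned long long bytes = 1000000000000ULL;

    double tb  = bytes / 1e12;                           /* SI:  T  = 10^12 */
    double tib = bytes / (1024.0 * 1024 * 1024 * 1024);  /* IEC: Ti = 2^40  */

    /* Same drive, two different numbers -- which is why k/M/G/T are pinned
     * to powers of 1000 and Ki/Mi/Gi/Ti to powers of 1024. */
    printf("%.2f TB = %.2f TiB\n", tb, tib);   /* 1.00 TB = 0.91 TiB */
    return 0;
}
```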

[-] qprimed@lemmy.ml 1 points 3 days ago

KiB for FTW :-)

been using it for years now when i need to be precise. colloquially, everyone i know still understands that contextually, K is 2^10

[-] over_clox@lemmy.world 3 points 3 days ago

I'm not all about jamming a random i into terminology that was already well defined decades ago. But hey, you go for it if that's what you prefer.

By the way, 'for FTW' makes about as much sense as saying 'ATM machine', it's redundant.

[-] qprimed@lemmy.ml 3 points 3 days ago* (last edited 3 days ago)

yup! serves me right for responding while rushing out of the door. gonna leave that here for posterity.

edit: and... switching networks managed to triple post this response. i think that's enough internet for today.

[-] over_clox@lemmy.world 2 points 3 days ago

LOL, redundancy FTW 👍😂🤣

[-] otacon239@lemmy.world 3 points 3 days ago

Smh my head

[-] nous@programming.dev 1 points 3 days ago

KiB was defined decades ago... Way back in 1999. Before that it was not well defined: kB could mean binary or decimal depending on what or who was doing the measurements.

[-] over_clox@lemmy.world 1 points 3 days ago* (last edited 3 days ago)

And? I started programming back in 1996, when most computer storage and memory measurements were already generally defined around the base-2 binary system.

Floppy disks were about the only exception: the "1.44MB" figure was a hybrid, 1440 × 1024 bytes, so neither pure base 10 nor pure base 2. It was indeed a clusterfuck. A "1.44MB" disk is technically about 1.41MiB in modern terms.
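
The arithmetic, as a quick sketch:

```c
#include <stdio.h>

int main(void) {
    /* "1.44MB" floppy: 2880 sectors x 512 bytes = 1440 * 1024 bytes. */
    unsigned long bytes = 1440UL * 1024UL;

    printf("%lu bytes\n", bytes);                                /* 1474560     */
    printf("%.2f MB (base 10)\n", bytes / 1e6);                  /* ~1.47       */
    printf("%.2f MiB (base 2)\n", bytes / (1024.0 * 1024.0));    /* ~1.41       */
    printf("%.2f 'floppy MB' (1000*1024)\n", bytes / 1024000.0); /* 1.44 hybrid */
    return 0;
}
```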

I do wonder sometimes how many buffer overflow errors and such are the result of 'programmers' declaring their arrays in base 10 (1000) rather than base 2^10 (1024)... 🤔

[-] corsicanguppy@lemmy.ca 5 points 3 days ago

Kibibytes can fuck off.
