From a programmer's and optimizer's perspective, I always prefer the original binary definitions for memory sizes.
Like, I prefer the speed and convenience of being able to use bit shifts to quickly multiply and divide by powers of 2, without the headache of having to think in decimal.
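Here's a minimal C sketch of what I mean (the buffer size and names are just made up for illustration): with binary units, converting between bytes, KiB, and MiB is a plain shift by 10 or 20 bits, and scaling by a power of 2 is a left shift, no decimal arithmetic involved.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical buffer size expressed in binary units: 8 MiB. */
    uint64_t bytes = 8ULL * 1024 * 1024;

    /* With power-of-2 units, unit conversion is just a shift:
       dividing by 1024 == shifting right by 10 bits. */
    uint64_t kib = bytes >> 10;   /* bytes -> KiB */
    uint64_t mib = bytes >> 20;   /* bytes -> MiB */

    /* Multiplying by a power of 2 is a left shift:
       16 pages of 4 KiB each == 16 << 12 bytes. */
    uint64_t sixteen_pages = 16ULL << 12;

    printf("%llu bytes = %llu KiB = %llu MiB\n",
           (unsigned long long)bytes,
           (unsigned long long)kib,
           (unsigned long long)mib);
    printf("16 pages of 4 KiB = %llu bytes\n",
           (unsigned long long)sixteen_pages);
    return 0;
}
```

With decimal (SI) units you'd be stuck doing actual divisions by 1000, which is both slower in a hot path and more awkward to reason about when everything underneath (pages, cache lines, alignment) is already a power of 2.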
The whole base-10 thing is meant more for the average consumer and for marketing, not for those who actually understand the binary nature of the machine.