Agreed. For the naysayers out there, consider these problems (a quick worked check follows the list):
* You have 1 "MB" of RAM on a 1 MHz system bus which can transfer 1 byte per clock cycle. How many seconds does it take to read the entire memory?
* You have 128 "GB" of RAM and you have an empty 128 GB SSD. Can you successfully hibernate the computer system by storing all of RAM on the SSD?
* My camera shoots 6000×4000 pixels = exactly 24 megapixels. If you assume RGB24 color (3 bytes per pixel), how many MB of RAM or disk space does it take to store one raw bitmap image matrix without headers?
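A back-of-the-envelope check of all three (my own sketch, assuming the usual conventions: MHz and advertised SSD capacity are decimal, RAM sizes are binary):

```python
# Decimal (SI) vs binary readings of the same prefixes.
MB_SI, MB_BIN = 10**6, 2**20
GB_SI, GB_BIN = 10**9, 2**30

# 1. Reading 1 "MB" of RAM over a 1 MHz bus at 1 byte per cycle.
#    MHz is always decimal, so the bus moves 1,000,000 bytes/second.
print(MB_SI / 1e6, "s if the MB is decimal")   # 1.0 s
print(MB_BIN / 1e6, "s if the MB is binary")   # ~1.048576 s

# 2. Hibernating 128 "GB" of RAM (binary) onto a 128 GB SSD (decimal).
print("fits" if 128 * GB_BIN <= 128 * GB_SI else "does not fit")  # does not fit

# 3. One raw 6000x4000 RGB24 bitmap, no headers.
raw = 6000 * 4000 * 3                          # 72,000,000 bytes
print(raw / MB_SI, "decimal MB")               # 72.0
print(raw / MB_BIN, "binary MB (MiB)")         # ~68.66
```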
The SI definitions are correct: kilo- always means a thousand, mega- always means a million, et cetera. The computer industry abused these definitions because 1000 is close to 1024, creating endless confusion. It is an idiotic act of self-harm when one "megahertz" of clock speed is not the same mega- as one "megabyte" of RAM. IEC 60027 prefixes are correct: there is no ambiguity when kibi- (Ki) is defined as 1024, and it can coexist beside kilo- meaning 1000.
The whole point of the metric system is to create universal units whose meanings don't change depending on context. Having kilo- be overloaded (like method overloading) to mean 1000 and 1024 violates this principle.
If you want to wade into the bad old world of context-dependent units, look no further than traditional measures. International mile or nautical mile? Pound avoirdupois or troy pound? Pound-force or pound-mass? US gallon or UK gallon? US shoe size for children, women, or men? Short ton or long ton? Did you know that just a few centuries ago, every town had a different definition of a foot and a pound, making trade needlessly complicated and inviting open scams and frauds?
SI units attempt to anchor standard measurements to constants of nature. A meter (distance) is defined by how far light travels in a vacuum during a fixed number of oscillations of a cesium atom (time). This doesn't mean we tweak the meter to get nicer numbers, even though we'd all be happier if light really traveled 300,000 km/s instead of ~299,792 km/s.
Then there's the problem of not mixing different measurement units. SI was designed to bring all measurements onto the same base-10 exponents (cm, m, km versus feet, inches, and yards). But the author's attempt to resolve this matter doesn't even conform to standardized SI units as we would expect.
What is a byte? Well, 8 bits, sometimes. What is a kilobit? 1,000 bits. What is a kilobyte? 1,000 bytes, or 1,024 bytes.
So we've already mixed units depending on what a bit or a byte even is, with a factor of 8 thrown in on top of the exponent of 1000 or 1024.
And if you think, hey, at least the bit is the least divisible unit of information, that's not even correct. If there *should* be a reformalization of information units, you would agree that a single "0" is the least divisible unit of information. A kilo of zeros would be 1000. A 'byte' would be defined as containing up to 256 zeros. A megazero would contain up to a million zeros.
It wouldn't make any intuitive sense for anyone to count 0s, which would automatically convert your information back to base 10, but it does prove that the most sensible unit of information is already what we had before; that is, you're not mixing bytes (powers of 2) with SI-defined units of 1000.
Anyway, here's my contribution to help make everything worse. I think we should use Kylobyte, etc. when we don't care whether it's 1000 or 1024. KyB. See! Works great.
It would be nice to have a different standard for decimal vs. binary kilobytes.
But if Don Knuth thinks that the "international standard" naming for binary kilobytes is dead on arrival, who am I to argue?
Approximating metric prefixing with kibi, mebi, gibi... is confusing because it doesn't make sense semantically. There is nothing base-10-ish about it.
I propose some naming based on shift distance, derived from the latin iterativum. https://en.wikipedia.org/wiki/Latin_numerals#Adverbial_numer...
* 2^10, the kibibyte, is a deci (shifted) byte, or just a 'deci'
* 2^20, the mebibyte, is a vici (shifted) byte, or a 'vici'
* 2^30, the gibibyte, is a trici (shifted) byte, or a 'trici'
I mean, we really only need to think in bytes for memory addressing, right? The base doesn't matter much, if we were talking exabytes, does it?
* Yeah, I read the article. Regardless of the IEC's noble attempt, in all my years of working with people and computers I've never heard anyone actually pronounce MiB (or write it out in full) as "mebibyte".
Donald Knuth himself said[1]:
> The members of those committees deserve credit for raising an important issue, but when I heard their proposal it seemed dead on arrival --- who would voluntarily want to use MiB for a maybe-byte?! So I came up with the suggestion above, and mentioned it on page 94 of my Introduction to MMIX. Now to my astonishment, I learn that the committee proposals have actually become an international standard. Still, I am extremely reluctant to adopt such funny-sounding terms; Jeffrey Harrow says "we're going to have to learn to love (and pronounce)" the new coinages, but he seems to assume that standards are automatically adopted just because they are there.
If Gordon Bell and Gene Amdahl used binary sizes -- and they did -- and Knuth thinks the new terms from the pre-existing units sound funny -- and they do -- then I feel like I'm in good company on this one.
0: https://honeypot.net/2017/06/11/introducing-metric-quantity....
The author doesn’t actually answer their question, unless I missed something?
They go on to make a few more observations, and finally say only that the current differing definitions are sometimes confusing to non-experts.
I don't see much of an argument here for changing anything. Some non-experts experience minor confusion about two things that are different. Did I miss something bigger in this?
I gave some examples in my post https://blog.zorinaq.com/decimal-prefixes-are-more-common-th...
This ambiguity is documented at least back to 1984, by IBM, the pre-eminent computer company of the time.
In 1972 IBM started selling the IBM 3333 magnetic disk drive. This product catalog [0] from 1979 shows them marketing the corresponding disks as "100 million bytes" or "200 million bytes" (3336 mdl 1 and 3336 mdl 11, respectively). By 1984, those same disks were marketed in the "IBM Input/Output Device Summary" [1] (which was intended for a customer audience) as "100MB" and "200MB".
0: (PDF page 281) "IBM 3330 DISK STORAGE" http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
1: (PDF page 38, labeled page 2-7, Fig 2-4) http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
Also, hats off to http://electronicsandbooks.com/ for keeping such incredible records available for the internet to browse.
-------
Edit: The below is wrong. Older experience has corrected me - there has always been ambiguity (perhaps bifurcated between CPU/OS and storage domains). "And that with such great confidence!", indeed.
-------
The article presents wishful thinking. The wish is for "kilobyte" to have one meaning. For the majority of its existence, it had only one meaning - 1024 bytes. Now it has an ambiguous meaning. People wish for an unambiguous term for 1,000 bytes, but that word does not exist. People also might wish that others use kibibyte any time they reference 1024 bytes, but that is also wishful thinking.
The author's wishful thinking is falsely presented as fact.
I think kilobyte was the wrong word to ever use for 1024 bytes, and I'd love to go back in time to tell computer scientists that they needed to invent a new prefix to mean "1,024" / "2^10" of something, which kilo- never meant before kilobit / kilobyte were invented. Kibi- is fine; the phonetics sound slightly silly to native English speakers, but the 'bi' indicates binary and I think that's reasonable.
I'm just not going to fool myself with wishful thinking. If, in arrogance or self-righteousness, one simply assumes that every time they see "kilobyte" it means 1,000 bytes, then they will make many, many mistakes. We will always have to take care to verify whether "kilobyte" means 1,000 or 1,024 bytes before implementing something which relies on that for correctness.
Because Windows, and only Windows, shows it this way. It is official and documented: https://devblogs.microsoft.com/oldnewthing/20090611-00/?p=17...
> Explorer is just following existing practice. Everybody (to within experimental error) refers to 1024 bytes as a kilobyte, not a kibibyte. If Explorer were to switch to the term kibibyte, it would merely be showing users information in a form they cannot understand, and for what purpose? So you can feel superior because you know what that term means and other people don’t.
You can use `--si` for fake, 1000-byte kilobytes - trying it, it seems weird that these are reported with a lowercase 'k' while 'M' and so on remain uppercase.
KB is 1024 bytes, and don't you dare try stealing those 24 bytes from me
It's the same reason—for pure marketing purposes—that screens are measured diagonally.
I disagreed strongly - I think X-per-second should be decimal, to correspond to Hertz. But for quantity, binary seems better. (modern CS papers tend to use MiB, GiB etc. as abbreviations for the binary units)
Fun fact - for a long time consumer SSDs had roughly 7.37% over-provisioning, because that's what you get when you put X GB (binary) of raw flash into a box, and advertise it as X GB (decimal) of usable storage. (probably a bit less, as a few blocks of the X binary GB of flash would probably be DOA) With TLC, QLC, and SLC-mode caching in modern drives the numbers aren't as simple anymore, though.
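The ~7.37% figure is just the ratio of a binary GB (GiB) to a decimal GB; a one-line check (illustrative only):

```python
# Extra raw flash you get "for free" when binary capacity is sold as decimal capacity.
print(f"{2**30 / 10**9 - 1:.2%}")  # 7.37%
```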
Why doesn't kilobyte continue to mean 1024, with kilodebyte introduced to mean 1000? Byte, to me, implies a binary number system, and if you want to introduce new nomenclature to reduce confusion, give the new one a new name and let the older or more prevalent one in its domain keep the old one…
All they had to say was that KiB et al. were introduced in 1998, and that adoption has been slow.
And not “but a kilobyte can be 1000,” as if it’s an effort issue.
Published on 11/01/2026
Updated on 15/01/2026
When it comes to computer memory, we usually learn that a kilobyte is 1024 bytes, a megabyte is 1024 kilobytes, and so on. But what if I told you that that's not necessarily true, and 1 kilobyte can be 1000 bytes? What's more, this actually makes more sense.
Since computers work in a binary system (base 2), the memory is also addressed in binary. This means it's quite impractical to use memory addresses or produce RAM sticks with memory amounts that are not multiples of powers of 2. From the powers of 2 we chose 1024 (2^10) as the base order of magnitude, since it's very close to 1000 (2.4% difference) and it's not insanely large. So, in practice we often consider kilo as 1024 (2^10), mega as 1048576 (2^20), giga as 1073741824 (2^30), etc.
While binary kilo, mega and giga units are close to their decimal counterparts, some might already notice that the larger the unit, the larger the proportional inaccuracy. To illustrate, let's go up the scale:
| Unit | Decimal value | Binary value | Relative difference |
|---|---|---|---|
| Kilo | 1000 | 1024 | 2.4% |
| Mega | 1000000 | 1048576 | ≈ 4.9% |
| Giga | 1000000000 | 1073741824 | ≈ 7.4% |
| Tera | 1000000000000 | 1099511627776 | ≈ 10% |
| Peta | 10^15 | ≈ 1.126 × 10^15 | ≈ 12.6% |
| Exa | 10^18 | ≈ 1.153 × 10^18 | ≈ 15.3% |
| Zetta | 10^21 | ≈ 1.181 × 10^21 | ≈ 18.1% |
| Yotta | 10^24 | ≈ 1.209 × 10^24 | ≈ 20.9% |
| Ronna | 10^27 | ≈ 1.238 × 10^27 | ≈ 23.8% |
| Quetta | 10^30 | ≈ 1.268 × 10^30 | ≈ 26.8% |
For 1 quettabyte the inaccuracy is already larger than a quarter. Even for 1 terabyte the difference is noticeable, around 10%. This problem often shows up when hardware manufacturers (of HDDs or SSDs, for example) advertise capacity in decimal units while the operating system might show it in binary units.

[Figure: Missing 70 gigabytes]
For smaller amounts of memory the binary representation is pretty close to the decimal one, but diverges for huge amounts of memory.
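As a rough illustration of the gap in the figure above (my own numbers, assuming a drive advertised as 1 TB):

```python
# A drive sold as 1 TB (decimal), as reported by an OS that divides by 2^30.
advertised = 10**12                       # what the box says: 1,000 GB
reported = advertised / 2**30             # what a binary-units OS displays
print(round(reported), "\"GB\" shown")                   # ~931
print(round(1000 - reported), "GB apparently missing")   # ~69
```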
This "kilobyte = 1024 bytes" rule is actually an old (often confusing) convention. In the tech industry there is still huge inertia, this old convention is still used by RAM manufacturers (JEDEC), tons of software and some operating systems (such as Windows). Interestingly, storage vendors often prefer the decimal convention, which creates even more confusion (mentioned above).
In order to solve this confusion, the International Electrotechnical Commission (IEC) introduced binary prefixes for binary units, distinct from the decimal prefixes of the International System of Units (SI):
| Binary unit (IEC) | Value | Decimal unit (SI) | Value |
|---|---|---|---|
| KiB (kibibyte) | 1024^1 | kB (kilobyte) | 1000^1 |
| MiB (mebibyte) | 1024^2 | MB (megabyte) | 1000^2 |
| GiB (gibibyte) | 1024^3 | GB (gigabyte) | 1000^3 |
| TiB (tebibyte) | 1024^4 | TB (terabyte) | 1000^4 |
| PiB (pebibyte) | 1024^5 | PB (petabyte) | 1000^5 |
| EiB (exbibyte) | 1024^6 | EB (exabyte) | 1000^6 |
| ZiB (zebibyte) | 1024^7 | ZB (zettabyte) | 1000^7 |
| YiB (yobibyte) | 1024^8 | YB (yottabyte) | 1000^8 |
| RiB (robibyte) | 1024^9 | RB (ronnabyte) | 1000^9 |
| QiB (quebibyte) | 1024^10 | QB (quettabyte) | 1000^10 |
The guidance is: SI prefixes are powers of 10 only, and if you mean powers of 2 you should use IEC binary prefixes (Ki, Mi, Gi, …).
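For illustration, a minimal sketch of that guidance in code (a hypothetical helper, not from any standard library):

```python
def human_size(n_bytes: int, binary: bool = False) -> str:
    """Format a byte count with SI prefixes (kB, MB, ...) or IEC prefixes (KiB, MiB, ...)."""
    base = 1024 if binary else 1000
    prefixes = ["", "Ki", "Mi", "Gi", "Ti", "Pi"] if binary else ["", "k", "M", "G", "T", "P"]
    value = float(n_bytes)
    for prefix in prefixes:
        if value < base or prefix == prefixes[-1]:
            return f"{value:.2f} {prefix}B"
        value /= base

print(human_size(1_474_560))               # 1.47 MB  (decimal, SI)
print(human_size(1_474_560, binary=True))  # 1.41 MiB (binary, IEC)
```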
There is still a lot of inertia to equate 1 kilobyte to 1024 bytes. While this is usually acceptable depending on the context, it can sometimes cause confusion, especially for non-technical people.
Kudos for getting back. (and closing the tap of "you are wrong" comments :))
You need character to admit that. I bow to you.
Which doesn't make it more correct, of course, even though I strongly believe that it is (where appropriate for things like memory sizes). Just saying, it goes much further back than 1984.
That being said, I think the difference between MiB and MB is niche for most people.
90 mm floppy disks. https://jdebp.uk/FGA/floppy-discs-are-90mm-not-3-and-a-half-...
Which I have taken to calling 1440 KiB – accurate and pretty recognizable at the same time.
> I'm a big fan of binary numbers, but I have to admit that this convention flouts the widely accepted international standards for scientific prefixes.
He also calls it “an important issue” and had written “1000 MB = 1 gigabyte (GB), 1000 GB = 1 terabyte (TB), 1000 TB = 1 petabyte (PB), 1000 PB = 1 exabyte (EB), 1000 EB = 1 zettabyte (ZB), 1000 ZB = 1 yottabyte (YB)” in his MMIX book even before the new binary prefixes became an international standard.
He is merely complaining that the new names for the binary prefixes sound funny (and has his own proposal like “large megabyte” and notation MMB etc), but he's still using the kilo/mega/etc prefixes with decimal meanings.
"I will not sacrifice my dignity. We've made too many compromises already; too many retreats. They invade our space and we fall back. They assimilate entire worlds with awkward pronunciations. Not again. The line must be drawn here! This far, no further! And I will make them pay for what they've done to the kilobyte!"
In fact, they practically say the same exact thing you have said: in a nutshell, base-10 prefixes were used for base-2 numbers, and now it's hard to undo that standard in practice. They didn't say anything about making assumptions. The only difference is that the author wants to keep trying, and you don't think it's possible? Which is perfectly fine. It's just not as dramatic as your tone implies.
There was always confusion about whether a kilobyte was 1000 or 1024 bytes. Early diskettes always used 1000; only when the 8-bit home computer era started was the 1024 convention firmly established.
Before that it made no sense to talk about kilo as 1024. Earlier computers measured space in records and words, and I guess you can see how in 1960, no one would use kilo to mean 1024 for a 13 bit computer with 40 byte records. A kiloword was, naturally, 1000 words, so why would a kilobyte be 1024?
1024 being near ubiquitous was only the case in the 90s or so - except for drive manufacturing and signal processing. Binary prefixes didn't invent the confusion; they were a partial solution. As you point out, while it's possible to clearly indicate binary prefixes, we have no unambiguous notation for decimal bytes.
Here's my theory. In the beginning, everything was base10. Because humans.
Binary addressing made sense for RAM. Especially since it makes decoding address lines into chip selects (or slabs of core, or whatever) a piece of cake, having chips be a round number in binary made life easier for everyone.
Then early DOS systems (CP/M comes to mind particularly) mapped disk sectors to RAM regions, so to enable this shortcut, disk sectors became RAM-shaped. The 512-byte sector was born. File sizes can be written in bytes, but what actually matters is how many sectors they take up. So file sizing inherited this shortcut.
But these shortcuts never affected "real computers", only the hamstrung crap people were running at home.
So today we have multiple ecosystems. Some born out of real computers, some with a heavy DOS inheritance. Some of us were taught DOS's limitations as truth, and some of us weren't.
> The author's wishful thinking is falsely presented as fact.
There's good reason why the meanings of SI prefixes aren't set by convention or by common usage or by immemorial tradition, but by the SI. We had several thousand years of setting weights and measures by local and trade tradition and it was a nightmare, which is how we ended up with the SI. It's not a good show for computing to come along and immediately recreate the long and short ton.
They didn't abuse the definitions. It's simply the result of dealing with pins, wires, and bits. For your problems, for example, you won't ever have a system with 1 "MB" of RAM where that's 1,000,000 bytes. The 8086 processor had 20 address lines, 2^20, that's 1,048,576 bytes for 1MB. SI units make no sense for computers.
The only problem is unscrupulous hardware vendors using SI units on computers to sell you less capacity but advertise more.
32 Gb RAM chip = 4 GiB of RAM.
It doesn't matter. "kilo" means 1000. People are free to use it wrong if they wish.
No, they already did the opposite with KiB, MiB.
Because most metric decimal units are used for non-computing things. Kilometers, etc. Are you seriously proposing that kilometers should be renamed kitrimeters because you think computing prefixes should take priority over every other domain of science and life?
"I bought a two tib SSD."
"I just want to serve five pibs."
Call me recalcitrant, reactionary, or whatever, but I will not say kibibyte out loud. It's a dumb word and I'm not using it. It was a horrible choice.
More like late 60s. In fact, in the 70s and 80s, I remember the storage vendors being excoriated for "lying" by following the SI standard.
There were two proposals to fix things in the late 60s, by Donald Morrison and Donald Knuth. Neither were accepted.
Another article suggesting we just roll over and accept the decimal versions is here:
https://cacm.acm.org/opinion/si-and-binary-prefixes-clearing...
This article helpfully explains that decimal KB has been "standard" since the very late 90s.
But when such an august personality as Donald Knuth declares the proposal DOA, I have no heartburn using binary KB.
However, it doesn't seem to be divided into sectors at all; it's more like each track is a loop of magnetic tape. In that context it makes a bit more sense to use decimal units, measuring in bits per second like for serial comms.
Or maybe there were some extra characters used for ECC? 5 million / 100 / 100 = 500 characters per track, leaves 72 bits over for that purpose if the actual size was 512.
The first floppy disks - also from IBM - had 128-byte sectors. IIRC, that size was chosen because it was the smallest power of two that could store an 80-column line of text (made standard by IBM punched cards).
Disk controllers need to know how many bytes to read for each sector, and the easiest way to do this is by detecting overflow of an n-bit counter. Comparing with 80 or 100 would take more circuitry.
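A toy illustration of that point (my own sketch, not period hardware): wrapping a byte counter at a power of two just drops the high bits, while wrapping at 80 or 100 needs an explicit magnitude comparison.

```python
# 128-byte sectors: a 7-bit counter wraps "for free" with a single AND mask.
count = 0
for _ in range(300):
    count = (count + 1) & 0x7F            # rolls over to 0 after 127, no comparator needed

# 100-byte sectors: you need an explicit compare-and-reset,
# i.e. a magnitude comparator and extra circuitry in hardware.
count = 0
for _ in range(300):
    count = 0 if count + 1 == 100 else count + 1
```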
That's the microcomputer era that has defined the vast majority of our relationship with computers.
IMO, having lived through this era, the only people pushing 1,000-byte kilobytes were storage manufacturers, because it allowed them to bump their numbers up.
https://www.latimes.com/archives/la-xpm-2007-nov-03-fi-seaga...
You can get away with those on machines with 64 bit address spaces and TFLOPs of math capacity. You can't on anything older or smaller.
Why do you keep insisting the author is denying something when the author clearly acknowledges every single thing you're complaining about?
Also, if you open major Linux distros' task managers, you'll be surprised to see that they often show decimal units when the "i" is missing from the prefix. Many utilities avoid the confusing prefixes "KB", "MB"... and use "KiB", "MiB"...
Adding to your point, it is human nature to create industry- or context-specific units and refuse to play with others.
In the non-metric world, I see examples like: paper publishing uses points (1/72 inch), metal machinists use thousandths of an inch, woodworkers use feet and inches and binary fractions, land surveyors use decimal feet (unusual!), waist circumference is in inches, body height is in feet and inches, but you buy fabric by the yard, and airplane altitudes are in hundreds to tens of thousands of feet instead of decimal miles. Crude oil is traded in barrels but gasoline is dispensed in gallons. Everyone thinks their usage of units and numbers is intuitive and optimal, and everyone refuses to change.
In the metric(ish) world, I still see many tensions. The micron is a common alternate name for the micrometre, yet why don't we have a millin or nanon or picon? The solution is to eliminate the micron. I've seen the angstrom (0.1 nm) in spectroscopy and in discussions of CPU transistor sizes, yet it diverts attention away from the picometre. The bar (100 kPa) is popular for things like tire pressure because it's nearly 1 atmosphere. The mmHg is a unit of pressure that sounds metric but is not; the correct unit is the pascal. No one in astronomy uses mega/giga/tera/peta/etc.-metres; instead they use AU, parsecs, and (thousand, million, billion) light-years. Particle physicists use eV/keV/MeV instead of units around the picojoule.
Having a grab bag of units and domains that don't talk to each other is indeed the natural state of things. To put your foot down and say no, your industry does not get its own special snowflake unit, stop that nonsense and use the standardized unit - that takes real effort to achieve.
no you didn't, that doesn't exist, you bought 2 trillion bytes, about 199 billion bytes short
Similarly, the 4104 chip was a "4kb x 1 bit" RAM chip and stored 4096 bits. You'd see this in the whole 41xx series, and beyond.
Example: in 1972, DEC PDP 11/40 handbook [0] said on first page: "16-bit word (two 8-bit bytes), direct addressing of 32K 16-bit words or 64K 8-bit bytes (K = 1024)". Same with Intel - in 1977 [1], they proudly said "Static 1K RAMs" on the first page.
[0] https://pdos.csail.mit.edu/6.828/2005/readings/pdp11-40.pdf
[1] https://deramp.com/downloads/mfe_archive/050-Component%20Spe...
Even worse, the 3.5" HD floppy disk format used a confusing combination of the two. Its true capacity (when formatted as FAT12) is 1,474,560 bytes. Divide that by 1024 and you get 1440KB; divide that by 1000 and you get the oft-quoted (and often printed on the disk itself) "1.44MB", which is inaccurate no matter how you look at it.
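A quick sanity check of those numbers, assuming the standard 2-side, 80-track, 18-sectors-per-track, 512-byte-sector geometry:

```python
capacity = 2 * 80 * 18 * 512      # sides * tracks * sectors * bytes per sector
print(capacity)                   # 1,474,560 bytes
print(capacity / 1024)            # 1440.0   -> "1440 KB" (binary K)
print(capacity / 1000 / 1000)     # 1.47456  -> not 1.44 in decimal MB either
print(capacity / 1024 / 1000)     # 1.44     -> the mixed-base "1.44 MB"
```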
Many things acquire domain-specific nuanced meanings...
Yes they did. Kilo- means 1000 in SI/metric. The computer industry decided, "Gee that looks awfully close to 1024. Let's sneakily make it mean 1024 in our context and sell our RAM that way".
> It's simply the result of dealing with pins, wires, and bits. For your problems, for example, you won't ever have a system with 1 "MB" of RAM where that's 1,000,000 bytes.
I'm not disputing that. I'm 100% on board with RAM being manufactured and operated in power-of-2 sizes. I have a problem with how these numbers are being marketed and communicated.
> SI units make no sense for computers.
Exactly! Therefore, use IEC 60027 prefixes like kibi-, because they are the ones that reflect the binary nature of computers. Only use SI if you genuinely respect SI definitions.
If you think 32 Gb are binary gibibits, then you've disagreed with Ethernet (e.g. 2.5 Gb/s), Thunderbolt (e.g. 40 Gb/s), and other communication standards.
That's why I keep hammering on the same point: Creating context-dependent prefixes sows endless confusion. The only way to stop the confusion is to respect the real definitions.
“Kilo” can mean what we want in different contexts and it’s really no more or less correct as long as both parties understand and are consistent in their usage to each other.
It would be annoying if one frequently found themselves calculating gigabytes per hectare. I don't think I've ever done that. The closest I've seen is measuring magnetic tape density, where you get weird units like "characters per inch", where neither "character" nor "inch" is the common unit for its respective metric.
“A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time.”
(The old excuse was that networks are serial but they haven't been serial for decades.)
For SI units, the abbreviations are defined, so a lowercase k for kilo and uppercase M for mega is correct. Lower case m is milli, c is centi, d is deci. Uppercase G is giga, T is tera and so on.
https://en.wikipedia.org/wiki/International_System_of_Units#...
Interestingly, from 10GBit/s, we now also have binary divisions, so 5GBit/s and 2.5GBit/s.
Even at slower speeds, these were traditionally always decimal based - we call it 50bps, 100bps, 150bps, 300bps, 1200bps, 2400bps, 9600bps, 19200bps, and then we had the odd one out - 56k (actually 57600bps), where the k means 1024 (approximately); it was the first and last common speed to use base-2 kilo. Once you get into Mbps it's back to decimal.
RAM had binary sizing for perfectly practical reasons. Nothing else did (until SSDs inherited RAM's architecture).
We apply it to all the wrong things mostly because the first home computers had nothing but RAM, so binary sizing was the only explanation that was ever needed. And 50 years later we're sticking to that story.
I was going to say that what it could address and what they called what it could address is an important distinction, but found this fun ad from 1976[1].
"16K Bytes of RAM Memory, expandable to 60K Bytes", "4K Bytes of ROM/RAM Monitor software", seems pretty unambiguous that you're correct.
Interestingly wikipedia at least implies the IBM System 360 popularized the base-2 prefixes[2], citing their 1964 documentation, but I can't find any use of it in there for the main core storage docs they cite[3]. Amusingly the only use of "kb" I can find in the pdf is for data rate off magnetic tape, which is explicitly defined as "kb = thousands of bytes per second", and the only reference to "kilo-" is for "kilobaud", which would have again been base-10. If we give them the benefit of the doubt on this, presumably it was from later System 360 publications where they would have had enough storage to need prefixes to describe it.
[1] https://commons.wikimedia.org/wiki/File:Zilog_Z-80_Microproc...
[2] https://en.wikipedia.org/wiki/Byte#Units_based_on_powers_of_...
[3] http://www.bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-...
But once hard drives started hitting about a gigabyte, everyone started noticing and howling.
You have to sort of remember that these didn't exist at the time that "kilobyte" came around. The binary prefixes are — relatively speaking — very new.
I'm happy to say it isn't an SI unit. Kilo meaning 1000 makes no sense for computers, so lets just never use it to mean that.
> Therefore, use IEC 60027 prefixes like kibi-,
No. They're dumb. They sound stupid, they were decades too late, etc. This was a stupid plan. We can define kilo as 1024 for computers -- we could have done that easily -- and just not call them SI units if that makes people weird. This is how we all actually work. So rather than be pedantic about it, let's make the language and units reflect their actual usage. Easy.
I wonder if there's a wikipedia article listing these...
> All words are made up.
Yes, and the made up words of kilo and kibi were given specific definitions by the people who made them up:
* https://en.wikipedia.org/wiki/Metric_prefix
* https://en.wikipedia.org/wiki/Binary_prefix
> […] as long as both parties understand and are consistent in their usage to each other.
And if they don't? What happens then?
Perhaps it would be easier to use the words' definitions as they are set out in standards and regulations, so context is less of an issue.
Only later did some marketing assholes think they could better sell their hard drives by lying about the size, and weasel out of legal issues by redefining the units.
the same and even more confusion is engendered when talking about "fifths" etc.
I don't know if that's correct, but at least it'd explain the mismatch.
"in binary computing traditionally prefix + byte implied binary number quantities."
There are no bytes involved in Hz or FLOPs.
Good for them. People make up their own definitions for words all the time. Some of those people even try to get others to adopt their definition. Very few are ever successful. Because language is about communicating shared meaning. And there is a great deal of cultural inertia behind the kilo = 2^10 definition in computer science and adjacent fields.
It's also stupid because it's rare that anyone outside of programming even needs to care exactly how many bytes something is. At the scales where kilobytes, megabytes, gigabytes, terabytes, etc. are used, the smaller values are pretty much insignificant details.
If you ask for a kilogram of rice, you probably care more that this 1 kg of rice is the same as the last 1 kg of rice you got; you probably wouldn't even care how many grams that is. Similarly, if you order 1 ton of rice, do you care exactly how many grams it is, or do you just care that this 1 ton is the same as that 1 ton?
This whole stupidity started because hard disk manufacturers wanted to make their drives sound bigger than they actually were. At the time, everybody buying hard disks knew about this deception and just put up with it. We'd buy their 2GB drive and think to ourselves, "OK so we have 1.86 real GB". And that was the end of it.
Can you just imagine if manufacturers started advertising computers as having 34.3GB of RAM? Everybody would know it was nonsense and call it 32GB anyway.
It's easy to find some that are marketed as 500GB and have 500x10^9 bytes [0]. But all the NVMe's that I can find that are marketed as 512GB have 512x10^9 bytes[1], neither 500x10^9 bytes nor 2^39 bytes. I cannot find any that are labeled "1TB" and actually have 1 Tebibyte. Even "960GB" enterprise SSD's are measured in base-10 gigabytes[2].
0: https://download.semiconductor.samsung.com/resources/data-sh...
1: https://download.semiconductor.samsung.com/resources/data-sh...
2: https://image.semiconductor.samsung.com/resources/data-sheet...
(Why are these all Samsung? Because I couldn't find any other datasheets that explicitly call out how they define a GB/TB)
E.g. Macs measure file sizes in powers of 10 and call them KB, MB, GB. Windows measures file sizes in powers of 2 and calls them KB, MB, GB instead of KiB, MiB, GiB. Advertised hard drives come in powers of 10. Advertised memory chips come in powers of 2.
When you've got a large amount of data or are allocating an amount of space, are you measuring its size in memory or on disk? On a Mac or on Windows?
What the hell is a "kibibyte"? Sounds like a brand of dog food.
I don't know what the better alternative would have been, but this certainly wasn't it.
1. defined traditional suffixes and abbreviations to mean powers of two, not ten, aligning with most existing usages, but...
2. deprecated their use, especially in formal settings...
3. defined new spelled-out vocabulary for both pow10 and pow2 units, e.g. in English "two megabytes" becomes "two binary megabytes" or "two decimal megabytes", and...
4. defined new unambiguous abbreviations for both decimal and binary units, e.g. "5MB" (traditional) becomes "5bMB" (simplified, binary) or "5dMB" (simplified, decimal)
This way, most people most of the time could keep using the traditional units and be understood just fine, but in formal contexts in which precision is paramount, you'd have a standard way of spelling out exactly what you meant.
I'd have gone one step further too and stipulate that truth in advertising would require storage makers to use "5dMB" or "5 decimal megabytes" or whatever in advertising and specifications if that's what they meant. No cheating using traditional units.
(We could also split bits versus bytes using similar principles, e.g. "bi" vs "by".)
I mean, consider the UK, which still uses pounds, stone, and miles. In contexts where you'd use those units, writing "10KB" or "one megabyte" would be fine too.
It's leagues better than "kibibyte".
Yeah it sounds dumb, but it’s really not that different from your suggestion.