There are a few basic terms that are simple and easy to understand:
* A bit (b) is the smallest unit of data, consisting of just {0,1}
* 1 nibble (no standard symbol) = 4 bits (a cutesy term with limited usage, mostly around bitfields)
* 1 byte (B) = 8 bits (you could also say 2 nibbles, but that’s rare)
To convert between bits and bytes (with any prefix), just multiply or divide by eight; nice and simple.
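As a minimal Python sketch (the function names are made up for illustration):

```python
def bits_to_bytes(bits):
    """Divide by 8; works the same regardless of prefix."""
    return bits / 8

def bytes_to_bits(num_bytes):
    """Multiply by 8."""
    return num_bytes * 8

print(bits_to_bytes(64))  # 8.0
print(bytes_to_bits(2))   # 16
```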
Now, things get a little more complicated because there are two systems of measuring large groups of data: decimal and binary. For years, computer programmers and engineers just used the same terms for both, but the confusion eventually prompted attempts to standardize a proper set of prefixes.
Each system uses a similar set of prefixes that can be applied to either bits or bytes. The prefixes start out the same in both systems, but the binary ones sound like baby-talk after the first syllable.
The decimal system is base-10, which most people are used to and comfortable with because we have 10 fingers. The binary system is base-2, which most computers are used to and comfortable with because they have two voltage states.
The decimal system is obvious and easy to use for most people (it’s simple enough to multiply in our heads). Each prefix goes up by a factor of 1,000 (the reason for that is a whole different matter).
The binary system is much harder for most non-computer people to use, and even programmers often can’t multiply arbitrarily large numbers in their heads. Nevertheless, it’s a simple matter of powers of two. Each prefix goes up by a factor of 1,024. One “K” is 1,024 because that is the power of two closest to the decimal “k” of 1,000 (the two are close at the first prefix, but the difference rapidly increases with each successive prefix).
The numbers are the same for bits and bytes that have the same prefix.
* Decimal:
* 1 kilobyte (kB) = 1,000 B = 1,000^1 B = 1,000 B
* 1 megabyte (MB) = 1,000 kB = 1,000^2 B = 1,000,000 B
* 1 gigabyte (GB) = 1,000 MB = 1,000^3 B = 1,000,000,000 B
* 1 kilobit (kb) = 1,000 b = 1,000^1 b = 1,000 b
* 1 megabit (Mb) = 1,000 kb = 1,000^2 b = 1,000,000 b
* 1 gigabit (Gb) = 1,000 Mb = 1,000^3 b = 1,000,000,000 b
* …and so on, just like with normal metric units: meters, liters, etc.
* each successive prefix is the previous one multiplied by 1,000
* Binary:
* 1 kibibyte (KiB) = 1,024 B = 1,024^1 B = 1,024 B
* 1 mebibyte (MiB) = 1,024 KiB = 1,024^2 B = 1,048,576 B
* 1 gibibyte (GiB) = 1,024 MiB = 1,024^3 B = 1,073,741,824 B
* 1 kibibit (Kib) = 1,024 b = 1,024^1 b = 1,024 b
* 1 mebibit (Mib) = 1,024 Kib = 1,024^2 b = 1,048,576 b
* 1 gibibit (Gib) = 1,024 Mib = 1,024^3 b = 1,073,741,824 b
* …and so on, using prefixes similar to the metric ones, but with those funny “bi” syllables (“-ebi”s and “-ibi”s)
* each successive prefix is the previous one multiplied by 1,024
Notice that the difference between the decimal and binary systems starts small (at the kilo level, they’re only 24 bytes, or 2.4%, apart), but grows with each level (at the giga level, they are more than 70MiB, or about 7.4%, apart).
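A short Python sketch (the names here are illustrative, not from any standard library) that tabulates both multipliers and shows the widening gap:

```python
# Compare decimal (1000^n) and binary (1024^n) multipliers for each prefix.
prefixes = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"]

for n, name in enumerate(prefixes, start=1):
    decimal = 1000 ** n                       # kB, MB, GB, TB, ...
    binary = 1024 ** n                        # KiB, MiB, GiB, TiB, ...
    gap = (binary - decimal) / decimal * 100  # binary excess, relative to decimal
    print(f"{name}: {decimal:,} vs {binary:,} ({gap:.1f}% apart)")

# kilo/kibi: 1,000 vs 1,024 (2.4% apart)
# mega/mebi: 1,000,000 vs 1,048,576 (4.9% apart)
# giga/gibi: 1,000,000,000 vs 1,073,741,824 (7.4% apart)
# tera/tebi: 1,000,000,000,000 vs 1,099,511,627,776 (10.0% apart)
```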
As a general rule of thumb, hardware devices use decimal units (whether bits or bytes) while software uses binary (usually bytes).
This is the reason that some manufacturers, particularly drive makers, like to use decimal units: it makes the drive size sound larger. Yet users get frustrated when they find the drive holds less than they expected, because Windows et al. report the size in binary. For example, 500GB ≈ 465.7GiB, so while the drive is made to contain 500GB and is labeled as such, My Computer displays the binary 465.7GiB (but as “465GB”), so users wonder where the other 34GB or so went. (Drive manufacturers often add a footnote to packages stating that the “formatted size is less”, which is misleading because the filesystem overhead is nothing compared to the difference between decimal and binary units.)
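Here’s that arithmetic as a quick Python sketch (variable names are just for illustration):

```python
# A drive labeled "500GB" uses decimal units; the OS reports binary units.
labeled = 500 * 1000**3       # 500,000,000,000 bytes, as sold
reported = labeled / 1024**3  # the same bytes, expressed in GiB
print(f"{reported:.2f} GiB")  # 465.66 GiB -> Windows shows "465 GB"
print(f"{500 - reported:.1f} GB 'missing'")  # ~34.3 GB apparent shortfall
```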
Networking devices often use bits instead of bytes for historical reasons, and ISPs often like to advertise using bits because it makes the speed of the connections they offer sound bigger: 12Mbps instead of just 1.5MBps. They often even mix and match bits and bytes, and decimal and binary. For example, you may subscribe to what the ISP calls a “12MBps” line, thinking that you are getting 12MiBps, but actually just receive 1.43MiBps (12,000,000 / 8 / 1024 / 1024).
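The same kind of sketch for the connection speed, assuming a 12-megabit (decimal) line:

```python
# An advertised 12 Mbps line, converted to what a download dialog shows.
advertised_bits = 12_000_000                   # 12 megabits per second (decimal)
decimal_MBps = advertised_bits / 8 / 1000**2   # 1.5 MB/s (decimal bytes)
binary_MiBps = advertised_bits / 8 / 1024**2   # ~1.43 MiB/s (binary bytes)
print(decimal_MBps, round(binary_MiBps, 2))    # 1.5 1.43
```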
It would have helped adoption if they hadn't chosen prefixes that flat out sound stupid; even an acronym requires someone to "mentally aspirate" the word. I simply will never use "kibibyte", etc. – tgm1024--Monica was mistreated – 2017-01-11
There will be confusion for years to come. In the early days of computing, people spotted that it was clearly much easier to work with factors of 1024 rather than 1000 for computers. Therefore, for decades, the standard SI prefix "kilo" was (and still very often is) used for the non-standard 1024, and it became a de-facto standard in computing. Except that some people still used the SI 1000 anyway. To sort out the mess, "kibi" is now officially defined as a 1024 factor - but it came far too late for an easy transition. "kilo" will be regularly used/abused for 1024 factors for a while yet. – Steve314 – 2011-11-15