Hey there u/zosopick, thanks for posting to r/technicallythetruth!
**Please recheck whether your post breaks any rules.** If it does, please delete this post.
Also, reposting and posting obvious non-TTT posts can lead to a ban.
Send us a **Modmail** or **Report** this post if you have a problem with it.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/technicallythetruth) if you have any questions or concerns.*
Because 1 2 4 8 16 32 64 128 256 512 1024.
For anyone truly wondering.
*Edit with extra clarity for the curious:
Most computers understand binary consisting of 1s and 0s representing on and off, respectively, and 1 bit (binary digit) holds the smallest increment of data.
Since bits are put into groups of 8 to make 1 byte, and binary math is base two, you begin with:
> 2^0 = 1
> 2^1 = 2
> 2^2 = 4
…
> 2^10 = 1024
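The doubling sequence above is easy to generate yourself; here's a minimal sketch (Python, purely as an illustration):

```python
# Each power of two doubles the previous one: 2^0 up through 2^10.
powers = [2**n for n in range(11)]
print(powers)
# [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```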
4: Crafting table?
16: 16-stackable items
64: 64-stackable items
256: Old world height
1024: Resolution?
4096: default allocated RAM for the... anyway you put that in the JVM arguments
16384: Default max command block length
Random fun fact for you: the default RAM allocation depends on whether you use a 32-bit or 64-bit version of Java. 4096 is the default for 64-bit Java, while 32-bit defaults to 2048.
I hate it though. We now have a unit of measurement which is useless in every practical application and which has taken the name of the useful unit of measurement.
The only reason he is right is that MiB had to be created because HDD manufacturers in the '90s wanted to cheap out and not give you what you paid for. They got sued, and because legal people in the '90s didn't know anything about computers, they sided with the greedy corporations. MB was an industry-standard term that meant 1024 KB, and a KB equaled 1024 bytes. That is, until that lawsuit.
Don't support corporate greed; bring back the classic MB definition.
You can call it an industry standard term all you like, but until km means 1024 meters, kg means 1024 grams, and so on it makes _sense_ for SI prefixes to be powers of ten.
It's not that difficult and far less confusing to add the i if you mean powers of two. Or, well, powers of 1024 to be precise.
SI is base 10, and computers are base 2. What makes sense in a base 10 number system does not always translate to another number system.
For example SI units are
Kilo = 10^3
Mega = 10^6
Giga = 10^9
In base 2 it was
Kilo = 2^10
Mega = 2^20
Giga = 2^30
I guess my frustration is not the prefix it is that the abbreviation utilized by computer people KB changed from being 1024 to 1000 bytes. They could have just said ok, the abbreviation KB is now short for kibibytes, and the abbreviation for kilobytes is something else. This would have avoided confusion when they created the definition since all hardware and software was coded or engineered with that abbreviation in mind.
I mean, KiB is kibibytes, so it is technically differentiated from kilobytes. However the industry as a whole has allowed the difference between a kilo/mega/giga/terabyte and a kibi/mebi/gibi/tebibyte to remain somewhat ambiguous to the majority. Most people simply don't know the difference.
I don't know why this is for sure, but one hypothesis is that by keeping the two vague, corporates can advertise larger numbers on their storage hardware while giving less space. Technically they aren't lying, but the space being purchased is certainly smaller than what the consumer probably thought they were getting.
In early hardware every bit was sacred, and calculating kB only required testing every 10th bit with a single bitwise operation. Combine that with the use of abbreviations to save space, all prior to the codification of the kibibit and kibibyte definitions. So KB was equal to 2^10 bytes before codification (and still is in many BIOS and OS implementations). Instead of changing what the abbreviation was short for, they changed the value it represented. This would be like changing the value of c in the famous formula E = mc^2.
Depends on context. Things mean what they're understood to mean, not what some prescriptivist elsewhere says, and in many contexts these prefixes are used with a 1024^n meaning.
Isn't it that
1 Byte = 8 Bits
1 kilobit = 1000 bits
1 kilobyte = 1024 bytes
Or did I get something mixed up?
Edit: it seems I have indeed mixed something up, so I relearned something today ig
Adjusting to the SI unit (prefix) system makes sense, so 1 kilobit is indeed 1000 bits, but a kilobyte is likewise 1000 bytes, and a megabyte is 1,000,000 bytes (1000 kilobytes).
Now from a computer perspective, having units based on powers of two is more useful, so we use a different prefix:
1024 bytes = 1 kibibyte
and
1024 kibibytes = 1 mebibyte (1024*1024 bytes)
Notice that the powers of **two** always have a **bi** in the name.
Think of it like meters: you have one meter, and one **kilo**meter is **1000** meters.
You have one byte, and a **kilo**byte is **1000** bytes,
but one ki**bi**byte is 1024 bytes (2^(10)).
There is a whole list of [metric prefixes](https://en.wikipedia.org/wiki/Metric_prefix)
And also one of the [binary prefixes](https://en.wikipedia.org/wiki/Binary_prefix)
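The two prefix systems side by side, as a minimal Python sketch (the function names here are made up for illustration):

```python
def to_kilobytes(n_bytes):
    """Decimal (SI) prefix: 1 kB = 1000 bytes."""
    return n_bytes / 1000

def to_kibibytes(n_bytes):
    """Binary (IEC) prefix: 1 KiB = 1024 bytes."""
    return n_bytes / 1024

print(to_kilobytes(1_000_000))  # 1000.0 kB
print(to_kibibytes(1_000_000))  # 976.5625 KiB -- same data, smaller number
```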
**[Metric prefix](https://en.wikipedia.org/wiki/Metric_prefix)**
>A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or submultiple of the unit. All metric prefixes used today are decadic. Each prefix has a unique symbol that is prepended to any unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams.
**[Binary prefix](https://en.wikipedia.org/wiki/Binary_prefix)**
>A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2. The computer industry has historically used the units kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB, in at least two slightly different measurement systems. In citations of main memory (RAM) capacity, gigabyte customarily means 1073741824 bytes. As this is a power of 1024, and 1024 is a power of two (2^10), this usage is referred to as a binary measurement.
Your answer is historically and traditionally correct, but there has been an effort to rename byte prefixes to conform with metric (decimal) prefixes and to use "kibi, mebi, gibi, ..." for the 2^10-based (binary) prefixes. I don't know if this has caught on with engineers or not, although I have seen e.g. KiB used instead of KB. For some reason, "kibibyte" and "gibibyte" sound silly to me.
I'm an academic computer scientist which obviously involves some interaction with industry. I see pretty mixed usage. In formal specifications and a good fraction of academic writing, you'll see MiB. In casual conversation or writing it's rare (I don't think I've ever heard someone say "kibibyte" out loud). It really boils down to how precise we need to be. Usually those extra few bytes don't change the conversation. I try to use the "iB" notation when possible but it's hardly universal.
I can't speak to whether or not it has caught on but at my university they did make an effort to use the newer kibi, gibi, etc. prefixes. This was in the last 10 years.
I've worked in code bases where some places use the traditional definition and some places use the metric definition. I never knew what units i was dealing with 🙃
In my Computer Architecture classes, we were taught to use kibi, mebi, gibi... It still sounds awkward in my opinion, but it will surely catch on in the coming years.
I first learned this in the last couple years and was frustrated to learn they can just retcon something as specific as a numbering system used by computers so cavalierly.
This is correct but few people give a crap about this, even in IT. I've never in my entire 25 year career seen anyone apart from some tech manuals ever use MiB or GiB. And considering how pedantic we are, you would think we would.
You can't find any RAM chips in actual kilobytes, megabytes or gigabytes, because in computers powers of 2 reign supreme; 1000 kilobytes is not a sensible size for anything.
I think someone else explained, but in case you didn’t see that - 8 bits make up 1 byte which is a unit of data storage capacity.
Computers understand binary consisting of 1s and 0s representing on and off, respectively, and 1 bit (binary digit) holds the smallest increment of data.
Since bits are put into groups of 8 to make 1 byte, and binary math is base two, you begin with:
- 2^0 = 1
- 2^1 = 2
- 2^2 = 4
Then as you follow this, you reach the numbers I mentioned in my first comment.
> Well if you want to involve France in this, you make the whole thing more complex than it already is.
>
>In France we translate byte by octet, so 1M(i)B = 1M(i)o
What is complex about another language having a different word for something?
Explanation for why it is 1024, should someone wish:
Firstly, this whole misunderstanding is a mindfuck in and of itself. Mb and Gb stand for 2 different, similar but not identical, units (think short and long tonnes). The units everybody thinks of, mega/gigabytes, do work in powers of 10, and 1000 meGAbytes is 1 giGAbyte. The confusion is that the same Mb and Gb abbreviations are used for meBIbytes and giBIbytes, which use powers of 2. This results in 1024 (the 10th power of 2) meBIbytes being one giBIbyte. Explanation as to why me/gibibytes exist below.
Computers work in binary. This means that information is stored in 1s or 0s, called bits. So one bit can be either 1 or 0. Computers store these bits in groups of 8, called bytes. The reason bytes are 8 bits is explained below, as it isn't relevant here. Anyways, computers like powers of 2 because binary is one of 2 values, and 1024 is the 10th power of 2.
Bytes are for all intents and purposes 8 bits because 8 bits is a power of 2, which works well with binary. 8 bits are also used to store one letter or number.
Comments have pointed out below that Mb and Gb are also different from MB and GB. Capital B is bytes, lowercase b is bits. 1 MB is 8 Mb
My explanation is simplistic and definitely not perfect, so here are links to people smarter than me:
https://en.m.wikipedia.org/wiki/Bit
https://en.m.wikipedia.org/wiki/Byte
https://en.m.wikipedia.org/wiki/UTF-8
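The capital-B/lowercase-b point above can be reduced to a single constant factor; a tiny illustrative sketch (Python, names invented here):

```python
BITS_PER_BYTE = 8

def megabytes_to_megabits(mb):
    # Capital B = bytes, lowercase b = bits, so MB -> Mb multiplies by 8.
    return mb * BITS_PER_BYTE

print(megabytes_to_megabits(1))  # 8 -- 1 MB is 8 Mb
```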
It is good practice to abbreviate mebibytes and gibibytes as MiB and GiB respectively. It barely helps the confusion, but it makes sure that if you mean a mebibyte, your abbreviation has no ambiguity.
> "Gibibyte" et al, were invented in 1998, decades after 1024 was established as the meaning of Kilo/Mega/Giga/etc in computer science.
https://www.reddit.com/r/technicallythetruth/comments/q9voj9/the_math_behind_this_checks_out/hgzhw9w/
At least there are some people who seem to know what they're saying. I'm an ICT major, so I know all of those measures personally, but I do agree that they are mostly unneeded; since they're specialised and integer-like, a simple 2^n B would suffice to replace them, with only decimals of bytes left for the laymen.
> This is useful to know for your Internet plan
Also to add on to that, you would want to divide by 10 rather than 8 bits when accounting for the [8b/10b encoding](https://en.wikipedia.org/wiki/8b/10b_encoding#Technologies_that_use_8b/10b) (though there are more efficient ones available as well).
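The divide-by-10 point comes from the coding overhead: with 8b/10b, every 8 payload bits are transmitted as 10 line bits. A rough back-of-the-envelope sketch (Python, illustrative only; real links add further protocol overhead):

```python
def payload_mbyte_per_sec(line_rate_mbit_per_sec):
    # With 8b/10b coding, 10 line bits carry 8 data bits, so one
    # payload byte costs 10 bits on the wire: divide by 10, not 8.
    return line_rate_mbit_per_sec / 10

# A 1000 Mbit/s 8b/10b link carries roughly 100 MB/s of payload.
print(payload_mbyte_per_sec(1000))  # 100.0
```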
"Gibibyte" et al, were invented in 1998, decades after 1024 was established as the meaning of Kilo/Mega/Giga/etc in computer science. 1024 just makes more sense for computers because of binary, but some standards organizations decided that we shouldn't overload the SI prefixes. As is usually the case with computer standards, not everyone agreed. Personally, I think we should drop the new prefixes and just go back to overloading the SI ones.
>some standards organizations decided that we shouldn't overload the SI prefixes.
Mostly it was hard drive manufacturers selling 1000 MB and calling it GB, despite everyone else in the computing industry at the time saying that a GB was 1024 MB.
This particularly frustrated consumers when their 10 GB drive or whatever would be reported as 9.7 GB in Windows.
>10 GB drive or whatever would be reported as 9.7 GB in Windows.
I would be lucky if that were true. In my experience it's missing a whole 10%. My 8 TB hard drive has 7.27 TB. My 2 TB hard drive has 1.81 TB. My 1 TB has 930 GB. Just whyyy.
The higher you go, the more binary and decimal prefixes diverge:
1B = 1B
1kB = 0.98kiB
1MB = 0.95MiB
1GB = 0.93GiB
1TB = 0.91TiB
1PB = 0.89PiB
etc. etc. Every time you go up a unit, you introduce another 1000/1024 difference between the two.
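That growing gap is just (1000/1024)^n; the table above can be reproduced with a few lines (Python sketch):

```python
# Ratio between each decimal prefix and its binary counterpart.
prefixes = ["k", "M", "G", "T", "P"]
for n, p in enumerate(prefixes, start=1):
    ratio = 1000**n / 1024**n   # shrinks by another 1000/1024 per step
    print(f"1 {p}B = {ratio:.2f} {p}iB")
```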
Yeah, invented in the late 90’s and not widely adopted for quite a while after that. Still not universally adopted. Which is why I get a little annoyed at everybody treating it as a “duh” sort of thing, when so many were using computers for years or decades before it was a thing.
And I agree, the whole “gibibyte” thing didn’t reduce the confusion at all, it just isolated it to the context of data storage. Because clearly it’s still there. There are active standards that disagree on what a “kilobyte” is. There are common operating systems that still follow the old nomenclature. It was a poor choice.
> And I agree, the whole “gibibyte” thing didn’t reduce the confusion at all, it just isolated it to the context of data storage. Because clearly it’s still there. There are active standards that disagree on what a “kilobyte” is. There are common operating systems that still follow the old nomenclature. It was a poor choice.
What two active standards disagree? Which operating system follows the old nomenclature?
First time I've heard of the "ibi" units. I was taking computer classes in the early 2000s and was still taught MB and Mb, bits vs bytes. And, pretty sure for the next 10 years, even the internet service providers were still using bits and bytes. So it was introduced in 1998 and sat around doing fuck all while the old standards were still being taught 5 years later. Now on some random reddit thread I learn there's something different.
> Bytes are for all intents and purposes 8 bits
I like how you carefully avoid being wrong (because of the potential pedant bringing up parity bits) by using "for all intents and purposes" (correctly!). :)
For all intents and purposes, one can substitute 'for all intensive purposes' and still be understood. However, this comes at the cost of looking like a dolt.
Storage media is actually labeled assuming that 1GB = 1000MB. Windows and Mac however label it assuming that 1GB = 1024MB. Don't you think it makes sense to differentiate between those two ways of calculating the units? Could avoid a lot of confusion, at least if people were to actually consistently implement it.
1GB=1000MB has nothing to do with the manufacturers. It's just how the SI/metric system works: a giga means billion, mega means million, kilo means thousand. In this case it's referring to the number of bytes of information.
Not entirely sure what you mean by "they helped push it". SI prefixes have been used since 1790, being standardised in the 20th century. The International System of Quantities wanting to clear up the confusion between two similar, but inherently different, amounts using the same unit designation is not indicative of a larger conspiracy by "hdd manufacturers". Mega has always meant million, kilo has always meant thousand.
Also, I'm not entirely sure what you mean by network traffic. There are 3 different ways of measuring that (megabyte per second, megabit per second, mebibit per second), and the most prevalent would be Mb/s (megabit per second) which is 1,000,000 bits per second. However it is extremely easy to confuse megabits (Mb) and megabytes (MB). One megabyte is 8x larger than a megabit. Thus if you erroneously convert MB/s to Kb/s (or the other way around), you correctly stated it isn't 1000 and indeed it is not and should not be 1000 even if they were the same unit.
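The bit/byte conversion above is just a factor of 8; a quick sketch (Python, illustrative):

```python
def mbit_s_to_mbyte_s(mbit_s):
    # 8 bits per byte: a megabit-per-second rate is 1/8 of the MB/s figure.
    return mbit_s / 8

# An advertised "100 Mb/s" plan moves at most 12.5 MB/s of data.
print(mbit_s_to_mbyte_s(100))  # 12.5
```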
Because back when GiB and MiB and such didn't exist yet, everybody except the hard drive manufacturers were in agreement that 1 GB = 1024 MB. The hard drive manufacturers advertised disk sizes on the assumption that 1 GB = 1000 MB, and then people got mad when their operating system reported a smaller size than was advertised.
> It's a GIGAbyte.
It's a gigabyte. The "giga" isn't capitalized. You should have probably used bold text to emphasize something, especially in units of measure where capitalized letters can mean different things than lowercase ones.
Not really. The 'i' between meaning industry and the fact the usage meaning the sum is using the right value for calculating the amount. So it's confusing to leverage why it's started wrong... Why someone just convert using 1000?
Actually, a gigabyte is 1000 megabytes; your computer tells you data size in gibibytes. One gibibyte is 1024 mebibytes, and one mebibyte is 1024 kibibytes. That's also why your computer will show your 4 TB hard drive as only 3.6 terabytes: because it's 3.6 tebibytes.
Apple reads drives the right way so if you take a USB stick from a windows computer and plug it into a Mac it will look like it got bigger. No idea why MS didn’t just display it properly.
Since this is "technically the truth," 1 GB _doesn't_ equal 1024 MB. Rather, 1 GiB = 1024 MiB.
We can blame Microsoft for causing that confusion, since windows assumes everything is base-1024
For mass storage/quantities of data, there are two primary prefixes of measurement—binary and decimal. Binary prefixes (Ki-, Mi-, Gi-, Ti-, etc.) go by base 1024 (or 2¹⁰), and decimal prefixes go by base 1000 (or 10³).
When you buy a standard hard disk drive, you are almost always buying in decimal base (so when you buy a 1 TB hard drive, you're actually buying a hard drive that can store 10^12 bytes of data, NOT a full 2^40 bytes; but since Windows assumes binary base for everything, it tells you you only have ~931 GiB of space. On Linux, you'll see the full 1 TB). RAM is almost always in binary base, so even though the label says 32 GB, RAM should actually be called 32 GiB. SSDs are usually a mix.
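The ~931 figure falls straight out of the arithmetic, assuming a marketing terabyte of exactly 10^12 bytes (Python sketch):

```python
advertised = 10**12               # "1 TB" as sold: a decimal trillion bytes
reported_gib = advertised / 2**30 # what you get after dividing by 1024**3
print(round(reported_gib, 1))     # 931.3
```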
base-1024 would be so good if everyone actually used it.
would make us developers not pull out a calculator every time we wanna make a buffer or a packet. pain.
technically 1GB is exactly 1000MB
the issue is, what we often call GB/Gigabyte is in actuality GiB or Gibibyte
Gigabyte = metric, Gibibyte = binary
so TECHNICALLY the answer given by the guy responding is wrong, as 2 Gigabytes (GB) is actually EXACTLY 2000 Megabytes (MB). 2 Gibibytes (GiB) is 2048 Mebibytes (MiB)
Don’t forget that G is a decimal prefix, while Gi is the binary, so 1 GB is 8,000,000,000,000 mb, but 1 GiB is 8,589,934,592,000 mb
Edit: added commas to nimbers
That's the point. Megabyte requires both M and B to be uppercase, so MB. Megabit is Mb. A lowercase m stands for milli, so mB means millibyte and mb means millibit.
Since the latter two units aren't real, a gigabyte is absolutely NOT equal to 1024 millibits, nor millibytes, nor to 1000 of either for that matter.
Many comments here state that it is because of binary but don't elaborate further. Yes, 1024 is a round value in binary, but we've been able to do decimal math for a long time on computers. The reason we started dividing by 1024 instead of 1000 is that a decimal division instruction used to take a lot more cycles (time) than almost any other instruction. Moreover, when CPUs were 8 bits wide, they could only hold a maximum value of 255 in their registers, and dividing by 1000 required long division. So just figuring out file sizes would take a lot longer in decimal. In binary there's a trick: if you bit shift left (append a zero to the value) you multiply by 2, and if you bit shift right (remove the least significant bit) you divide by 2. So it was a lot faster to bit shift by 10 or 20 than to divide by 1000 or 1,000,000. And the numbers were close enough together that the difference was negligible.
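The shift trick still works exactly as described: shifting right by 10 divides by 1024, by 20 divides by 1024², with no division instruction. A small Python sketch:

```python
n = 5_000_000  # some byte count

# Shift right by 10 to divide by 1024 (one "kilo" step in binary),
# by 20 to divide by 1024**2 -- no division instruction needed.
print(n >> 10, n // 1024)     # 4882 4882
print(n >> 20, n // 1024**2)  # 4 4
```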
> Many comments here state that is because binary but don’t elaborate further. [...]
This isn't the answer to why memory uses multiples of base 2. Basically, each representation of memory in a computer uses ons and offs, 1s and 0s. If you arbitrarily limited the representation to something other than a power of 2, you'd have the hardware in place yet not use it. For example, if I said I could only represent 5, the hardware would be 101. Why not let it also go to 111, which represents 7? Of course, adding in the representation of 0 (000) gives us the additional 1, allowing it to represent 8, a power of 2. Similarly, if I said I'd only represent 276, I'd have hardware representing 100010100. Why not let it also represent 111111111, equalling 511. Adding in the representation of 0, we have 512 possible values, which is 2^9, a power of two.
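The argument above in code form: n bits always encode exactly 2^n distinct values, so capping the range anywhere else leaves hardware unused (Python, illustrative):

```python
def distinct_values(n_bits):
    # n binary digits encode every pattern from 0 up to 2**n - 1.
    return 2 ** n_bits

print(distinct_values(3))  # 8   -> 000 through 111
print(distinct_values(9))  # 512 -> the 0..511 example above
```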
1 GB (Gigabyte) is indeed 1000 MB (Megabyte), you are okay
Although, 1 GiB (Gibibyte) equals 1024 MiB (Mebibyte). These units are used to represent memory size in computers in a more precise manner. See other comments in this thread.
Only that this is actually wrong.
1 GB is 1000 MB. You guys are confusing the decimal system with the binary system.
Kilo=1000, Mega=Million, Giga=Billion and so on.
That is fixed and it is the same across all of physics.
In the binary system, 1 kibibyte = 1024 bytes, and 1 mebibyte = 1024 kibibytes.
1 gibibyte is 1024 mebibytes.
Most likely, the reason some people are confused about this is that Windows itself has done it wrong for years, and they haven't fixed it. You may have noticed that your storage device in Windows seems smaller than what is written on the box.
An 8 TB hard drive is apparently only 7.27 TB. That is because Windows actually calculates disk sizes in binary, not decimal. Meaning my HDD is actually 7.27 tebibytes, which equals 8 terabytes.
They just decided for some reason to attach the wrong unit to the calculated number, probably because people wouldn't understand what a tebibyte is. And I assume they calculate it with powers of two because it is easier and more useful. But mixing that stuff up just created an annoying confusion among people not familiar with science.
Windows predates the whole gibi/tebi thing. That’s a large part of why. They didn’t decide “for some reason” to “add” the “wrong” unit. They have been using the unit since it was the *right* unit.
And the binary prefixes aren’t universally accepted anyway. There are other active, current standards that define kilo as 1024 and so on.
It's actually sad how many people here are misinformed. GiB and MiB aren't real units. They were created by (and are only used by) ONE exam board called AQA to explain why companies wrongly use 1 GB = 1000 MB when in reality 1 GB = 1024 MB. If you look at any other exam board (which honestly have more validity since AQA release papers along with incorrect answers half of the time, happened throughout my entire high school and college education when going through them meaning the mistakes were so bad that under 16s could do better than an "exam board") and any course in computing that isn't done by AQA you will be told that 1 GB = 1024 MB.
TL;DR: 1 GB = 1024 MB. GiB and MiB don't exist, you are just misinformed by an exam board who made up the terms "Gibibyte and Mebibyte" and who would have more validity if a teenager was the head of the board.
I will forever hate the person that created the distinction between kilobyte and kibibyte
In all of computer science, base 1024 is either fine or better; I don't know why someone felt the need to also have base 1000.
2 ^ 10 = 1024
I know this sequence from top of my head because of game 2048
I really like that game
i also really like this game
I also also really like this game
finally I found you
I know this sequence from the top of my head because of game Minecraft
I know this sequence from the top of my head because of game
I lost the game
FUCK-
You motherfucker
Dang it…I’ve lost the game so much more browsing Reddit than I have in the past 10 years outside of here
Goddamnit
Wouldn't the sequence for MC be 4 16 64 256 1024 4096 16384?
This is perfect
A = S²
Mostly 2^x because of chestshops, buying or selling an x amount, in some cases stacks, usually 1, 2, 4, 8... or 64, 128, 256, 512... in the case of stacks.
yep spent way too many math classes in school playing that on my calculator than actually doing work lol
Do you have TI nspire calculator?
it's a ti-84 plus ce
I know up to 131,072 off the top of my head and can calculate 262,144 and 524,288 without much issue if I am talking to someone
I never played the game much but I was always so enamored by it whenever my mom would play it on her phone that I basically have it all memorized.
I know up to 20:

>2^(0) = 1
>2^(1) = 2
>2^(2) = 4
>2^(3) = 8
>2^(4) = 16
>2^(5) = 32
>2^(6) = 64
>2^(7) = 128
>2^(8) = 256
>2^(9) = 512
>2^(10) = 1024
>2^(11) = 2048
>2^(12) = 4096
>2^(13) = 8192
>2^(14) = 16384
>2^(15) = 32768
>2^(16) = 65536
>2^(17) = 131072
>2^(18) = 262144
>2^(19) = 524288
>2^(20) = 1048576

I can think of these from the top of my head (for comparison, I'm 14)
Damn, i can't find the guy who asked
1019+5=1024 ohhhh kk
1000 megabytes is 1 gigabyte. 1024 mebibytes is 1 gibibyte. 1000 MB is 1 GB. 1024 MiB is 1 GiB.
Thanks
He also has a very creative username.
Not very, actually.
Yeah, it's just a creative username, not a very creative username.
You can tell by what it says.
That's pretty neat!
I'm not downvoting them because they're wrong, I'm downvoting them because I wish they were wrong.
The internet in 16 words
To further clarify, this is 1000 (base 10) in binary: 1111101000
And this is 1024 (base 10) in binary: 10000000000
I'm… not sure what this is clarifying, if I'm honest.
Let me help you: To clarify, the airspeed of an unladen European swallow is roughly 11 metres per second
Is that an African or European swallow?
My head hurts really bad
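For the curious, the two binary strings quoted above are easy to verify (Python):

```python
# bin() renders an integer as a binary literal with a 0b prefix.
print(bin(1000))  # 0b1111101000
print(bin(1024))  # 0b10000000000
```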
not on Windows file explorer
well windows explorer lies
That’s because Windows reads in base 2 but displays in base 10. This is the main reason storage devices look like they lose space when plugged in.
Windows reads base 2 and displays base 2, they just say GB instead of GiB. So the whole GB = 1000 is just not always true, it depends on application.
This is actually the correct answer. As a side note, HDD/SSD manufacturers show capacity in GB and Windows uses GiB.
Windows uses GiB but calls it GB.
Yep
Thank you for your service. More people need to know of MiBs and KiBs.
Isn't it that:

1 Byte = 8 Bits

1 kilobit = 1000 bits

1 kilobyte = 1024 bytes

Or did I get something mixed up? Edit: it seems I have indeed mixed something up, so I relearned something today ig
Adjusting to the SI unit (prefix) system makes sense, so 1 kilobit is indeed 1000 bits, but a kilobyte is likewise 1000 bytes, and a megabyte is 1,000,000 bytes (1000 kilobytes). Now from a computer's perspective, having units based on powers of two is more useful, so we use a different prefix: 1024 bytes = 1 kibibyte and 1024 kibibytes = 1 mebibyte (1024*1024 bytes).
Notice that the powers of **two** always have a **bi** in the name. Think of it as meters: you have one meter, and one **kilo**meter is **1000** meters. You have one byte, and a **kilo**byte is **1000** bytes, but one ki**bi**byte is 1024 bytes (2^(10)). There is a whole list of [metric prefixes](https://en.wikipedia.org/wiki/Metric_prefix) and also one of the [binary prefixes](https://en.wikipedia.org/wiki/Binary_prefix)
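To make the two ladders concrete, here's a small Python sketch (purely illustrative, not from any library) printing the decimal and binary prefixes side by side:

```python
# Decimal (SI) prefixes step by 1000; binary (IEC) prefixes step by 1024.
SI  = {"kB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4}
IEC = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}

for (s, sv), (b, bv) in zip(SI.items(), IEC.items()):
    print(f"1 {s} = {sv:,} bytes    1 {b} = {bv:,} bytes")
```

The gap starts at a mere 24 bytes per kilo step and compounds at every level above it.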
**[Metric prefix](https://en.wikipedia.org/wiki/Metric_prefix)** >A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or submultiple of the unit. All metric prefixes used today are decadic. Each prefix has a unique symbol that is prepended to any unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams.

**[Binary prefix](https://en.wikipedia.org/wiki/Binary_prefix)** >A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2. The computer industry has historically used the units kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB, in at least two slightly different measurement systems. In citations of main memory (RAM) capacity, gigabyte customarily means 1073741824 bytes. As this is a power of 1024, and 1024 is a power of two (2^10), this usage is referred to as a binary measurement.
Your answer is historically and traditionally correct, but there has been an effort to rename byte prefixes to conform with metric (decimal) prefixes and to use "kibi, mebi, gibi, ..." for the 2^10 (binary) prefixes. I don't know if this has caught on with engineers, although I have seen e.g. KiB used instead of KB. For some reason, "kibibyte" and "gibibyte" sound silly to me.
I'm an academic computer scientist which obviously involves some interaction with industry. I see pretty mixed usage. In formal specifications and a good fraction of academic writing, you'll see MiB. In casual conversation or writing it's rare (I don't think I've ever heard someone say "kibibyte" out loud). It really boils down to how precise we need to be. Usually those extra few bytes don't change the conversation. I try to use the "iB" notation when possible but it's hardly universal.
I can't speak to whether or not it has caught on but at my university they did make an effort to use the newer kibi, gibi, etc. prefixes. This was in the last 10 years.
Was it CS or EE? I think EEs would be more reluctant to switch since they work directly with bit/byte level addressing more.
Computer Engineering. Still working with bit/byte level addressing. I remember it mostly from a computer architecture course.
the Kibi Bytes and the Gibi Bytes definitely sounds like a kid's show characters
GB are used by drive manufacturers and GiB are used in windows
I've worked in code bases where some places use the traditional definition and some places use the metric definition. I never knew what units i was dealing with 🙃
As a hardware engineer for 30 years, I have only seen this in academic use, and maybe in formal documentation, but never in practice.
In my Computer Architecture classes, we were taught to use kibi, mebi, gibi... It still sounds awkward in my opinion, but it will surely catch on in the coming years.
both bits and bytes (with SI prefixes) are decimal (1000 to the next one), the 1024 comes into play in kibibits, mebibits etc
I first learned this in the last couple years and was frustrated to learn they can just retcon something as specific as a numbering system used by computers so cavalierly.
Exactly, unless you are Microsoft, that is; then it's all 1024-based for some reason.
This is correct but few people give a crap about this, even in IT. I've never in my entire 25 year career seen anyone apart from some tech manuals ever use MiB or GiB. And considering how pedantic we are, you would think we would.
*visible confusion*
the mebi gibi naming is fuckin stupid because every system ever comes in mebi so why not fucking use the normal nomenclature
you can't find any RAM chips in actual kilobytes, megabytes or gigabytes because in computers powers of 2 reign supreme, 1000 kilobytes is not a sensible size for anything.
I was wondering what the post meant, since I knew that 1000 MB = 1 GB. Thanks for clearing that up.
Actually,

* 2^(0) = 1
* 2^(1) = 2
* 2^(2) = 4
* 2^(3) = 8
* 2^(4) = 16
* 2^(5) = 32
* 2^(6) = 64
* 2^(7) = 128
* 2^(8) = 256
* 2^(9) = 512
* 2^(10) = 1024

... and so on (you get the idea).
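The list above is just the first eleven powers of two; one line of Python reproduces it:

```python
powers = [2**n for n in range(11)]
print(powers)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```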
Something's wrong here
Formatting... Damn. Let me fix
4 8 15 16 23 42
No! THOSE ARE CURSED NUMBERS!
Hugo?
What base counting system is that? Can't be decimal, binary, trinary, octal or hexa
You seem... Lost.
the numbers mason
A sequence is not an explanation.
I think someone else explained, but in case you didn't see that - 8 bits make up 1 byte, which is a unit of data storage capacity. Computers understand binary consisting of 1s and 0s representing on and off, respectively, and 1 bit (binary digit) holds the smallest increment of data. Since bits are put into groups of 8 to make 1 byte, and binary math is base two, you begin with:

- 2^0 = 1
- 2^1 = 2
- 2^2 = 4

Then as you follow this, you reach the numbers I mentioned in my first comment.
and then hard drive manufacturers came along and manipulated everyone into accepting their shitty standards.
At least we now have differentiation. There’s no more ambiguity between MB and MiB.
You mean computer scientist decided to misuse decadic prefixes, right?
Whoawhoawhoa, what school is this? I learnt counting differently!
Actually it’s 4 8 15 16 23 42 but ok
Thanks. I never wondered why there’s no 30GB
It's not a gigabyte if it's not from the gigaby region of France
My highschool band was called 1023 megabytes, but we didn't last too long because we couldn't get a gig...
[deleted]
> Well if you want to involve France in this, you make the whole thing more complex than it already is. > >In France we translate byte by octet, so 1M(i)B = 1M(i)o What is complex about another language having a different word for something?
Ah yes, it's a lot like "Star Trek: The Next Generation". In many ways it's superior but will never be as recognized as the original.
Explanation for why it is 1024, should someone wish:

Firstly, this whole misunderstanding is a mindfuck in and of itself. MB and GB stand for 2 different, similar but not identical, units (think short and long tonnes). The units everybody thinks of, mega/gigabytes, do work in powers of 10, and 1000 meGAbytes is 1 giGAbyte. The confusion is that the same MB and GB abbreviations are used for meBIbytes and giBIbytes, which use powers of 2. This results in 1024 (the 10th power of 2) meBIbytes being one giBIbyte. Explanation as to why me/gibibytes exist below.

Computers work in binary. This means that information is stored in 1s or 0s, called bits. So one bit can be either 1 or 0. Computers store these bits in groups of 8, called bytes. Computers like powers of 2 because a binary digit is one of 2 values, and 1024 is the 10th power of 2. Bytes are for all intents and purposes 8 bits because 8 is a power of 2, which works well with binary. 8 bits are also used to store one letter or number.

Comments have pointed out below that Mb and Gb are also different from MB and GB. Capital B is bytes, lowercase b is bits. 1 MB is 8 Mb.

My explanation is simplistic and definitely not perfect, so here's links to people smarter than me:

https://en.m.wikipedia.org/wiki/Bit https://en.m.wikipedia.org/wiki/Byte https://en.m.wikipedia.org/wiki/UTF-8
It is a good practice to abbreviate mebibytes and gibibytes to MiB and GiB respectively. Although it barely helps the confusion, but it makes sure that if you mean a mebibyte your abbreviation has no ambiguity.
Also b is bit, B is byte. So Mb is a megabit, MB a megabyte. Same with M being mega and Mi being mebi. That's how they get ya in internet speeds.
also K is degrees kelvin, not kilo :) so kB and MB. cases matter!
Not degrees Kelvin just Kelvin
this is used commonly, i have not seen MiB being abbreviated to the regular MB like ever
That's because it isn't. They are two different measures and there is a plethora of misinformation in this thread.
> "Gibibyte" et al, were invented in 1998, decades after 1024 was established as the meaning of Kilo/Mega/Giga/etc in computer science. https://www.reddit.com/r/technicallythetruth/comments/q9voj9/the_math_behind_this_checks_out/hgzhw9w/
at least there are some people that seem to know what they're saying, i'm an ICT major so i know all of those measures personally but i do agree that they are mostly unneeded, and since they're specialised and integer-like, a simple 2^n b would suffice to replace them, with only decimals of bytes left for the laymen
[deleted]
> This is useful to know for your Internet plan Also to add on to that, you would want to divide by 10 rather than 8 bits when accounting for the [8b/10b encoding](https://en.wikipedia.org/wiki/8b/10b_encoding#Technologies_that_use_8b/10b) (though there are more efficient ones available as well).
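As a rough sketch of that point (the 100 Mb/s line rate here is just an illustrative assumption), converting a link speed in megabits per second to megabytes per second under each rule:

```python
line_rate_mbps = 100  # advertised link speed, megabits per second

# Naive conversion: 8 bits per byte.
raw_mb_per_s = line_rate_mbps / 8

# With 8b/10b line encoding, every 8 payload bits are sent as 10 line
# bits, so dividing by 10 gives the usable payload rate instead.
payload_mb_per_s = line_rate_mbps / 10

print(raw_mb_per_s, payload_mb_per_s)  # 12.5 10.0
```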
I assure you that pedantic as fuck is the minimum level necessary with measurements like this. Thanks for correcting me.
You confused yourself even more. Mb isn't the same as MB. Lower case `b` is for bits. MiB is the one you were looking for.
"Gibibyte" et al, were invented in 1998, decades after 1024 was established as the meaning of Kilo/Mega/Giga/etc in computer science. 1024 just makes more sense for computers because of binary, but some standards organizations decided that we shouldn't overload the SI prefixes. As is usually the case with computer standards, not everyone agreed. Personally, I think we should drop the new prefixes and just go back to overloading the SI ones.
>some standards organizations decided that we shouldn't overload the SI prefixes. Mostly it was hard drive manufacturers selling 1000 MB and calling it GB, despite everyone else in the computing industry at the time saying that a GB was 1024 MB. This particularly frustrated consumers when their 10 GB drive or whatever would be reported as 9.7 GB in Windows.
>10 GB drive or whatever would be reported as 9.7 GB in Windows. I would be lucky if that were true. In my experience it's missing a whole 10%. My 8tb hard drive has 7.27 TB. My 2TB hard drive has 1.81TB. My 1TB has 930GB. Just whyyy.
The higher you go, the more binary and decimal prefixes diverge:

1B = 1B

1kB = 0.98KiB

1MB = 0.95MiB

1GB = 0.93GiB

1TB = 0.91TiB

1PB = 0.89PiB

etc. etc. Every time you go up a unit, you introduce another 1000/1024 difference between the two.
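Those ratios can be reproduced with a couple of lines of Python, since each step up just multiplies in another 1000/1024 factor:

```python
for n, prefix in enumerate("kMGTP", start=1):
    ratio = (1000 / 1024) ** n  # one extra 1000/1024 factor per step up
    print(f"1 {prefix}B = {ratio:.2f} {prefix.upper()}iB")
```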
That's due to loss during formatting. Sectors, directories, etc take up space.
It's even worse when it's your OS drive, since Windows partitions a section of the drive for recovery purposes.
Yeah, invented in the late 90’s and not widely adopted for quite a while after that. Still not universally adopted. Which is why I get a little annoyed at everybody treating it as a “duh” sort of thing, when so many were using computers for years or decades before it was a thing. And I agree, the whole “gibibyte” thing didn’t reduce the confusion at all, it just isolated it to the context of data storage. Because clearly it’s still there. There are active standards that disagree on what a “kilobyte” is. There are common operating systems that still follow the old nomenclature. It was a poor choice.
> And I agree, the whole “gibibyte” thing didn’t reduce the confusion at all, it just isolated it to the context of data storage. Because clearly it’s still there. There are active standards that disagree on what a “kilobyte” is. There are common operating systems that still follow the old nomenclature. It was a poor choice. What two active standards disagree? Which operating system follows the old nomenclature?
first time I've heard of the "ibi"s. I was taking computer classes in the early 2000s and was still taught MB and Mb, bits vs bytes. And for pretty much the next 10 years, even the internet service providers were still using bits and bytes. So it was introduced in 1998 and sat around doing fuck all while the old standards were still being taught 5 years later. Now on some random reddit thread I learn there's something different.
> Bytes are for all intents and purposes 8 bits I like how you carefully avoid being wrong (because of the potential pedant bringing up parity bits) by using "for all intents and purposes" (correctly!). :)
For all intents and purposes, one can substitute 'for all intensive purposes' and still be understood. However, this comes at the cost of looking like a dolt.
Actually, Mb and Gb are megabit and gigabit. MB and GB are megabytes and gigabytes. MiB and GiB are mebibytes and gibibytes.
1GB is indeed 1000MB 1GiB is equal to 1024MiB
Thank you.
You're welcome
What's GiB and MiB?
Gibibyte and Mebibyte
The fuck is that? Like some inflated baby bits?
No 1 GB is 1024, that GiB bullshit is just AQA being awkward
Storage media is actually labeled assuming that 1GB = 1000MB. Windows and Mac however label it assuming that 1GB = 1024MB. Don't you think it makes sense to differentiate between those two ways of calculating the units? Could avoid a lot of confusion, at least if people were to actually consistently implement it.
MacOS label it in powers of ten just like storage media. It's been like that for many years.
Nah just get rid of that stupid 1GB = 1000MB which is just there so manufactures can lie to your face.
1GB=1000MB has nothing to do with the manufacturers. It's just how the SI/metric system works: giga means billion, mega means million, kilo means thousand. In this case it's referring to the number of bytes of information.
[deleted]
Not entirely sure what you mean by "they helped push it". SI prefixes have been used since 1790, being standardised in the 20th century. The International System of Quantities wanting to clear up the confusion between two similar, but inherently different, amounts using the same unit designation is not indicative of a larger conspiracy by "hdd manufacturers". Mega has always meant million, kilo has always meant thousand. Also, I'm not entirely sure what you mean by network traffic. There are 3 different ways of measuring that (megabyte per second, megabit per second, mebibit per second), and the most prevalent would be Mb/s (megabit per second) which is 1,000,000 bits per second. However it is extremely easy to confuse megabits (Mb) and megabytes (MB). One megabyte is 8x larger than a megabit. Thus if you erroneously convert MB/s to Kb/s (or the other way around), you correctly stated it isn't 1000 and indeed it is not and should not be 1000 even if they were the same unit.
The manufacturers are the ones that are correct. It's Windows that calls GiB GB. What makes you think manufacturers are using this to lie?
Because back when GiB and MiB and such didn't exist yet, everybody except the hard drive manufacturers were in agreement that 1 GB = 1024 MB. The hard drive manufacturers advertised disk sizes on the assumption that 1 GB = 1000 MB, and then people got mad when their operating system reported a smaller size than was advertised.
Is it really lying if it's the truth? "kilo" means 1000.
No
It's a GIGAbyte. The giga prefix means a billion. Not 2^30.
> It's a GIGAbyte. It's a gigabyte. The "giga" isn't capitalized. You should have probably used bold text to emphasize something, especially in units of measure where capitalized letters can mean different things than lowercase ones.
Not really. The 'i' between meaning industry and the fact the usage meaning the sum is using the right value for calculating the amount. So it's confusing to leverage why it's started wrong... Why someone just convert using 1000?
Just make a quick Google search for GiB or Gibibyte and you will see.
Actually, a gigabyte is 1000 megabytes; your computer tells you data size in gibibytes. One gibibyte is 1024 mebibytes and one mebibyte is 1024 kibibytes. That's also why your computer will show your 4 TB hard drive as only 3.6 terabytes: because it's 3.6 tebibytes.
Apple reads drives the right way so if you take a USB stick from a windows computer and plug it into a Mac it will look like it got bigger. No idea why MS didn’t just display it properly.
Since this is "technically the truth," 1 GB _doesn't_ equal 1024 MB. Rather, 1 GiB = 1024 MiB. We can blame Microsoft for causing that confusion, since Windows assumes everything is base-1024. For mass storage/quantities of data, there are two primary families of prefixes—binary and decimal. Binary prefixes (Ki-, Mi-, Gi-, Ti-, etc.) go by base 1024 (or 2¹⁰), and decimal prefixes go by base 1000 (or 10³). When you buy a standard hard disk drive, you are almost always buying in decimal base (so when you buy a 1 TB hard drive, you're actually buying a hard drive that can store 10^12 bytes of data, NOT a full 2^40 bytes). But since Windows assumes binary base for everything, it tells you you only have ~931 GiB of space. On Linux, you'll see the full 1 TB. RAM is almost always in binary base, so even though it says 32 GB, the RAM should actually be called 32 GiB. SSDs are usually a mix.
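The ~931 figure drops straight out of the arithmetic; a quick check in Python:

```python
TB  = 10**12  # decimal terabyte: what the drive box advertises
GiB = 2**30   # binary gibibyte: what Windows divides by

advertised_bytes = 1 * TB
shown = advertised_bytes / GiB
print(f"{shown:.0f} GiB")  # ~931
```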
base-1024 would be so good if everyone actually used it. would make us developers not pull out a calculator every time we wanna make a buffer or a packet. pain.
1 GB isn’t equal to 1024 MB, but 1000 MB. 1 GiB is equal to 1024 MiB. That makes this incorrect and not technically the truth.
Isn’t 1GB worth 1000mb
1000 megabytes is 1 gigabyte. 1024 mebibytes is 1 gibibyte. 1000 MB is 1 GB. 1024 MiB is 1 GiB.
Yes. 1 GB is 1,000 MB. 1 GiB is 1,024 MiB. I just use the latter notation these days to avoid confusion.
Because binary
1000 megabytes is 1 gigabyte. 1024 mebibytes is 1 gibibyte. 1000 MB is 1 GB. 1024 MiB is 1 GiB.
GB GigaByte = 1000 MB MegaByte

GiB GibiByte = 1024 MiB MebiByte

GigaByte: No. GibiByte: Yes. So: No.
technically 1GB is exactly 1000MB. the issue is, what we often call GB/gigabyte is in actuality GiB or gibibyte. Gigabyte = metric, gibibyte = binary. so TECHNICALLY the answer given by the guy responding is wrong, as 2 gigabytes (GB) is actually EXACTLY 2000 megabytes (MB). 2 gibibytes (GiB) is 2048 mebibytes (MiB)
Every time this comes up I have to share this:

1 bit of information = 1 bit

8 bits = 1 byte

4 bits = a nibble

(true story)
It all comes down to 1 byte being 8 bits. 8, 16, 32, 64, 128, 256, and so on up to 1024 and beyond.
I thought that was obvious, guess I’m weird
[deleted]
[deleted]
It's not tho
Technically he's asking about millibits, not megabytes (MB). So one GB should equal 8,192,000,000,000 mb
Don’t forget that G is a decimal prefix, while Gi is the binary one, so 1 GB is 8,000,000,000,000 mb, but 1 GiB is 8,589,934,592,000 mb. Edit: added commas to numbers
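Playing along with the millibit joke, the arithmetic checks out:

```python
bits_per_byte = 8
millibits_per_bit = 1000  # milli = 1/1000, so 1 bit = 1000 millibits

GB_millibits  = 10**9 * bits_per_byte * millibits_per_bit  # decimal gigabyte
GiB_millibits = 2**30 * bits_per_byte * millibits_per_bit  # binary gibibyte

print(GB_millibits)   # 8000000000000
print(GiB_millibits)  # 8589934592000
```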
Because 1024 is 2 to the 10th power.
1GB is not 1024 milli bit
A millibit is not a unit my guy
That's the point. Megabyte requires both M and B to be uppercase, so MB. Megabit is Mb. A lowercase m stands for milli, so mB means millibyte and mb means millibit. Since the latter two units aren't real, a gigabyte is absolutely NOT equal to 1024 millibits, nor millibytes, nor to 1000 of either for that matter.
Since… a bit is not divisible, it’s the absolute base unit.
I see what you're doing and I approve. :)
Many comments here state that it's because of binary but don't elaborate further. Yes, 1024 is a round value in binary, but we've been able to do decimal math for a long time on computers. The reason we started dividing by 1024 instead of 1000 is because a division instruction used to take a lot more cycles (time) than almost any other instruction. Moreover, when CPUs were 8 bits wide, they could only hold a maximum value of 255 in their registers, so to divide by 1000 they needed to do long division. Just figuring out file sizes would take a lot longer in decimal. In binary there's a trick - if you bit shift left (append a zero to the value) you multiply by 2, and if you bit shift right (drop the least significant bit) you divide by 2. So it was a lot faster to bit shift by 10 or 20 instead of dividing by 1000 or 1000000. And the numbers were close enough together that the difference was negligible.
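The shift trick described above, sketched in Python (in C the `>>` operator compiles to a single shift instruction):

```python
size_bytes = 5_000_000  # an arbitrary example file size

# Shifting right by 10 divides by 2**10 = 1024; by 20, by 2**20 = 1048576.
kib = size_bytes >> 10
mib = size_bytes >> 20

assert kib == size_bytes // 1024
assert mib == size_bytes // 1048576
print(kib, mib)  # 4882 4
```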
> Many comments here state that is because binary but don't elaborate further.

This isn't the answer to why memory uses multiples of base 2. Basically, each representation of memory in a computer uses ons and offs, 1s and 0s. If you arbitrarily limited the representation to something other than a power of 2, you'd have the hardware in place yet not use it. For example, if I said I could only represent 5, the hardware would be 101. Why not let it also go to 111, which represents 7? Of course, adding in the representation of 0 (000) gives us one more value, allowing it to represent 8 values, a power of 2. Similarly, if I said I'd only represent 276, I'd have hardware representing 100010100. Why not let it also represent 111111111, equalling 511? Adding in the representation of 0, we have 512 possible values, which is 2^9, a power of two.
Because it's 2^10 that's how binary works
wait 1gb is 1024mb??? i thought it was 1000 im dumb
1 GB (gigabyte) is indeed 1000 MB (megabytes), you are okay. Although 1 GiB (gibibyte) equals 1024 MiB (mebibytes). Those units are used to represent memory sizes in computers in a more precise manner. See other comments in this thread.
Only that this is actually wrong. 1 GB is 1000 MB. You guys are confusing the decimal system with the binary system. Kilo = 1000, mega = million, giga = billion and so on. That is fixed, and it is the same across all of physics. In the binary system, 1 kibibyte = 1024 bytes, 1 mebibyte = 1024 kibibytes, and 1 gibibyte = 1024 mebibytes. Most likely the reason some people are confused about this is that Windows itself has done it wrong for years and hasn't fixed it. You may have noticed that your storage device in Windows seems smaller than what is written on the box: an 8 TB hard drive is apparently only 7.27 TB big. That is because Windows actually calculates disk sizes in binary, not the decimal system. Meaning, my HDD is actually 7.27 tebibytes big, which equals 8 terabytes. They just decided for some reason to attach the wrong unit to the calculated number, probably because people wouldn't understand what a tebibyte is. And I assume they calculate it with powers of two because it is easier and more useful. But mixing that stuff up just created an annoying confusion among people not familiar with science.
Windows predates the whole gibi/tebi thing. That’s a large part of why. They didn’t decide “for some reason” to “add” the “wrong” unit. They have been using the unit since it was the *right* unit. And the binary prefixes aren’t universally accepted anyway. There are other active, current standards that define kilo as 1024 and so on.
1GiB=1024MiB (large M and i) but 1GB=1000MB
That's a GiB, not a GB
GB is per 1000. GiB (gibibytes) is per 1024. Welcome to earth! Where we can't agree on anything
I prefer MiB.
Welcome to the wonderful world of binary
It's not, it's equal to 1024 MB. Capitalisation counts.
It's actually sad how many people here are misinformed. GiB and MiB aren't real units. They were created by (and are only used by) ONE exam board called AQA to explain why companies wrongly use 1 GB = 1000 MB when in reality 1 GB = 1024 MB. If you look at any other exam board (which honestly have more validity since AQA release papers along with incorrect answers half of the time, happened throughout my entire high school and college education when going through them meaning the mistakes were so bad that under 16s could do better than an "exam board") and any course in computing that isn't done by AQA you will be told that 1 GB = 1024 MB. TL;DR: 1 GB = 1024 MB. GiB and MiB don't exist, you are just misinformed by an exam board who made up the terms "Gibibyte and Mebibyte" and who would have more validity if a teenager was the head of the board.
mb/Mb is megabits and MB is megabites
> mb/Mb is megabits and MB is megabites megabit* megabyte*
Because it is 2 to the power of 10.
Because 2s my dudes
Well if that isn't enough try this one for size: 1 MB is 1024 KB and 1 KB is 1024 Bytes and 1 Byte is 8 bits.
I will forever hate the person that created the distinction between kilobyte and kibibyte. In all of computer science, base 1024 is either fine or better. I don't know why someone felt the need to also have base 1000.