What’s a byte?
We explain what a byte is, the origin of the term, and what it is used for, along with its main features and the scale of units built on it.
A byte uses 8 bits to represent a letter in binary code.
A byte is the basic unit of information used in computing and telecommunications, equivalent to an ordered and regular set of bits (binary code), generally stipulated in 8.
That is, 8 bits are equivalent to one byte, although this size has varied historically, so strictly speaking a byte is a sequence of n ordered bits.
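As a minimal illustration (in Python, a language choice of mine rather than the article's), an 8-bit byte can hold 2^8 = 256 distinct values, and a single letter fits in one such pattern:

```python
# A byte is 8 bits, so it can represent 2**8 = 256 distinct values (0-255).
BITS_PER_BYTE = 8
values_per_byte = 2 ** BITS_PER_BYTE
print(values_per_byte)  # 256

# The letter 'A' stored in one byte, shown as its 8-bit binary pattern.
letter = "A"
byte_value = ord(letter)          # 65 in ASCII
bits = format(byte_value, "08b")  # '01000001'
print(bits)
```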
The unit's usual symbol is the uppercase letter B, which distinguishes it from the lowercase b used for the bit.
The origin of the term is sometimes traced to the English phrase Binary Tuple, meaning an ordered sequence of binary elements.
The phonetic similarity between byte and bite also reinforced its adoption, since a byte was the minimum amount of data a system could take in at a time (the smallest "bite").
As for the amount of information a byte represents: since it takes 8 bits to encode a letter in most commercial computing systems today, one byte roughly equals one character. An entire paragraph may therefore exceed 100 B, and even a very short text quickly reaches the next unit up, the kilobyte (traditionally 1024 B = 1 KB).
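To make the "one byte per letter" rule concrete, here is a small Python sketch (my own illustration, not part of the original article). ASCII letters take exactly one byte each, while characters outside ASCII can take more than one byte in encodings such as UTF-8:

```python
# In ASCII-compatible encodings, one letter occupies exactly one byte.
sentence = "A byte is the basic unit of information."
size_in_bytes = len(sentence.encode("ascii"))
print(size_in_bytes)  # equals len(sentence): one byte per character

# Beyond ASCII, a single character may need more than one byte (UTF-8).
print(len("ñ".encode("utf-8")))  # 2
```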
From there, a whole scale of units of digital information follows. In the traditional binary convention, each step is a factor of 1024 (ISO/IEC 80000-13 gives these binary multiples their own names and symbols, kibibyte KiB, mebibyte MiB, and so on, reserving kB, MB, etc. for powers of 1000):
- 1024 B = 1 KB (one kilobyte, roughly a very short text)
- 1024 KB = 1 MB (one megabyte, roughly a complete novel)
- 1024 MB = 1 GB (one gigabyte, roughly an entire library shelf full of books)
- 1024 GB = 1 TB (one terabyte, roughly a small complete library)
- 1024 TB = 1 PB (one petabyte, on the order of the data Google handles worldwide in an hour)
- 1024 PB = 1 EB (one exabyte, comparable to all the information on the Internet at the end of 2001).
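The scale above can be sketched as a small conversion helper in Python (an illustrative sketch of mine; the function name human_readable is hypothetical, not from any standard library):

```python
# Binary-prefix scale: each step up is a factor of 1024.
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB"]

def human_readable(num_bytes: int) -> str:
    """Express a byte count in the largest unit that keeps the value >= 1."""
    value = float(num_bytes)
    for unit in UNITS:
        if value < 1024 or unit == UNITS[-1]:
            return f"{value:.1f} {unit}"
        value /= 1024  # move up one unit on the scale

print(human_readable(512))          # 512.0 B
print(human_readable(1024))         # 1.0 KB
print(human_readable(5 * 1024**2))  # 5.0 MB
```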