What’s a bit?
We explain what a bit is, what its main uses are, and how values expressed in this computer unit are calculated.
A bit is the minimum unit of information used in computing.
In computing, a bit (a contraction of "binary digit") is a single value in the binary numbering system. The system is so named because it uses only two basic values, 1 and 0, with which any number of two-state conditions can be represented: on and off, true and false, present and absent, and so on.
A bit is therefore the minimum unit of information used by computers, all of whose systems rest on that binary code. A single bit represents one of two values, 1 or 0, but combining several bits yields many more possibilities, for example:
2-bit model (4 combinations):
00 – Both off
01 – First off, second on
10 – First on, second off
11 – Both on
With these two bits we can represent four distinct values. Now suppose we have 8 bits (an octet), equivalent in most systems to a byte: 256 different values can be represented.
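The growth from 4 values with 2 bits to 256 with 8 bits follows the rule that n bits give 2^n combinations. A minimal Python sketch (the function name `bit_patterns` is chosen here for illustration) enumerates them:

```python
from itertools import product

# Every pattern an n-bit string can take: there are 2**n of them.
def bit_patterns(n):
    return ["".join(bits) for bits in product("01", repeat=n)]

print(bit_patterns(2))        # ['00', '01', '10', '11'] - the four values above
print(len(bit_patterns(8)))   # 256 - the values an octet can hold
```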
The binary system thus reads each bit according to its value (1 or 0) and its position in the string: each position one step to the left is worth twice as much, and each position one step to the right is worth half as much. For example:
To represent the number 20 in binary:

Binary value:        1    0    1    0    0
Value per position:  16   8    4    2    1
Result: 16 + 0 + 4 + 0 + 0 = 20
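The positional sum above can be checked in a few lines of Python (the helper name `binary_to_int` is illustrative; Python's built-in `int(s, 2)` does the same job):

```python
# Sum the place values of the bits that are on (1), right to left.
def binary_to_int(bits):
    total = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position   # 1, 2, 4, 8, 16, ...
    return total

print(binary_to_int("10100"))  # 16 + 4 = 20
```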
Another example: to represent the number 2.75 in binary, placing the binary point in the middle of the figure (010.11):

Binary value:        0    1    0  .  1     1
Value per position:  4    2    1     0.5   0.25
Result: 0 + 2 + 0 + 0.5 + 0.25 = 2.75
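The fractional case works the same way, except the place values keep halving past the binary point (0.5, 0.25, ...). A small sketch, with the illustrative helper `fixed_point_value` taking the digits before and after the point separately:

```python
# Integer part: weights 1, 2, 4, ... from right to left.
# Fractional part: weights 0.5, 0.25, ... from left to right.
def fixed_point_value(int_bits, frac_bits):
    value = sum(2 ** i for i, b in enumerate(reversed(int_bits)) if b == "1")
    value += sum(2 ** -(i + 1) for i, b in enumerate(frac_bits) if b == "1")
    return value

print(fixed_point_value("010", "11"))  # 2 + 0.5 + 0.25 = 2.75
```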
The bits set to 0 (off) contribute nothing; only the bits set to 1 (on) are counted, each according to the numerical weight of its position in the string. The same representation mechanism is then applied to alphanumeric characters through encodings such as ASCII.
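The ASCII mapping mentioned above can be seen directly: each character corresponds to a number, which the machine stores as a string of bits.

```python
# Show each character's ASCII code and its 8-bit binary representation.
for ch in "Bit":
    code = ord(ch)                       # character -> ASCII number
    print(ch, code, format(code, "08b"))  # number -> zero-padded bit string
# B 66 01000010
# i 105 01101001
# t 116 01110100
```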
Bits also describe the architecture of a computer's microprocessor: there are 4-, 8-, 16-, 32- and 64-bit architectures. The figure indicates the width of the processor's internal registers, and with it the calculation capacity of its Arithmetic Logic Unit (ALU).
For example, the first computers in the x86 series (the Intel 8086 and Intel 8088) both had 16-bit processors; the noticeable difference in their speed came not from their processing power but from their external data buses, 16 bits wide on the 8086 and 8 bits wide on the 8088.
Similarly, bits are used to measure the storage capacity of digital memory.
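Storage capacity is usually quoted in bytes while transmission speeds are quoted in bits, so converting between the two comes up often. A minimal sketch (the helper name `bytes_to_bits` is illustrative):

```python
BITS_PER_BYTE = 8

# Convert a storage figure in bytes to the equivalent number of bits.
def bytes_to_bits(n_bytes):
    return n_bytes * BITS_PER_BYTE

print(bytes_to_bits(1))      # 8 - one byte is eight bits
print(bytes_to_bits(1024))   # 8192 - one kibibyte expressed in bits
```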