In mathematics, the binary logarithm (log₂ n) is the power to which the number 2 must be raised to obtain the value n. That is, for any real number x, x = log₂ n if and only if 2^x = n.
For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, and the binary logarithm of 32 is 5.
The binary logarithm is the logarithm to the base 2 and is the inverse function of the power of two function. Besides log₂, an alternative notation for the binary logarithm is lb (the notation preferred by ISO 31-11 and ISO 80000-2).
Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which the tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory. In computer science, they count the number of steps needed for binary search and related algorithms. Other areas
in which the binary logarithm is frequently used include combinatorics, bioinformatics, the design of sports tournaments, and photography.
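For example, a positive integer n has ⌊log₂ n⌋ + 1 binary digits, so 100 (binary 1100100) needs ⌊log₂ 100⌋ + 1 = 7 bits, and a binary search over a sorted array of n items finishes in at most ⌊log₂ n⌋ + 1 comparisons.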
Binary logarithms are included in the standard C mathematical functions and other mathematical software packages.
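For instance, in C (C99 and later) the binary logarithm is available as the log2 function declared in <math.h>; a minimal sketch of its use:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* log2 from <math.h> returns the base-2 logarithm of its argument */
        printf("log2(32) = %f\n", log2(32.0));   /* 5.000000 */
        printf("log2(10) = %f\n", log2(10.0));   /* about 3.321928 */
        return 0;
    }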
The integer part of a binary logarithm can be found using the find first set operation on an integer value, or by looking up the exponent of a floating point value.
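As an illustrative sketch (one possible approach; the helper names here are made up for this example), the integer part can be obtained with a portable bit-scan loop standing in for a hardware find first set instruction, or by reading the floating-point exponent with frexp from <math.h>; on GCC or Clang the loop could also be replaced by the __builtin_clz intrinsic.

    #include <math.h>
    #include <stdio.h>

    /* Integer part of log2(x) for an integer x > 0: the position of the
       highest set bit, found by counting how many times x can be halved. */
    static unsigned ilog2_int(unsigned long x) {
        unsigned k = 0;
        while (x >>= 1)
            ++k;
        return k;
    }

    /* Integer part of log2(x) for a real x > 0: frexp yields an exponent e
       with x = m * 2^e and 0.5 <= m < 1, so floor(log2 x) = e - 1. */
    static int ilog2_float(double x) {
        int e;
        frexp(x, &e);
        return e - 1;
    }

    int main(void) {
        printf("%u %d\n", ilog2_int(100), ilog2_float(100.0));  /* prints 6 6 */
        return 0;
    }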
The fractional part of the logarithm can be calculated efficiently, one bit at a time, by repeated squaring.
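A sketch of that bit-at-a-time method (the function name log2_fraction is made up for this example): squaring a number doubles its binary logarithm, so for an argument already reduced to the interval [1, 2) each squaring exposes the next fractional bit of the logarithm.

    #include <stdio.h>

    /* Fractional part of log2(y) for 1 <= y < 2, to the given number of bits.
       Squaring y doubles log2(y); whenever the square reaches 2, the bit just
       shifted out of the fraction is 1 and y is renormalized back into [1, 2). */
    static double log2_fraction(double y, int bits) {
        double result = 0.0, place = 0.5;
        for (int i = 0; i < bits; ++i) {
            y *= y;
            if (y >= 2.0) {
                result += place;
                y /= 2.0;
            }
            place /= 2.0;
        }
        return result;
    }

    int main(void) {
        printf("%f\n", log2_fraction(1.5, 32));  /* about 0.584963 = log2(1.5) */
        return 0;
    }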
History of logarithms
The powers of two have been known since antiquity; for instance, they appear in Euclid's Elements, Props. IX.32 (on the factorization of powers of two) and IX.36 (half of the Euclid–Euler theorem, on the structure of even perfect numbers).
The binary logarithm of a power of two is just its position in the ordered sequence of powers of two.
On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms in 1544.