Summary
In information theory, the binary entropy function, denoted $\operatorname H(p)$ or $\operatorname H_\text{b}(p)$, is defined as the entropy of a Bernoulli process with probability $p$ of one of two values. It is a special case of $\operatorname H(X)$, the entropy function. Mathematically, the Bernoulli trial is modelled as a random variable $X$ that can take on only two values: 0 and 1, which are mutually exclusive and exhaustive. If $\operatorname{Pr}(X=1)=p$, then $\operatorname{Pr}(X=0)=1-p$ and the entropy of $X$ (in shannons) is given by
$$\operatorname H_\text{b}(p) = -p \log_2 p - (1-p) \log_2 (1-p),$$
where $0 \log_2 0$ is taken to be 0. The logarithms in this formula are usually taken to the base 2; see binary logarithm.

When $p = \tfrac{1}{2}$, the binary entropy function attains its maximum value of 1 shannon (1 bit); this is the case of an unbiased coin flip. $\operatorname H_\text{b}(p)$ is distinguished from the entropy function $\operatorname H(X)$ in that the former takes a single real number as a parameter whereas the latter takes a distribution or random variable as a parameter. Sometimes the binary entropy function is also written as $\operatorname H_2(p)$. However, it is different from and should not be confused with the Rényi entropy, which is denoted as $\operatorname H_2(X)$.

In terms of information theory, entropy is considered to be a measure of the uncertainty in a message. To put it intuitively, suppose $p = 0$. At this probability, the event is certain never to occur, and so there is no uncertainty at all, leading to an entropy of 0. If $p = 1$, the result is again certain, so the entropy is 0 here as well. When $p = \tfrac{1}{2}$, the uncertainty is at a maximum; if one were to place a fair bet on the outcome in this case, there is no advantage to be gained from prior knowledge of the probabilities. In this case, the entropy is maximal at a value of 1 bit. Intermediate values fall between these cases; for instance, if $p = \tfrac{1}{4}$, there is still a measure of uncertainty about the outcome, but one can still predict the outcome correctly more often than not, so the uncertainty measure, or entropy, is less than 1 full bit.

The derivative of the binary entropy function may be expressed as the negative of the logit function:
$$\frac{d}{dp} \operatorname H_\text{b}(p) = -\operatorname{logit}_2(p) = -\log_2\!\left(\frac{p}{1-p}\right).$$

The Taylor series of the binary entropy function in a neighborhood of 1/2 is
$$\operatorname H_\text{b}(p) = 1 - \frac{1}{2\ln 2} \sum_{n=1}^{\infty} \frac{(1-2p)^{2n}}{n(2n-1)}$$
for $0 \le p \le 1$.

The following bounds hold for $0 < p < 1$:
$$\ln(2) \cdot \log_2(p) \cdot \log_2(1-p) \le \operatorname H_\text{b}(p) \le \log_2(p) \cdot \log_2(1-p)$$
and
$$4p(1-p) \le \operatorname H_\text{b}(p) \le \bigl(4p(1-p)\bigr)^{1/\ln 4},$$
where $\ln$ denotes the natural logarithm.
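These relationships are easy to check numerically. The following is a minimal sketch (plain Python, standard library only; the helper names binary_entropy and logit2 are illustrative, not from any particular package) that evaluates $H_\text{b}(p)$, compares a numerical derivative with the negative base-2 logit, and spot-checks the bound $4p(1-p) \le H_\text{b}(p) \le (4p(1-p))^{1/\ln 4}$.

```python
import math

def binary_entropy(p: float) -> float:
    """H_b(p) = -p*log2(p) - (1-p)*log2(1-p), with 0*log2(0) taken as 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def logit2(p: float) -> float:
    """Base-2 logit: log2(p / (1 - p))."""
    return math.log2(p / (1 - p))

for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    h = binary_entropy(p)

    # Derivative check: dH_b/dp should equal -logit2(p) (central difference).
    eps = 1e-6
    d_num = (binary_entropy(p + eps) - binary_entropy(p - eps)) / (2 * eps)
    assert abs(d_num + logit2(p)) < 1e-5

    # Bound check: 4p(1-p) <= H_b(p) <= (4p(1-p))**(1/ln 4).
    lower = 4 * p * (1 - p)
    upper = lower ** (1 / math.log(4))
    assert lower <= h <= upper + 1e-12

    print(f"p={p:.2f}  H_b(p)={h:.4f}")

# The maximum value, 1 bit, is attained at p = 1/2 (an unbiased coin flip).
assert binary_entropy(0.5) == 1.0
```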
Related concepts (4)
Binary entropy function
In information theory, the binary entropy function, denoted $\operatorname H(p)$ or $\operatorname H_\text{b}(p)$, is defined as the entropy of a Bernoulli process with probability $p$ of one of two values. It is a special case of $\operatorname H(X)$, the entropy function. Mathematically, the Bernoulli trial is modelled as a random variable $X$ that can take on only two values: 0 and 1, which are mutually exclusive and exhaustive. If $\operatorname{Pr}(X=1)=p$, then $\operatorname{Pr}(X=0)=1-p$ and the entropy of $X$ (in shannons) is given by $\operatorname H_\text{b}(p) = -p \log_2 p - (1-p) \log_2 (1-p)$, where $0 \log_2 0$ is taken to be 0. The logarithms in this formula are usually taken to the base 2.
Bernoulli trial
[Image caption: a coin toss (heads or tails) is an example of a Bernoulli trial.] In probability theory, a Bernoulli trial with parameter p (a real number between 0 and 1) is a random experiment (that is, one subject to chance) with two outcomes, success or failure. The typical example is the toss of a possibly biased coin. One then writes p for the probability of obtaining heads (corresponding, say, to a success) and 1 − p for that of obtaining tails. The real number p represents the probability of a success.
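As a concrete illustration (a minimal sketch, not taken from the page or any course material; the value p = 0.3 is an arbitrary example), a possibly biased coin flip can be simulated as a Bernoulli trial:

```python
import random

def bernoulli_trial(p: float) -> int:
    """Return 1 (success, e.g. heads) with probability p, else 0 (failure)."""
    return 1 if random.random() < p else 0

p = 0.3  # probability of success for a biased coin
trials = [bernoulli_trial(p) for _ in range(100_000)]
print(sum(trials) / len(trials))  # empirical frequency, close to p
```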
Shannon entropy
In information theory, the Shannon entropy, or simply entropy, is a mathematical function which, intuitively, corresponds to the amount of information contained in or delivered by an information source. This source can be a text written in a given language, an electrical signal, or any computer file (a sequence of bytes). It was introduced by Claude Shannon. From the receiver's point of view, the more varied the information the source emits, the greater the entropy (i.e., the uncertainty about what the source emits).
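To make the definition concrete, here is a minimal sketch (plain Python, standard library only; the distributions are arbitrary examples) that computes the entropy of a discrete source in bits; the two-outcome case reduces to the binary entropy function above.

```python
import math

def shannon_entropy(probabilities: list[float]) -> float:
    """Entropy in bits of a discrete distribution; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A uniform source over 4 symbols carries 2 bits per symbol.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0

# Two equally likely outcomes give H_b(1/2) = 1 bit.
print(shannon_entropy([0.5, 0.5]))  # 1.0
```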
Related courses (5)
CS-101: Advanced information, computation, communication I
Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics a
BIO-369: Randomness and information in biological data
Biology is becoming more and more a data science, as illustrated by the explosion of available genome sequences. This course aims to show how we can make sense of such data and harness it in order to
COM-102: Advanced information, computation, communication II
Text, sound, and images are examples of information sources stored in our computers and/or communicated over the Internet. How do we measure, compress, and protect the information they contain?