Error exponent
In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths. For example, if the probability of error $P_{\text{error}}$ of a decoder drops as $e^{-n\alpha}$, where $n$ is the block length, the error exponent is $\alpha$. In this example, $-\frac{\ln P_{\text{error}}}{n}$ approaches $\alpha$ for large $n$.
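A minimal numerical sketch of this definition, assuming a hypothetical decoder whose error probability decays as $e^{-n\alpha}$ with an arbitrarily chosen $\alpha = 0.5$ (the function `p_error` and the constant `ALPHA` are illustrative, not taken from any real code):

```python
import math

# Hypothetical decoder whose block error probability decays as e^(-n * alpha).
# ALPHA = 0.5 is an assumed value used only for illustration.
ALPHA = 0.5

def p_error(n: int) -> float:
    """Assumed error probability of the decoder at block length n."""
    return math.exp(-n * ALPHA)

# The error exponent is the limit of -ln(P_error) / n for large n.
for n in (10, 100, 1000):
    estimate = -math.log(p_error(n)) / n
    print(f"n = {n:4d}  -ln(P_error)/n = {estimate:.4f}")
# Each estimate equals ALPHA = 0.5, illustrating that the exponent is alpha.
```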
Rényi entropy
The Rényi entropy, due to Alfréd Rényi, is a mathematical function that corresponds to the amount of information contained in the collision probability of a random variable. Given a discrete random variable $X$ with $n$ possible values $x_1, \dots, x_n$, and a real parameter $\alpha$ that is strictly positive and different from 1, the Rényi entropy of order $\alpha$ of $X$ is defined by the formula:
$$H_\alpha(X) = \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{n} p_i^{\alpha} \right)$$
where $p_i$ is the probability that $X$ takes the value $x_i$. The Rényi entropy generalizes other notions of entropy, each of which corresponds to a particular value of $\alpha$.
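A small sketch of this formula in Python, using an arbitrary example distribution and showing how particular orders recover the Shannon entropy (limit $\alpha \to 1$), the collision entropy ($\alpha = 2$), and the min-entropy ($\alpha \to \infty$); the distribution `p` is an assumption made for illustration:

```python
import math

def renyi_entropy(probs, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1), in bits."""
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

p = [0.5, 0.25, 0.25]          # arbitrary example distribution

shannon = -sum(q * math.log2(q) for q in p)    # order-1 limit of the Rényi entropy
print("alpha -> 1 (Shannon):  ", shannon)
print("alpha = 0.999:         ", renyi_entropy(p, 0.999))
print("alpha = 2 (collision): ", renyi_entropy(p, 2))
print("alpha = 100 (~min):    ", renyi_entropy(p, 100))
print("min-entropy:           ", -math.log2(max(p)))
```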
Universal code
In data compression, a universal code is a prefix code whose expected word length does not exceed the expected word length of the optimal code by more than a constant factor. The Elias gamma, delta and omega codes, as well as the Zeta, Fibonacci, Levenshtein and Even-Rodeh codes, produce universal prefix codes. The unary, Rice and Golomb codes produce prefix codes that are not universal.
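As an illustration, here is a short sketch of the Elias gamma code (one of the universal codes named above) alongside the unary code, showing how their codeword lengths differ; the encoding functions are a straightforward textbook rendering, not a production implementation:

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code of a positive integer n: (len-1) zeros, then n in binary."""
    if n < 1:
        raise ValueError("Elias gamma is defined for positive integers")
    binary = bin(n)[2:]              # binary representation without the '0b' prefix
    return "0" * (len(binary) - 1) + binary

def unary(n: int) -> str:
    """Unary code: n-1 zeros followed by a one (a prefix code, but not universal)."""
    return "0" * (n - 1) + "1"

for n in (1, 2, 5, 17):
    print(f"{n:3d}  gamma = {elias_gamma(n):>10s}  unary = {unary(n)}")
# Gamma codeword length grows roughly like 2*log2(n) + 1 bits, while the unary
# codeword length grows like n bits, which is why gamma is universal and unary is not.
```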
Typical set
In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP), which is a kind of law of large numbers. The notion of typicality is only concerned with the probability of a sequence and not the actual sequence itself.
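A simulation sketch of this idea, assuming a Bernoulli source with an arbitrarily chosen bias `p = 0.3`, block length `n = 1000`, and tolerance `eps = 0.05`: a sequence is counted as typical when its per-symbol log-probability is within `eps` of the source entropy.

```python
import math, random

random.seed(0)
p = 0.3                      # assumed Bernoulli(p) source
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))   # source entropy in bits

def log2_prob(seq):
    """log2 of the probability of a particular binary sequence under Bernoulli(p)."""
    ones = sum(seq)
    return ones * math.log2(p) + (len(seq) - ones) * math.log2(1 - p)

n, eps, trials = 1000, 0.05, 10_000
typical = 0
for _ in range(trials):
    seq = [1 if random.random() < p else 0 for _ in range(n)]
    # seq is eps-typical if -log2 P(seq) / n is within eps of the entropy H,
    # i.e. P(seq) is close to 2^(-n H).
    if abs(-log2_prob(seq) / n - H) < eps:
        typical += 1
print(f"H = {H:.3f} bits; fraction of drawn sequences that are typical: {typical / trials:.3f}")
```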
Entropy rate
In the mathematical theory of probability, the entropy rate or source information rate of a stochastic process is, informally, the time density of the average information in a stochastic process. For stochastic processes with a countable index, the entropy rate $H(X)$ is the limit of the joint entropy of $n$ members of the process divided by $n$, as $n$ tends to infinity:
$$H(X) = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \dots, X_n)$$
when the limit exists. An alternative, related quantity is:
$$H'(X) = \lim_{n \to \infty} H(X_n \mid X_{n-1}, X_{n-2}, \dots, X_1)$$
For strongly stationary stochastic processes, $H(X) = H'(X)$.
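A concrete sketch for a stationary two-state Markov chain, where both limits reduce to the stationary-weighted entropy of the transition rows; the transition matrix `P` is an assumed example:

```python
import math

# Entropy rate of a stationary two-state Markov chain (illustrative example).
# P[i][j] = Pr(next state = j | current state = i).
P = [[0.9, 0.1],
     [0.4, 0.6]]

# Stationary distribution pi solves pi = pi P; for two states it has a closed form.
a, b = P[0][1], P[1][0]              # probabilities of leaving states 0 and 1
pi = [b / (a + b), a / (a + b)]

def row_entropy(row):
    return -sum(p * math.log2(p) for p in row if p > 0)

# For a stationary Markov chain, H(X) = H'(X) = sum_i pi_i * H(next | current = i).
rate = sum(pi_i * row_entropy(row) for pi_i, row in zip(pi, P))
print(f"stationary distribution: {pi}")
print(f"entropy rate: {rate:.4f} bits per symbol")
```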
Transmission system
In telecommunications, a transmission system is a system that transmits a signal from one place to another. The signal can be an electrical, optical or radio signal. The goal of a transmission system is to transmit data accurately and efficiently from point A to point B over a distance, using a variety of technologies such as copper and fiber-optic cables, satellite links, and wireless communication technologies.
Hamming space
In statistics and coding theory, a Hamming space (named after American mathematician Richard Hamming) is usually the set of all binary strings of length N. It is used in the theory of coding signals and transmission. More generally, a Hamming space can be defined over any alphabet (set) Q as the set of words of a fixed length N with letters from Q. If Q is a finite field, then a Hamming space over Q is an N-dimensional vector space over Q. In the typical, binary case, the field is thus GF(2) (also denoted by Z2).
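A brief sketch enumerating a small binary Hamming space and computing the Hamming distance between two of its words; the length `N = 3` is an arbitrary choice for illustration:

```python
from itertools import product

# The binary Hamming space of length N is the set of all 2^N binary strings.
N = 3
space = ["".join(bits) for bits in product("01", repeat=N)]
print(space)                                  # 8 words of length 3 over GF(2)

def hamming_distance(u: str, v: str) -> int:
    """Number of positions at which two words of equal length differ."""
    return sum(a != b for a, b in zip(u, v))

print(hamming_distance("101", "011"))         # -> 2
```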
Asymptotic equipartition property
In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of the typical set used in theories of data compression. Roughly speaking, the theorem states that although there are many series of results that may be produced by a random process, the one actually produced is most probably from a loosely defined set of outcomes that all have approximately the same chance of being the one actually realized.
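A simulation sketch of the AEP's law-of-large-numbers flavor, again assuming a Bernoulli source with an arbitrarily chosen bias: the per-symbol log-probability of the single realized sequence approaches the source entropy as the block length grows.

```python
import math, random

random.seed(1)
p = 0.2                      # assumed Bernoulli(p) source
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# AEP: -(1/n) log2 P(X_1, ..., X_n) converges to the entropy H as n grows.
for n in (10, 100, 1000, 10000):
    seq = [random.random() < p for _ in range(n)]
    ones = sum(seq)
    log2_p_seq = ones * math.log2(p) + (n - ones) * math.log2(1 - p)
    print(f"n = {n:6d}  -(1/n) log2 P(seq) = {-log2_p_seq / n:.4f}  (H = {H:.4f})")
```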
Minimum message length
Minimum message length (MML) is a Bayesian information-theoretic method for statistical model comparison and selection. It provides a formal information-theoretic restatement of Occam's razor: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of the data is more likely to be correct (where the explanation consists of the statement of the model, followed by the lossless encoding of the data using the stated model).
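A toy sketch in the spirit of this two-part message idea, not Wallace's exact MML formulation: the data set, the 7-bit parameter precision, and both candidate models are assumptions chosen purely for illustration.

```python
import math

# Toy two-part message-length comparison for coin-flip data.
# Model A: fair coin (no parameter to state). Model B: biased coin whose
# estimated bias is stated to an assumed 7-bit precision before the data.
data = [1] * 70 + [0] * 30          # assumed data: 70 heads, 30 tails
n, heads = len(data), sum(data)

def data_length(p):
    """Length in bits of the data encoded optimally under Bernoulli(p)."""
    return -(heads * math.log2(p) + (n - heads) * math.log2(1 - p))

length_A = data_length(0.5)                   # model A: data encoding only
param_bits = 7                                # assumed precision of the stated parameter
p_hat = heads / n
length_B = param_bits + data_length(p_hat)    # model B: parameter statement + data

print(f"model A (fair coin)  : {length_A:.1f} bits")
print(f"model B (biased coin): {length_B:.1f} bits")
# The model giving the shorter total message is preferred; here the extra
# 7 bits spent stating the bias buy a much shorter encoding of the data.
```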