In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
If $X$ is a discrete random variable with distribution given by
$$\operatorname{P}(X = x_k) = p_k \quad \text{for } k = 1, 2, \ldots$$
then the entropy of $X$ is defined as
$$H(X) = -\sum_{k \ge 1} p_k \log p_k .$$
If $X$ is a continuous random variable with probability density $p(x)$, then the differential entropy of $X$ is defined as
$$H(X) = -\int_{-\infty}^{\infty} p(x) \log p(x) \, dx .$$
The quantity $p(x) \log p(x)$ is understood to be zero whenever $p(x) = 0$.
This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and Differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing $H(X)$ will also maximize the more general forms.
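As a quick sanity check on the discrete definition, the following Python sketch computes $H(X)$ for a few distributions on the same four outcomes and confirms that the uniform distribution has the largest entropy. The helper `entropy` and the example probability vectors are illustrative choices, not part of any standard library API.

```python
import numpy as np

def entropy(p, base=np.e):
    """Shannon entropy of a discrete distribution p.

    Terms with p_k = 0 contribute nothing, matching the convention
    that p log p is taken to be zero whenever p = 0.
    """
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                       # drop zero-probability outcomes
    return -np.sum(nz * np.log(nz)) / np.log(base)

# Three distributions on the same four outcomes.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed  = [0.70, 0.10, 0.10, 0.10]
point   = [1.00, 0.00, 0.00, 0.00]

for name, p in [("uniform", uniform), ("skewed", skewed), ("point", point)]:
    print(f"{name:8s} H = {entropy(p, base=2):.4f} bits")

# The uniform distribution attains the maximum, log2(4) = 2 bits;
# the degenerate point mass has entropy 0.
```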
The base of the logarithm is not important as long as the same one is used consistently: changing the base merely rescales the entropy. Information theorists may prefer base 2, which expresses the entropy in bits; mathematicians and physicists often prefer the natural logarithm, which expresses it in nats.
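To make the rescaling explicit: writing $H_b$ for the entropy computed with base-$b$ logarithms, the change-of-base formula for logarithms gives
$$H_b(X) = \frac{H_a(X)}{\log_a b}, \qquad \text{so in particular} \quad H_2(X) = \frac{H_e(X)}{\ln 2},$$
i.e. one nat equals $1/\ln 2 \approx 1.443$ bits.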
The choice of the measure $dx$ is, however, crucial in determining the entropy and the resulting maximum entropy distribution, even though the usual recourse to the Lebesgue measure is often defended as "natural".
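One way to see this sensitivity is that differential entropy, unlike discrete entropy, is not invariant under a change of coordinates: rescaling by $Y = aX$ with $a > 0$ shifts it by an additive constant,
$$h(aX) = h(X) + \log a .$$
For example, the uniform density on $[0, \tfrac{1}{2}]$ has differential entropy $\log \tfrac{1}{2} < 0$, which is impossible for a discrete entropy; the value of $h$ depends on the reference measure with respect to which the density is taken.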
Many statistical distributions of interest in applications are those for which the moments or other measurable quantities are constrained to be constants. The following theorem by Ludwig Boltzmann gives the form of the probability density under these constraints.
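In outline (a sketch of the statement, with existence and regularity conditions omitted): among densities $p$ on a set $S$ satisfying moment constraints $\int_S f_j(x)\, p(x)\, dx = a_j$ for $j = 1, \ldots, n$, the entropy-maximizing density, when it exists, has the exponential-family form
$$p(x) = c \, \exp\!\Big( \sum_{j=1}^{n} \lambda_j f_j(x) \Big), \qquad x \in S,$$
where the constant $c$ and the multipliers $\lambda_j$ are determined by normalization and the constraints. For example, fixing only the mean on $[0, \infty)$ yields the exponential distribution, while fixing the mean and variance on $\mathbb{R}$ yields the normal distribution.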