Summary
The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge about a system is the one with the largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information). Stated another way: take precisely stated prior data or testable information about a probability distribution function, and consider the set of all trial probability distributions that would encode that prior data. According to this principle, the distribution with maximal information entropy is the best choice.

The principle was first expounded by E. T. Jaynes in two 1957 papers, where he emphasized a natural correspondence between statistical mechanics and information theory. In particular, Jaynes offered a new and very general rationale for why the Gibbsian method of statistical mechanics works. He argued that the entropy of statistical mechanics and the information entropy of information theory are essentially the same thing; consequently, statistical mechanics should be seen as a particular application of a general tool of logical inference and information theory.

In most practical cases, the stated prior data or testable information is given by a set of conserved quantities (average values of some moment functions) associated with the probability distribution in question. This is how the maximum entropy principle is most often used in statistical thermodynamics. Another possibility is to prescribe some symmetries of the probability distribution. The equivalence between conserved quantities and corresponding symmetry groups implies a similar equivalence for these two ways of specifying the testable information in the maximum entropy method. The maximum entropy principle is also needed to guarantee the uniqueness and consistency of probability assignments obtained by different methods, statistical mechanics and logical inference in particular; it makes explicit our freedom in using different forms of prior data.
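As a concrete illustration of constraining a distribution by an average value, consider the dice problem often associated with Jaynes: among all distributions on {1, ..., 6}, find the one of maximum entropy whose mean equals a prescribed value. The maximizer has the Gibbs form p_k ∝ exp(−λk), and the sketch below (a minimal illustration assuming NumPy and SciPy; the target mean 4.5 is an arbitrary example value) solves numerically for the Lagrange multiplier λ:

import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on {1,...,6} subject to a prescribed mean.
# The maximizer has the Gibbs form p_k ∝ exp(-lam * k); we solve for the
# Lagrange multiplier lam that reproduces the target mean.
values = np.arange(1, 7)
target_mean = 4.5  # illustrative testable information: E[X] = 4.5

def mean_of(lam):
    w = np.exp(-lam * values)
    p = w / w.sum()
    return p @ values

# mean_of is monotone in lam, so a simple bracketing root-finder suffices.
lam = brentq(lambda l: mean_of(l) - target_mean, -10.0, 10.0)
p = np.exp(-lam * values)
p /= p.sum()
print("lambda =", lam)
print("p =", p)
print("entropy =", -(p * np.log(p)).sum())

With no mean constraint (λ = 0), the same form recovers the uniform distribution, which is the maximum entropy distribution given normalization alone.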
Related courses (10)
COM-406: Foundations of Data Science
We discuss a set of topics that are important for the understanding of modern data science but that are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and…
MATH-496: Computational linear algebra
This is an introductory course on the concentration of measure phenomenon: functions that depend on many random variables tend to be close to constant.
PHYS-467: Machine learning for physicists
Machine learning and data analysis are becoming increasingly central in sciences, including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced.
Show more
Related publications (139)
Related concepts (16)
Prior probability
In Bayes' theorem, the prior probability (or prior) denotes a probability based on data or knowledge available before an observation. It contrasts with the corresponding posterior probability (or posterior), which relies on knowledge acquired after that observation. Bayes' theorem is stated as follows: if P(B) ≠ 0, then P(A|B) = P(B|A) P(A) / P(B). Here P(A) denotes the prior probability of A, while P(A|B) denotes the posterior probability, that is, the conditional probability of A given B.
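A minimal numeric sketch of the prior-to-posterior update (the events and numbers below are hypothetical illustrations, not from the source):

p_A = 0.01              # prior P(A)
p_B_given_A = 0.95      # likelihood P(B|A)
p_B_given_notA = 0.05   # likelihood P(B|not A)

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)  # law of total probability
p_A_given_B = p_B_given_A * p_A / p_B                 # Bayes' theorem
print(p_A_given_B)  # ~0.161: the observation raises P(A) from 0.01 to about 0.16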
Exponential family
In probability theory and statistics, an exponential family is a class of probability distributions whose general form is f_X(x | θ) = h(x) exp( η(θ) · T(x) − A(θ) ), where x is the random variable, θ is a parameter, and η(θ) is its natural parameter. Exponential families exhibit remarkable algebraic and inferential properties. Characterizing a distribution as a member of an exponential family makes it possible to rewrite the distribution in terms of what are called its natural parameters.
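The link to the maximum entropy principle is direct: maximizing entropy under moment constraints produces exactly an exponential family. A sketch of the standard Lagrange-multiplier argument, in LaTeX (notation chosen here for illustration):

\max_{p}\; H[p] = -\int p(x)\,\log p(x)\,dx
\quad\text{subject to}\quad
\int p(x)\,dx = 1, \qquad \mathbb{E}_p[T_k(X)] = t_k ,

whose stationary points have the form

p(x) \propto \exp\!\Big(\sum_k \lambda_k\, T_k(x)\Big),

an exponential family whose natural parameters \lambda_k are fixed by the constraint values t_k.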
Partition function (mathematics)
The partition function or configuration integral, as used in probability theory, information theory and dynamical systems, is a generalization of the definition of a partition function in statistical mechanics. It is a special case of a normalizing constant in probability theory, namely that of the Boltzmann distribution. The partition function occurs in many problems of probability theory because, in situations where there is a natural symmetry, its associated probability measure, the Gibbs measure, has the Markov property.
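Concretely, for a finite state space with an energy function E and inverse temperature β, the partition function is the normalizing constant of the Boltzmann (Gibbs) distribution, in LaTeX:

Z(\beta) = \sum_{x} e^{-\beta E(x)}, \qquad P(x) = \frac{e^{-\beta E(x)}}{Z(\beta)} .

This is the same Gibbs form that the maximum entropy construction above yields when the constrained moment is the average energy.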
Show more