Entropy (information theory)
In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable $X$, which takes values in the alphabet $\mathcal{X}$ and is distributed according to $p\colon \mathcal{X} \to [0,1]$, the entropy is
$$H(X) := -\sum_{x \in \mathcal{X}} p(x) \log p(x),$$
where $\sum$ denotes the sum over the variable's possible values. The choice of base for $\log$, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys".
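As a minimal illustration of the formula above, the following Python sketch (the function name shannon_entropy is ours, not from the source) computes the entropy of a discrete distribution given as a list of probabilities:

```python
import math

def shannon_entropy(probs, base=2):
    """Shannon entropy H(X) = -sum p(x) * log(p(x)), skipping zero-probability outcomes."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries one bit of entropy; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit
print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits
```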
Inductive probability
Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world. There are three sources of knowledge: inference, communication, and deduction. Communication relays information found using other methods. Deduction establishes new facts based on existing facts. Inference establishes new facts from data. Its basis is Bayes' theorem.
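Since inference here rests on Bayes' theorem, a small Python sketch of a single Bayesian update may help; the prior and likelihood values below are hypothetical, chosen only to illustrate the mechanics:

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Posterior P(H | E) via Bayes' theorem:
    P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | not H) P(not H)]."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: prior belief 0.3; the evidence is four times more
# likely under the hypothesis than under its negation.
print(bayes_update(prior=0.3, likelihood=0.8, likelihood_given_not=0.2))  # ~0.63
```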
Solomonoff's theory of inductive inference
Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
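The full construction is uncomputable, but the flavor of "universal prior plus Bayes' rule" can be sketched with a toy hypothesis set; everything below (the hypotheses, their description lengths, and the observed data) is invented for illustration and is not Solomonoff's actual construction:

```python
# Toy sketch: each hypothesis has a "program length" in bits; the prior weights
# it by 2**(-length), and Bayes' rule combines that with how well it predicts
# the observed data.
hypotheses = {
    # name: (program_length_in_bits, probability_it_assigns_to_observed_data)
    "always-0":  (3, 0.0),        # cannot explain a sequence containing a 1
    "always-1":  (3, 1.0),
    "fair-coin": (8, 0.5 ** 4),
}
observed = "1111"   # hypothetical observed bit sequence

prior = {h: 2.0 ** (-length) for h, (length, _) in hypotheses.items()}
unnormalized = {h: prior[h] * hypotheses[h][1] for h in hypotheses}
evidence = sum(unnormalized.values())
posterior = {h: w / evidence for h, w in unnormalized.items()}
print(posterior)   # short programs that explain the data receive most of the mass
```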
Likelihood function
In statistical inference, the likelihood function quantifies the plausibility of parameter values characterizing a statistical model in light of observed data. Its most typical usage is to compare possible parameter values (under a fixed set of observations and a particular model), where higher values of likelihood are preferred because they correspond to parameter values that make the observed data more probable.
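A brief Python sketch of this comparison, using hypothetical binomial data (7 successes in 10 trials) and the log-likelihood for candidate success probabilities:

```python
import math

def binomial_log_likelihood(p, successes, trials):
    """Log-likelihood of success probability p given binomial data."""
    return successes * math.log(p) + (trials - successes) * math.log(1 - p)

# Compare candidate parameter values under the same fixed observations.
for p in (0.3, 0.5, 0.7):
    print(p, binomial_log_likelihood(p, successes=7, trials=10))
# p = 0.7 gives the highest likelihood, matching the observed proportion.
```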
Monte Carlo method
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
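As one standard illustration of the numerical-integration use case (our own example, not from the source), a Python sketch that estimates pi by repeated random sampling:

```python
import random

def monte_carlo_pi(n_samples=1_000_000, seed=0):
    """Estimate pi by sampling uniform points in the unit square and counting
    how many fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(monte_carlo_pi())   # approximately 3.14, improving as n_samples grows
```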
An Essay towards solving a Problem in the Doctrine of Chances
An Essay towards solving a Problem in the Doctrine of Chances is a work on the mathematical theory of probability by Thomas Bayes, published in 1763, two years after its author's death, and containing multiple amendments and additions due to his friend Richard Price. The title comes from the contemporary use of the phrase "doctrine of chances" to mean the theory of probability, which had been introduced via the title of a book by Abraham de Moivre.
Harold Jeffreys
Sir Harold Jeffreys, FRS (22 April 1891 – 18 March 1989) was a British geophysicist who made significant contributions to mathematics and statistics. His book, Theory of Probability, which was first published in 1939, played an important role in the revival of the objective Bayesian view of probability. Jeffreys was born in Fatfield, County Durham, England, the son of Robert Hal Jeffreys, headmaster of Fatfield Church School, and his wife, Elizabeth Mary Sharpe, a school teacher.
Probability interpretations
The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory. There are two broad categories of probability interpretations, which can be called "physical" and "evidential" probabilities.
Geometric distribution
In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions: the probability distribution of the number $X$ of Bernoulli trials needed to get one success, supported on the set $\{1, 2, 3, \dots\}$; or the probability distribution of the number $Y = X - 1$ of failures before the first success, supported on the set $\{0, 1, 2, \dots\}$. Which of these is called the geometric distribution is a matter of convention and convenience. These two different geometric distributions should not be confused with each other.
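A short Python sketch of the two conventions' probability mass functions (the function names are ours), showing that they give the same values shifted by one:

```python
def geometric_pmf_trials(k, p):
    """P(X = k): probability the first success occurs on trial k (k = 1, 2, 3, ...)."""
    return (1 - p) ** (k - 1) * p

def geometric_pmf_failures(k, p):
    """P(Y = k): probability of k failures before the first success (k = 0, 1, 2, ...)."""
    return (1 - p) ** k * p

p = 0.25
print([round(geometric_pmf_trials(k, p), 4) for k in range(1, 5)])
print([round(geometric_pmf_failures(k, p), 4) for k in range(0, 4)])
# Identical sequences: the second convention is simply the first shifted by one.
```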
Precision (statistics)
In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix, $P = \Sigma^{-1}$. For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, $p = \frac{1}{\sigma^2}$. Other summary statistics of statistical dispersion also called precision (or imprecision) include the reciprocal of the standard deviation, $\frac{1}{\sigma}$; the standard deviation itself and the relative standard deviation; as well as the standard error and the confidence interval (or its half-width, the margin of error).
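A minimal Python sketch of the relationship $P = \Sigma^{-1}$, using a hypothetical 2x2 covariance matrix and NumPy's matrix inverse:

```python
import numpy as np

# Hypothetical 2x2 covariance matrix; the precision matrix is its inverse.
covariance = np.array([[2.0, 0.6],
                       [0.6, 1.0]])
precision = np.linalg.inv(covariance)
print(precision)

# In the univariate case the precision reduces to 1 / variance.
variance = 2.0
print(1.0 / variance)
```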