Independent and identically distributed random variables
In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent. This property is usually abbreviated as i.i.d., iid, or IID. The concept was first defined in statistics and finds application in fields such as data mining and signal processing. Statistics commonly deals with random samples; a random sample can be thought of as a set of objects that are chosen randomly.
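As a minimal sketch of the concept (using NumPy; the distribution and seed are arbitrary choices, not from the original text), a single call to a random generator yields draws that are i.i.d. by construction:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# An i.i.d. sample: every draw comes from the same distribution
# (here the standard normal) and the draws are mutually independent.
sample = rng.normal(loc=0.0, scale=1.0, size=1_000)

print(sample.mean(), sample.std())  # both close to the true 0 and 1
```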
Binomial distribution
In probability theory and statistics, the binomial distribution with parameters $n$ and $p$ is the discrete probability distribution of the number of successes in a sequence of $n$ independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability $p$) or failure (with probability $q = 1 - p$). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e. $n = 1$, the binomial distribution is a Bernoulli distribution.
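A small sketch of the definition (assuming NumPy; the values $n = 10$ and $p = 0.3$ are arbitrary): the pmf follows the standard formula $\binom{n}{k} p^k (1-p)^{n-k}$, and a binomial draw can be simulated as the number of successes in $n$ Bernoulli trials:

```python
import numpy as np
from math import comb

n, p = 10, 0.3

# Binomial pmf: P(X = k) = C(n, k) * p**k * (1 - p)**(n - k)
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
print(sum(pmf))  # the probabilities sum to 1

# Equivalently, simulate n Bernoulli trials and count the successes.
rng = np.random.default_rng(seed=0)
successes = int((rng.random(n) < p).sum())
print(successes)
```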
Cauchy distribution
The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution is the distribution of the x-intercept of a ray issuing from a fixed point above the x-axis with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero.
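Both characterizations above can be checked numerically (a sketch assuming NumPy; the sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Construction 1: tangent of a uniformly distributed angle
# (the x-intercept of a random ray from a point at height 1).
theta = rng.uniform(-np.pi / 2, np.pi / 2, size=100_000)
cauchy_ray = np.tan(theta)

# Construction 2: ratio of two independent standard normals.
cauchy_ratio = rng.normal(size=100_000) / rng.normal(size=100_000)

# The mean of a Cauchy distribution does not exist, so compare medians.
print(np.median(cauchy_ray), np.median(cauchy_ratio))  # both near 0
```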
Uncorrelatedness (probability theory)
In probability theory and statistics, two real-valued random variables, $X$ and $Y$, are said to be uncorrelated if their covariance, $\operatorname{cov}[X, Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]$, is zero. If two variables are uncorrelated, there is no linear relationship between them. Uncorrelated random variables have a Pearson correlation coefficient of zero, except in the trivial case when either variable has zero variance (is a constant), in which case the correlation is undefined.
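Uncorrelated does not imply independent: a standard counterexample (sketched here with NumPy; the choice $Y = X^2$ is illustrative) has zero covariance despite a perfect functional dependence:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# X is symmetric about 0 and Y = X**2, so cov(X, Y) = E[X**3] = 0
# even though Y is completely determined by X.
x = rng.normal(size=100_000)
y = x**2

cov = np.mean(x * y) - np.mean(x) * np.mean(y)
corr = cov / (x.std() * y.std())
print(cov, corr)  # both near 0: uncorrelated, yet clearly dependent
```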
Lévy's continuity theorem
In probability theory, Lévy's continuity theorem, or Lévy's convergence theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of a sequence of random variables with pointwise convergence of their characteristic functions. This theorem is the basis for one approach to prove the central limit theorem and is one of the major theorems concerning characteristic functions. Suppose we have a sequence of random variables $(X_n)_{n=1}^{\infty}$ with characteristic functions $\varphi_n(t) = \operatorname{E}\!\left[e^{itX_n}\right]$. If the sequence of characteristic functions converges pointwise to some function $\varphi$, then the following statements become equivalent: $X_n$ converges in distribution to some random variable $X$; the sequence $(X_n)$ is tight; $\varphi$ is the characteristic function of some random variable $X$; $\varphi$ is continuous at $t = 0$. Rigorous proofs of this theorem are available.
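As a numerical illustration of the route to the central limit theorem (a sketch assuming NumPy; the fair ±1 steps are an arbitrary choice): the standardized sum of $n$ such steps has characteristic function $\cos(t/\sqrt{n})^n$, which converges pointwise to $e^{-t^2/2}$, the characteristic function of the standard normal distribution:

```python
import numpy as np

# Characteristic function of S_n / sqrt(n) for i.i.d. +/-1 steps is
# cos(t / sqrt(n))**n; Levy's theorem then gives convergence in
# distribution to N(0, 1), whose characteristic function is exp(-t**2/2).
t = np.linspace(-3.0, 3.0, 13)
for n in (10, 100, 1000):
    phi_n = np.cos(t / np.sqrt(n)) ** n
    gap = np.max(np.abs(phi_n - np.exp(-(t**2) / 2)))
    print(n, gap)  # the gap shrinks as n grows
```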
Pafnuty Chebyshev
Pafnuty Lvovich Chebyshev (Пафну́тий Льво́вич Чебышёв; 1821–1894) was a Russian mathematician who is considered the founding father of Russian mathematics. Chebyshev is known for his fundamental contributions to the fields of probability, statistics, mechanics, and number theory. A number of important mathematical concepts are named after him, including the Chebyshev inequality (which can be used to prove the weak law of large numbers), the Bertrand–Chebyshev theorem, Chebyshev polynomials, the Chebyshev linkage, and the Chebyshev bias.
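The Chebyshev inequality mentioned above, $P(|X - \mu| \ge k\sigma) \le 1/k^2$, is easy to check empirically (a sketch assuming NumPy; the exponential distribution, for which $\mu = \sigma = 1$, is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Exponential(1) has mean 1 and standard deviation 1.
x = rng.exponential(scale=1.0, size=1_000_000)
for k in (2, 3, 4):
    tail = np.mean(np.abs(x - 1.0) >= k)  # empirical P(|X - mu| >= k*sigma)
    print(k, tail, 1 / k**2)              # the tail stays below 1/k**2
```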
Random variate
In probability and statistics, a random variate or simply variate is a particular outcome of a random variable; other random variates generated from the same random variable will in general have different values (random numbers). A random deviate or simply deviate is the difference of a random variate from the distribution's central location (e.g., its mean), often divided by the standard deviation of the distribution (i.e., expressed as a standard score). Random variates are used when simulating processes driven by random influences (stochastic processes).
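A minimal sketch of the two terms (assuming NumPy; the normal distribution and its parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

mu, sigma = 10.0, 2.0

# Variates: particular outcomes drawn from the distribution.
variates = rng.normal(loc=mu, scale=sigma, size=5)

# Deviates as standard scores: center on the mean, scale by the sd.
deviates = (variates - mu) / sigma
print(variates)
print(deviates)
```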
Law of the iterated logarithm
In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Ya. Khinchin (1924); another statement was given by A. N. Kolmogorov in 1929. Let $\{Y_n\}$ be independent, identically distributed random variables with mean zero and unit variance, and let $S_n = Y_1 + \cdots + Y_n$. Then
$$\limsup_{n \to \infty} \frac{S_n}{\sqrt{2n \log \log n}} = 1 \quad \text{a.s.},$$
where "log" is the natural logarithm, "lim sup" denotes the limit superior, and "a.s." stands for "almost surely".
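A simulation sketch of the statement (assuming NumPy; ±1 steps give mean zero and unit variance, and the seed and walk length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A +/-1 random walk compared with the envelope sqrt(2 n log log n).
N = 1_000_000
steps = rng.choice([-1, 1], size=N)
n = np.arange(3, N + 1)            # start at n = 3 so log log n > 0
s = np.cumsum(steps)[2:]           # S_3, S_4, ..., S_N
envelope = np.sqrt(2 * n * np.log(np.log(n)))
print(np.max(np.abs(s) / envelope))  # the ratio stays of order 1
```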
Abraham de Moivre
Abraham de Moivre FRS (French: [abʁaam də mwavʁ]; 26 May 1667 – 27 November 1754) was a French mathematician known for de Moivre's formula, a formula that links complex numbers and trigonometry, and for his work on the normal distribution and probability theory. He moved to England at a young age due to the religious persecution of Huguenots in France, which reached a climax in 1685 with the Edict of Fontainebleau. He was a friend of Isaac Newton, Edmond Halley, and James Stirling.
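De Moivre's formula itself, $(\cos\theta + i\sin\theta)^n = \cos n\theta + i\sin n\theta$, can be verified directly (a sketch; the values of $\theta$ and $n$ are arbitrary):

```python
import math

theta, n = 0.7, 5
lhs = complex(math.cos(theta), math.sin(theta)) ** n
rhs = complex(math.cos(n * theta), math.sin(n * theta))
print(abs(lhs - rhs))  # ~0, up to floating-point rounding
```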
Empirical distribution function
In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by $1/n$ at each of the $n$ data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.
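The definition translates almost directly into code (a sketch; the helper name ecdf and the sample data are illustrative):

```python
import numpy as np

def ecdf(sample, x):
    """Fraction of observations less than or equal to x."""
    return np.mean(np.asarray(sample) <= x)

data = [3.1, 1.4, 2.7, 2.7, 5.0]
print(ecdf(data, 2.7))  # 0.6: three of the five observations are <= 2.7
```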