Erlang distribution: The Erlang distribution is a two-parameter family of continuous probability distributions with support x ∈ [0, ∞). The two parameters are a positive integer k, the "shape", and a positive real number λ, the "rate". The "scale", the reciprocal of the rate, is sometimes used instead. The Erlang distribution is the distribution of a sum of k independent exponential variables with mean 1/λ each. Equivalently, it is the distribution of the time until the kth event of a Poisson process with a rate of λ.
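As a minimal illustration of the sum-of-exponentials characterization (assuming NumPy and SciPy, which are not mentioned above), one can check by simulation that a sum of k independent exponential variables with mean 1/λ matches a gamma distribution with integer shape k:

```python
# Sketch: an Erlang(k, lam) variable as a sum of k independent
# exponential variables with mean 1/lam each (NumPy/SciPy assumed).
import numpy as np
from scipy import stats

k, lam = 3, 2.0          # shape (positive integer) and rate
rng = np.random.default_rng(0)

# Sum of k independent Exponential(rate=lam) draws, repeated many times.
samples = rng.exponential(scale=1.0 / lam, size=(100_000, k)).sum(axis=1)

# Compare with the Erlang distribution (a gamma with integer shape).
erlang = stats.gamma(a=k, scale=1.0 / lam)
print(samples.mean(), erlang.mean())   # both approximately k / lam = 1.5
print(samples.var(), erlang.var())     # both approximately k / lam**2 = 0.75
```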
Factor analysis: Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. For example, it is possible that variations in six observed variables mainly reflect the variations in two unobserved (underlying) variables. Factor analysis searches for such joint variations in response to unobserved latent variables.
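A hedged sketch of the six-variables/two-factors example above, assuming scikit-learn's FactorAnalysis and NumPy are available and using invented synthetic data:

```python
# Sketch: six observed variables generated from two latent factors,
# then modeled with scikit-learn's FactorAnalysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1_000
factors = rng.normal(size=(n, 2))        # two unobserved (latent) factors
loadings = rng.normal(size=(2, 6))       # how the factors drive the observables
noise = 0.3 * rng.normal(size=(n, 6))    # variable-specific noise
X = factors @ loadings + noise           # six observed, correlated variables

fa = FactorAnalysis(n_components=2)
fa.fit(X)
print(fa.components_.shape)              # (2, 6): estimated factor loadings
```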
Confidence distribution: In statistical inference, the concept of a confidence distribution (CD) has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it has typically been constructed by inverting the upper limits of lower-sided confidence intervals of all levels, and it was also commonly associated with a fiducial interpretation (fiducial distribution), although it is a purely frequentist concept.
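One commonly cited concrete case is the confidence distribution for a normal mean with known standard deviation; the sketch below (NumPy and SciPy assumed, simulated data) shows how its quantiles reproduce confidence limits of any level:

```python
# Sketch: for X_i ~ N(mu, sigma^2) with known sigma, the confidence
# distribution for mu can be taken as H(mu) = Phi(sqrt(n) * (mu - xbar) / sigma),
# i.e. a normal distribution on the parameter space centred at the sample mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, n = 2.0, 50
x = rng.normal(loc=1.0, scale=sigma, size=n)
xbar = x.mean()

cd = stats.norm(loc=xbar, scale=sigma / np.sqrt(n))   # CD as a distribution on mu

# A 95% confidence interval read off from the CD's quantiles:
print(cd.ppf(0.025), cd.ppf(0.975))
# A one-sided 95% upper confidence limit:
print(cd.ppf(0.95))
```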
Statistical hypothesis testing: A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters. While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth.
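A hedged sketch of the kind of test Arbuthnot's sex-ratio argument amounts to, a simple binomial (sign) test, might look as follows (SciPy's binomtest assumed available; the birth counts are invented for illustration):

```python
# Sketch: a simple binomial (sign) test in the spirit of an analysis of
# the human sex ratio at birth. The counts below are hypothetical.
from scipy.stats import binomtest

boys, girls = 5_230, 4_770          # invented birth counts
result = binomtest(boys, boys + girls, p=0.5, alternative="two-sided")

# A small p-value means the observed counts would be unlikely under the
# null hypothesis that a birth is equally likely to be a boy or a girl.
print(result.pvalue)
```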
Meta-analysis: A meta-analysis is a statistical analysis that combines the results of multiple scientific studies. Meta-analyses can be performed when there are multiple scientific studies addressing the same question, with each individual study reporting measurements that are expected to have some degree of error. The aim then is to use approaches from statistics to derive a pooled estimate closest to the unknown common truth based on how this error is perceived. It is thus a basic methodology of metascience.
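One common pooling approach, offered here only as an illustrative sketch, is a fixed-effect, inverse-variance-weighted estimate (NumPy assumed; the effect sizes and standard errors are invented):

```python
# Sketch: fixed-effect meta-analysis by inverse-variance weighting.
# Studies with smaller standard errors get larger weights.
import numpy as np

effects = np.array([0.30, 0.45, 0.20, 0.38])   # per-study effect estimates (invented)
se = np.array([0.10, 0.15, 0.08, 0.12])        # per-study standard errors (invented)

weights = 1.0 / se**2                          # precision weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(pooled, pooled_se)                       # pooled estimate and its standard error
```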
Binomial proportion confidence interval: In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments (Bernoulli trials). In other words, a binomial proportion confidence interval is an interval estimate of a success probability p when only the number of experiments n and the number of successes n_S are known. There are several formulas for a binomial confidence interval, but all of them rely on the assumption of a binomial distribution.
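Two of those formulas, the normal-approximation (Wald) interval and the Wilson score interval, are sketched below (NumPy and SciPy assumed; the counts are illustrative):

```python
# Sketch: Wald and Wilson 95% confidence intervals for a binomial
# proportion, given n_S successes in n trials.
import numpy as np
from scipy.stats import norm

n, n_S = 100, 37                     # trials and successes (illustrative)
p_hat = n_S / n
z = norm.ppf(0.975)                  # two-sided 95% critical value

# Wald interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - z * np.sqrt(p_hat * (1 - p_hat) / n),
        p_hat + z * np.sqrt(p_hat * (1 - p_hat) / n))

# Wilson score interval
centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
half = (z / (1 + z**2 / n)) * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
wilson = (centre - half, centre + half)

print(wald, wilson)
```

The Wald form is the simplest but behaves poorly for small n or extreme proportions, which is one reason several competing formulas exist.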
Independent and identically distributed random variables: In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent. This property is usually abbreviated as i.i.d., iid, or IID. IID was first defined in statistics and finds application in different fields such as data mining and signal processing. Statistics commonly deals with random samples. A random sample can be thought of as a set of objects that are chosen randomly.
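A minimal sketch of an i.i.d. random sample, assuming NumPy, draws repeatedly and independently from one fixed distribution:

```python
# Sketch: an i.i.d. sample is a set of draws from a single fixed
# distribution, made independently of one another.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=10_000)   # i.i.d. N(5, 2^2) draws

# Because the draws are i.i.d., the sample mean and standard deviation
# settle near the distribution's own mean and standard deviation.
print(sample.mean(), sample.std())
```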
Life expectancy: Life expectancy is a statistical measure of the estimated average length of a life. The most commonly used measure is life expectancy at birth (LEB), which can be defined in two ways. Cohort LEB is the mean length of life of a birth cohort (in this case, all individuals born in a given year) and can be computed only for cohorts born so long ago that all their members have died. Period LEB is the mean length of life of a hypothetical cohort assumed to be exposed, from birth through death, to the mortality rates observed in a given year.
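A highly simplified sketch of a period life-table calculation (NumPy assumed; the age-specific death probabilities are invented) accumulates expected person-years lived from birth:

```python
# Sketch: approximate period life expectancy at birth from invented
# one-year probabilities of dying at each age (q).
import numpy as np

ages = np.arange(0, 101)
# Invented, roughly increasing one-year death probabilities, capped at 1.
q = np.clip(0.001 + 0.0001 * np.exp(0.09 * ages), 0.0, 1.0)
q[-1] = 1.0                                   # close the table at the last age

# l_x: proportion of the hypothetical cohort surviving to each age.
survivors = np.concatenate(([1.0], np.cumprod(1.0 - q)[:-1]))
deaths = survivors * q                        # d_x: deaths within each age
person_years = survivors - 0.5 * deaths       # assume deaths occur mid-year

life_expectancy_at_birth = person_years.sum()
print(life_expectancy_at_birth)
```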
Statistical model: A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables.
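A minimal sketch of such a specification, assuming NumPy, is a simple linear-regression model: the assumed relationship and error distribution define the data-generating process, and simulating from it produces sample data:

```python
# Sketch: a linear-regression model as a statistical model. The assumptions
# (linear mean, i.i.d. Gaussian errors) specify the data-generating process.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.5, 0.5       # non-random parameters

x = rng.uniform(0, 10, size=200)          # covariate values, treated as non-random
epsilon = rng.normal(0, sigma, size=200)  # i.i.d. Gaussian errors
y = beta0 + beta1 * x + epsilon           # random response generated by the model

# Fitting the model recovers the assumed relationship approximately.
slope, intercept = np.polyfit(x, y, 1)
print(intercept, slope)
```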
Review article: A review article is an article that summarizes the current state of understanding on a topic within a certain discipline. A review article is generally considered a secondary source since it may analyze and discuss the method and conclusions in previously published studies. It resembles a survey article or, in news publishing, an overview article, which also surveys and summarizes previously published primary and secondary sources, instead of reporting new facts and results.