Systematic sampling
In survey methodology, systematic sampling is a statistical method involving the selection of elements from an ordered sampling frame. The most common form of systematic sampling is an equiprobability method. In this approach, progression through the list is treated circularly, with a return to the top once the list ends. The sampling starts by selecting an element from the list at random, and then every kth element in the frame is selected, where k is the sampling interval (sometimes known as the skip). This is calculated as k = N/n, where n is the sample size and N is the population size.
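As a minimal illustration, the following Python sketch draws a circular systematic sample from a list-based frame; the function name systematic_sample, the integer skip k = N // n, and the 100-element example frame are assumptions made here for clarity, not part of the article above.

```python
import random

def systematic_sample(frame, n):
    """Draw an equiprobability systematic sample of size n from an ordered frame,
    treating the list circularly (wrapping back to the top when the end is passed)."""
    N = len(frame)
    k = N // n                     # sampling interval (the "skip"), roughly N / n
    start = random.randrange(N)    # random starting element anywhere in the frame
    return [frame[(start + i * k) % N] for i in range(n)]

population = list(range(1, 101))   # hypothetical frame of 100 units
print(systematic_sample(population, 10))
```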
Non-uniform random variate generation
Non-uniform random variate generation or pseudo-random number sampling is the numerical practice of generating pseudo-random numbers (PRN) that follow a given probability distribution. Methods are typically based on the availability of a uniformly distributed PRN generator. Computational algorithms are then used to manipulate a single random variate, X, or often several such variates, into a new random variate Y such that these values have the required distribution.
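One widely known example of such a transformation is the Box–Muller method, which turns two independent uniform variates into a standard normal variate. The sketch below uses only the Python standard library; it is an illustrative choice of method, not the only one.

```python
import math
import random

def box_muller():
    """Transform two independent Uniform(0,1) variates into one standard normal variate."""
    u1 = 1.0 - random.random()   # shift to (0, 1] so that log(u1) is always defined
    u2 = random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

print([round(box_muller(), 3) for _ in range(5)])
```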
Nyquist–Shannon sampling theorem
The Nyquist–Shannon sampling theorem is an essential principle for digital signal processing linking the frequency range of a signal and the sample rate required to avoid a type of distortion called aliasing. The theorem states that the sample rate must be at least twice the bandwidth of the signal to avoid aliasing distortion. In practice, it is used to select band-limiting filters to keep aliasing distortion below an acceptable amount when an analog signal is sampled or when sample rates are changed within a digital signal processing function.
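A small numerical illustration of aliasing (with assumed example values: a 6 Hz tone sampled at 8 Hz, below the 12 Hz Nyquist rate): the samples of the 6 Hz cosine are indistinguishable from those of a 2 Hz cosine.

```python
import math

f_signal = 6.0   # hypothetical 6 Hz tone
f_sample = 8.0   # sample rate below the Nyquist rate of 2 * 6 = 12 Hz, so aliasing occurs
f_alias = abs(f_sample - f_signal)   # the 6 Hz tone folds down to 2 Hz

for n in range(8):
    t = n / f_sample
    print(f"t={t:.3f}s  6 Hz sample={math.cos(2 * math.pi * f_signal * t):+.3f}  "
          f"2 Hz sample={math.cos(2 * math.pi * f_alias * t):+.3f}")
```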
Randomness
In common usage, randomness is the apparent or actual lack of definite pattern or predictability in information. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if the probability distribution is known, the frequency of different outcomes over repeated events (or "trials") is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4.
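The dice claim can be checked by enumerating all 36 equally likely outcomes, as in this short Python snippet:

```python
from collections import Counter

# Count how many of the 36 equally likely two-dice outcomes give each sum.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))
print(sums[7], "ways to roll a 7;", sums[4], "ways to roll a 4")   # 6 vs 3, i.e. twice as often
```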
Cox process
In probability theory, a Cox process, also known as a doubly stochastic Poisson process, is a point process that generalizes a Poisson process: the intensity, which varies across the underlying mathematical space (often space or time), is itself a stochastic process. The process is named after the statistician David Cox, who first published the model in 1955.
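A minimal simulation sketch (Python, standard library only) of the simplest doubly stochastic case, in which the random intensity is constant over the observation window and is drawn here from a gamma distribution chosen purely for illustration:

```python
import random

def cox_process_constant_intensity(T=1.0):
    """Simulate one realization of a simple Cox process on [0, T]: the intensity is
    itself random (gamma-distributed here) but constant in time, and conditional on
    that intensity the points form an ordinary homogeneous Poisson process."""
    lam = random.gammavariate(2.0, 3.0)      # hypothetical random intensity (events per unit time)
    points, t = [], 0.0
    while True:
        t += random.expovariate(lam)         # exponential inter-arrival times given lam
        if t > T:
            break
        points.append(t)
    return lam, points

lam, pts = cox_process_constant_intensity()
print(f"realized intensity {lam:.2f}, {len(pts)} points")
```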
Gibbs sampling
In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations which are approximated from a specified multivariate probability distribution, when direct sampling is difficult. This sequence can be used to approximate the joint distribution (e.g., to generate a histogram of the distribution); to approximate the marginal distribution of one of the variables, or some subset of the variables (for example, the unknown parameters or latent variables); or to compute an integral (such as the expected value of one of the variables).
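As an illustration, the following Python sketch runs a Gibbs sampler for a standard bivariate normal distribution with correlation rho, where each coordinate's conditional distribution is known exactly; the parameter values and burn-in length are assumptions made for the example.

```python
import random

def gibbs_bivariate_normal(rho=0.8, n_samples=10_000, burn_in=1_000):
    """Gibbs sampler for a standard bivariate normal with correlation rho:
    alternately draw each coordinate from its exact conditional distribution."""
    x, y = 0.0, 0.0
    cond_sd = (1.0 - rho * rho) ** 0.5
    samples = []
    for i in range(burn_in + n_samples):
        x = random.gauss(rho * y, cond_sd)   # x | y ~ N(rho * y, 1 - rho^2)
        y = random.gauss(rho * x, cond_sd)   # y | x ~ N(rho * x, 1 - rho^2)
        if i >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal()
mean_x = sum(x for x, _ in samples) / len(samples)        # approximates E[X] = 0
mean_xy = sum(x * y for x, y in samples) / len(samples)   # approximates E[XY] = rho
print(f"estimated E[X] = {mean_x:.3f}, estimated E[XY] = {mean_xy:.3f}")
```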
Sampling error
In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. This can produce biased results. Since the sample does not include all members of the population, statistics of the sample (often known as estimators), such as means and quartiles, generally differ from the statistics of the entire population (known as parameters). The difference between the sample statistic and the population parameter is considered the sampling error.
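A small Python demonstration of the idea, using an entirely synthetic population (the normal parameters, population size, and sample size below are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(0)
population = [random.gauss(50, 10) for _ in range(100_000)]   # synthetic population
population_mean = statistics.mean(population)                  # the parameter

sample = random.sample(population, 100)                        # a simple random sample
sample_mean = statistics.mean(sample)                          # the estimator

print(f"parameter = {population_mean:.2f}, estimate = {sample_mean:.2f}, "
      f"sampling error = {sample_mean - population_mean:+.2f}")
```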
Random variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. The term 'random variable' can be misleading, as it is not actually random nor a variable, but rather a function from possible outcomes (e.g., the possible upper sides of a flipped coin, such as heads and tails) in a sample space (e.g., the set {heads, tails}) to a measurable space (e.g., {−1, 1}, in which 1 corresponds to heads and −1 corresponds to tails), often to the real numbers.
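The coin example can be written directly as a function (here, a Python dictionary) from outcomes to numbers; the specific values ±1 follow the example above.

```python
import random

sample_space = ["heads", "tails"]     # possible outcomes of the flip
X = {"heads": 1, "tails": -1}         # the random variable: a function from outcomes to {-1, 1}

outcome = random.choice(sample_space)  # a random event
print(outcome, "->", X[outcome])       # the realized value of the random variable
```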
Sample and hold
In electronics, a sample and hold (also known as sample and follow) circuit is an analog device that samples (captures, takes) the voltage of a continuously varying analog signal and holds (locks, freezes) its value at a constant level for a specified minimum period of time. Sample and hold circuits and related peak detectors are elementary analog memory devices. They are typically used in analog-to-digital converters to eliminate variations in the input signal that can corrupt the conversion process.
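The behaviour (not the circuitry) can be sketched in a few lines of Python: the output tracks the most recently captured input value and holds it constant between sampling instants. The 1 Hz sine input, 0.1 s sampling period, and step size are arbitrary example values.

```python
import math

def sample_and_hold(signal, t_end, sample_period, step=0.01):
    """Discrete-time sketch of a sample-and-hold stage: capture the input at each
    sampling instant and hold that value constant until the next instant."""
    held = signal(0.0)             # first captured value
    next_sample = sample_period    # time of the next sampling instant
    output, t = [], 0.0
    while t <= t_end:
        if t >= next_sample:
            held = signal(t)               # sample (capture) the input voltage
            next_sample += sample_period   # hold it for one sampling period
        output.append((t, held))
        t += step
    return output

for t, v in sample_and_hold(lambda t: math.sin(2 * math.pi * t), 0.5, 0.1)[::5]:
    print(f"t={t:.2f}s  held={v:+.3f}")
```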
Inverse transform sampling
Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, Smirnov transform, or the golden rule) is a basic method for pseudo-random number sampling, i.e., for generating sample numbers at random from any probability distribution given its cumulative distribution function. Inverse transform sampling takes a uniform sample u of a number between 0 and 1, interpreted as a probability, and then returns the smallest number x such that F(x) ≥ u, where F is the cumulative distribution function of the random variable.
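For a distribution whose cumulative distribution function can be inverted in closed form, the method is essentially a one-liner. The sketch below samples from an exponential distribution; the rate value and sample count are illustrative choices.

```python
import math
import random

def sample_exponential(rate):
    """Inverse transform sampling for the exponential distribution:
    draw u ~ Uniform(0, 1) and return x = -ln(1 - u) / rate, the smallest x
    with F(x) = 1 - exp(-rate * x) >= u."""
    u = random.random()                  # uniform sample interpreted as a probability
    return -math.log(1.0 - u) / rate     # inverse of the exponential CDF

samples = [sample_exponential(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))       # should be close to the mean 1 / rate = 0.5
```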