In probability and statistics, the quantile function of a random variable maps an input probability p to the threshold value at or below which the variable falls with probability p. Intuitively, it answers the question: below what value does a given fraction p of the probability mass lie? It is also called the percentile function (after the percentile), the percent-point function, or the inverse cumulative distribution function (after the cumulative distribution function).
With reference to a continuous and strictly monotonic cumulative distribution function F of a random variable X, the quantile function maps its input p to a threshold value x so that the probability of X being less than or equal to x is p. In terms of the distribution function F, the quantile function Q returns the value x such that

    F(x) = Pr(X ≤ x) = p,

which can be written as the inverse of the c.d.f.:

    Q(p) = F⁻¹(p).
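For a concrete illustration, here is a minimal sketch using the exponential distribution with rate λ, one of the few distributions whose c.d.f. can be inverted in closed form (the function names are for illustration only):

```python
import math

def exp_cdf(x, lam=1.0):
    """F(x) = 1 - exp(-lam * x) for x >= 0 (exponential distribution)."""
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def exp_quantile(p, lam=1.0):
    """Q(p) = F^{-1}(p) = -ln(1 - p) / lam, valid for 0 <= p < 1."""
    return -math.log(1.0 - p) / lam

# F is continuous and strictly increasing, so Q is a true inverse:
p = 0.5
x = exp_quantile(p)              # the median, ln(2)/lam
assert abs(exp_cdf(x) - p) < 1e-12
```
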
In the general case of distribution functions that are not strictly monotonic, and therefore do not permit an inverse c.d.f., the quantile is a (potentially) set-valued functional of a distribution function F, given by the interval

    Q(p) = [ inf{x : p ≤ F(x)}, sup{x : F(x) ≤ p} ].

It is often standard to choose the lowest value, which can equivalently be written as (using right-continuity of F)

    Q(p) = inf{x ∈ ℝ : p ≤ F(x)}.
Here we capture the fact that the quantile function returns the minimum value of x from amongst all those values whose c.d.f. value is at least p, which is equivalent to the previous probability statement in the special case that the distribution is continuous. Note that the infimum can be replaced by the minimum, since the distribution function is right-continuous and weakly monotonically increasing.
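The generalized inverse can be sketched directly for a discrete distribution, whose step-shaped c.d.f. has no ordinary inverse (a minimal illustration; the function and variable names are assumptions, not a standard API):

```python
def quantile(p, support, pmf):
    """Generalized inverse Q(p) = min{x : F(x) >= p} for a discrete
    distribution given by sorted support points and their probabilities."""
    cumulative = 0.0
    for x, prob in zip(support, pmf):
        cumulative += prob
        if cumulative >= p - 1e-12:   # small tolerance for float rounding
            return x
    return support[-1]

# Fair six-sided die: F jumps in steps of 1/6, so it is not invertible,
# but the generalized inverse is well defined for every p in (0, 1].
faces = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6
print(quantile(0.5, faces, probs))   # -> 3  (smallest x with F(x) >= 0.5)
print(quantile(0.95, faces, probs))  # -> 6
```

Because F is right-continuous, the infimum is always attained at one of the support points, which is why the loop can simply return the first point whose cumulative probability reaches p.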
The quantile function is the unique function satisfying the Galois inequalities

    Q(p) ≤ x if and only if p ≤ F(x).

If the function F is continuous and strictly monotonically increasing, then the inequalities can be replaced by equalities, and we have

    Q = F⁻¹.
In general, even though the distribution function F may fail to possess a left or right inverse, the quantile function Q behaves as an "almost sure left inverse" for the distribution function, in the sense that

    Q(F(X)) = X almost surely.
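For a continuous, strictly increasing F this left-inverse property holds pointwise, and can be checked numerically. The sketch below uses the exponential distribution again and, as a side effect, illustrates inverse-transform sampling, i.e. generating X as Q(U) with U uniform on (0, 1):

```python
import math
import random

lam = 2.0
F = lambda x: 1.0 - math.exp(-lam * x)   # exponential c.d.f.
Q = lambda p: -math.log(1.0 - p) / lam   # its quantile function

random.seed(0)
for _ in range(1000):
    # Inverse-transform sampling: X = Q(U) with U ~ Uniform(0, 1)
    x = Q(random.random())
    # Q acts as a left inverse of F: Q(F(X)) recovers X (up to rounding)
    assert abs(Q(F(x)) - x) <= 1e-9 * max(1.0, x)
```
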