
# Generalized extreme value distribution

## Summary

In probability theory and statistics, the generalized extreme value (GEV) distribution is a family of continuous probability distributions developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families, also known as the type I, II and III extreme value distributions. By the extreme value theorem, the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables. Note that such a limit distribution need not exist: its existence requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables.
In some fields of application the generalized extreme value distribution is known as the Fisher–Tippett distribution, named after Ronald Fisher and L. H. C. Tippett, who recognised the three different forms outlined below. However, usage of this name is sometimes restricted to the special case of the Gumbel distribution. The common functional form for all three distributions dates back to at least Jenkinson (1955), though it may also have been given earlier by von Mises (1936).
Using the standardized variable $s = (x - \mu)/\sigma$, where the location parameter $\mu$ can be any real number and $\sigma > 0$ is the scale parameter, the cumulative distribution function of the GEV distribution is then

$$F(s;\xi) = \begin{cases} \exp(-e^{-s}) & \text{for } \xi = 0, \\[4pt] \exp\!\left(-(1+\xi s)^{-1/\xi}\right) & \text{for } \xi \neq 0 \text{ and } \xi s > -1, \end{cases}$$

where the shape parameter $\xi$ can be any real number. Thus, for $\xi > 0$ the expression is valid for $s > -1/\xi$, while for $\xi < 0$ it is valid for $s < -1/\xi$. In the first case, $-1/\xi$ is the negative, lower end-point, where $F$ is 0; in the second case, $-1/\xi$ is the positive, upper end-point, where $F$ is 1. For $\xi = 0$ the second expression is formally undefined and is replaced with the first expression, which is the result of taking the limit of the second as $\xi \to 0$, in which case $s$ can be any real number.

In the special case of $x = \mu$, we have $s = 0$ and $F(0;\xi) = \exp(-1) \approx 0.368$, whatever values $\xi$ and $\sigma$ might have.

The probability density function of the standardized distribution is

$$f(s;\xi) = \begin{cases} e^{-s}\exp(-e^{-s}) & \text{for } \xi = 0, \\[4pt] (1+\xi s)^{-1/\xi - 1}\exp\!\left(-(1+\xi s)^{-1/\xi}\right) & \text{for } \xi \neq 0 \text{ and } \xi s > -1, \end{cases}$$

again valid for $s > -1/\xi$ in the case $\xi > 0$, and for $s < -1/\xi$ in the case $\xi < 0$. The density is zero outside of the relevant range.
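As a minimal sketch, the two-case formulas above can be evaluated directly in Python. The function names `gev_cdf` and `gev_pdf` are illustrative, not from any particular library:

```python
import math

def gev_cdf(s, xi):
    """CDF of the standardized GEV distribution.

    s  : standardized variable, s = (x - mu) / sigma
    xi : shape parameter
    """
    if xi == 0.0:                      # Gumbel limit: F(s) = exp(-exp(-s))
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:                       # outside the support
        return 0.0 if xi > 0 else 1.0  # below lower / above upper end-point
    return math.exp(-t ** (-1.0 / xi))

def gev_pdf(s, xi):
    """Density of the standardized GEV distribution; zero outside the support."""
    if xi == 0.0:
        return math.exp(-s) * math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:
        return 0.0
    return t ** (-1.0 / xi - 1.0) * math.exp(-t ** (-1.0 / xi))

# At s = 0 the CDF equals exp(-1) ~ 0.368 for every shape parameter,
# and a tiny xi reproduces the xi = 0 limit numerically.
print(gev_cdf(0.0, 0.5), gev_cdf(0.0, 0.0), gev_cdf(0.0, -0.5))
print(abs(gev_cdf(1.3, 1e-9) - gev_cdf(1.3, 0.0)))
```

Note that SciPy's `genextreme` uses the opposite sign convention for the shape parameter, so code mixing the two should convert accordingly.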



## Related concepts

Gumbel distribution

In probability theory and statistics, the Gumbel distribution (also known as the type-I generalized extreme value distribution) is used to model the distribution of the maximum (or the minimum) of a number of samples of various distributions. This distribution might be used to represent the distribution of the maximum level of a river in a particular year if there were a list of maximum values for the past ten years. It is useful in predicting the chance that an extreme earthquake, flood or other natural disaster will occur.
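A small sketch of this maxima behaviour, using the classical textbook example that the maximum of $n$ iid standard-exponential draws is approximately Gumbel with location $\log n$ and scale 1 (the helper `gumbel_cdf` is illustrative):

```python
import math
import random

random.seed(42)

def gumbel_cdf(x, mu=0.0, beta=1.0):
    """Gumbel (type-I extreme value) CDF: exp(-exp(-(x - mu)/beta))."""
    return math.exp(-math.exp(-(x - mu) / beta))

# Simulate maxima of n iid Exp(1) draws; they are approximately
# Gumbel-distributed with location log(n) and scale 1.
n, trials = 1000, 1000
maxima = [max(random.expovariate(1.0) for _ in range(n)) for _ in range(trials)]

# Empirical frequency of maxima below log(n) should be close to
# gumbel_cdf(log(n), mu=log(n)) = exp(-1) ~ 0.368.
freq = sum(m <= math.log(n) for m in maxima) / trials
print(round(freq, 3), round(gumbel_cdf(math.log(n), mu=math.log(n)), 3))
```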

Cumulative frequency analysis

Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The phenomenon may be time- or space-dependent. Cumulative frequency is also called frequency of non-exceedance. Cumulative frequency analysis is performed to obtain insight into how often a certain phenomenon (feature) is below a certain value. This may help in describing or explaining a situation in which the phenomenon is involved, or in planning interventions, for example in flood protection.
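The frequency of non-exceedance described above reduces to a one-line computation; the discharge values below are hypothetical, chosen purely for illustration:

```python
def cumulative_frequency(values, reference):
    """Frequency of non-exceedance: fraction of observations <= reference."""
    return sum(v <= reference for v in values) / len(values)

# Hypothetical annual peak river discharges (m^3/s).
peaks = [120, 95, 180, 210, 140, 160, 99, 175, 130, 205]

# 5 of the 10 peaks are at or below 150 m^3/s.
print(cumulative_frequency(peaks, 150))
```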

Shape parameter

In probability theory and statistics, a shape parameter (also known as form parameter) is a kind of numerical parameter of a parametric family of probability distributions that is neither a location parameter nor a scale parameter (nor a function of these, such as a rate parameter). Such a parameter must affect the shape of a distribution rather than simply shifting it (as a location parameter does) or stretching/shrinking it (as a scale parameter does). For example, "peakedness" refers to how round the main peak is.
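A sketch of the distinction, using the Weibull distribution as an example (its $k$ is a shape parameter and $\lambda$ a scale parameter); the density is written out from the standard formula rather than taken from a library:

```python
import math

def weibull_pdf(x, k, lam=1.0):
    """Weibull density: k is a shape parameter, lam a scale parameter."""
    if x < 0:
        return 0.0
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

# A scale parameter only stretches the axis:
# f(x; k, lam) = f(x/lam; k, 1) / lam.
assert abs(weibull_pdf(3.0, 1.5, 2.0) - weibull_pdf(1.5, 1.5, 1.0) / 2.0) < 1e-12

# A shape parameter changes the curve itself: for k < 1 the density is
# monotone decreasing, while for k > 1 it has an interior peak.
print(weibull_pdf(0.01, 0.9) > weibull_pdf(0.5, 0.9))  # decreasing near 0
print(weibull_pdf(1.0, 3.0) > weibull_pdf(0.01, 3.0))  # peaked away from 0
```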

## Related courses

ENV-524: Hydrological risks and structures

This course is an introduction to extreme value theory and its use for the management of hydrological risks (mainly floods). A broader perspective on the management of hazards ...

FIN-417: Quantitative risk management

This course is an introduction to quantitative risk management that covers standard statistical methods, multivariate risk factor models, non-linear dependence structures (copula models), as well as p ...

## Related lectures

Extreme Value Theory: Maximum Distribution (MATH-447: Risk, rare events and extremes)

Explores extreme value theory, focusing on maximum distribution and different types of distributions based on shape parameters.

Extreme Value Theory: Dependence Effects (MATH-447: Risk, rare events and extremes)

Explores the extremal index and its impact on extreme events in stationary processes, along with the D'(u) condition for short-range dependence modeling.

Extreme Value Theory: Maxima and Exceedances (MATH-447: Risk, rare events and extremes)

Delves into Extreme Value Theory, discussing maxima, exceedances, and their statistical implications.

## Related publications

Jean-François Molinari, Sacha Zenon Wattel

Atomistic simulations performed with a family of model potentials with tunable hardness have proven to be a great tool for advancing the understanding of wear processes at the asperity level. They have been instrumental in finding a critical length scale, w ...

2024

Catalina Paz Alvarez Inostroza

Keeping track of Internet latency is a classic measurement problem. Open measurement platforms like RIPE Atlas are a great solution, but they also face challenges: preventing network overload that may result from uncontrolled active measurements, and maint ...

2023

Jean-Pierre Hubaux, Juan Ramón Troncoso-Pastoriza, Jean-Philippe Léonard Bossuat, Apostolos Pyrgelis, David Jules Froelicher, Joao André Gomes de Sá e Sousa

Principal component analysis (PCA) is an essential algorithm for dimensionality reduction in many data science domains. We address the problem of performing a federated PCA on private data distributed among multiple data providers while ensuring data confi ...