
Publication

# Model misspecification in peaks over threshold analysis

Abstract

Classical peaks over threshold analysis is widely used for statistical modeling of sample extremes, and can be supplemented by a model for the sizes of clusters of exceedances. Under mild conditions a compound Poisson process model allows the estimation of the marginal distribution of threshold exceedances and of the mean cluster size, but requires the choice of a threshold and of a run parameter, K, that determines how exceedances are declustered. We extend a class of estimators of the reciprocal mean cluster size, known as the extremal index, establish consistency and asymptotic normality, and use the compound Poisson process to derive misspecification tests of model validity and of the choice of run parameter and threshold. Simulated examples and real data on temperatures and rainfall illustrate the ideas, both for estimating the extremal index in nonstandard situations and for assessing the validity of extremal models.
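The abstract's runs declustering and reciprocal-mean-cluster-size estimator can be sketched in a few lines. This is an illustrative implementation under one common convention (a new cluster starts when the gap between consecutive exceedances exceeds the run parameter K), not the paper's own estimator; the threshold choice and K value below are arbitrary:

```python
import numpy as np

def decluster_runs(x, u, K):
    """Runs declustering: exceedances of u separated by at most K
    observations are grouped into the same cluster (one common
    convention; details vary across references)."""
    exc = np.flatnonzero(x > u)                 # indices of threshold exceedances
    if exc.size == 0:
        return []
    # start a new cluster whenever the gap to the previous exceedance exceeds K
    breaks = np.flatnonzero(np.diff(exc) > K) + 1
    return np.split(exc, breaks)

def extremal_index_runs(x, u, K):
    """Runs estimator of the extremal index: number of clusters divided by
    number of exceedances, i.e. the reciprocal of the mean cluster size."""
    clusters = decluster_runs(x, u, K)
    n_exc = sum(len(c) for c in clusters)
    return len(clusters) / n_exc if n_exc else float("nan")

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                 # iid data: true extremal index is 1
theta = extremal_index_runs(x, np.quantile(x, 0.95), K=5)
```

The estimate always lies in (0, 1]; for serially dependent data it falls below 1, reflecting clustering of extremes, and its sensitivity to the choices of u and K is exactly what the misspecification tests in the paper probe.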


Related concepts (35)


Validity (statistics)

Validity is the extent to which a concept, conclusion or measurement is well-founded and likely corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (for example, a test in education) is the degree to which the tool measures what it claims to measure. Validity is based on the strength of a collection of different types of evidence (e.g. face validity, construct validity).

Poisson point process

In probability, statistics and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. The Poisson point process is often called simply the Poisson process, but it is also called a Poisson random measure, Poisson random point field or Poisson point field.
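A homogeneous Poisson process on an interval can be simulated by drawing a Poisson-distributed point count and then placing that many points uniformly — a minimal sketch, with the rate and window length chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T = 2.0, 10.0                          # intensity (rate) and window length
n = rng.poisson(lam * T)                    # total count is Poisson with mean lam * T
points = np.sort(rng.uniform(0.0, T, n))    # given n, points are iid uniform on [0, T]
```

This uses the conditional-uniformity property of the homogeneous process: conditional on the number of points in a window, their locations are independent and uniformly distributed over it.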

Compound Poisson distribution

In probability theory, a compound Poisson distribution is the probability distribution of the sum of a number of independent identically distributed random variables, where the number of terms to be added is itself a Poisson-distributed variable. The result can be either a continuous or a discrete distribution. Suppose that N is a random variable whose distribution is a Poisson distribution with expected value λ, and that X_1, X_2, ... are identically distributed random variables that are mutually independent and also independent of N. The compound Poisson distribution is then the distribution of the sum S = X_1 + ... + X_N (with S = 0 when N = 0).
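A compound Poisson draw can be simulated directly from this definition; in the sketch below the summands X_i are taken to be unit exponentials purely for illustration, and the function name is ours:

```python
import numpy as np

def compound_poisson_sample(rng, lam, size):
    """Draw `size` samples of S = X_1 + ... + X_N, where N ~ Poisson(lam)
    and the X_i are iid Exponential(1), independent of N (S = 0 when N = 0)."""
    counts = rng.poisson(lam, size)
    return np.array([rng.exponential(1.0, n).sum() for n in counts])

rng = np.random.default_rng(2)
s = compound_poisson_sample(rng, lam=3.0, size=10_000)
# By Wald's identity, E[S] = lam * E[X], which is 3.0 for these choices
```

Averaging many draws should recover the mean lam * E[X] predicted by Wald's identity, which is a quick sanity check on the simulation.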

Related publications (55)

Kamiar Aminian, Frédéric Meyer, Grégoire Millet, Mathieu Pascal Falbriard

A spring-mass model is often used to describe human running, as it helps explain the concepts of elastic energy storage and restitution. The stiffness of the spring is a key parameter, and different methods have been developed to estimate both the vertica ...

Higher-order asymptotics provide accurate approximations for use in parametric statistical modelling. In this thesis, we investigate the use of higher-order approximations in two specific settings, with a particular emphasis on the tangent exponential model. Th ...

Rachid Guerraoui, Jovan Komatovic, Pierre Philippe Civit, Manuel José Ribeiro Vidigueira, Seth Gilbert

The Byzantine consensus problem involves n processes, out of which t < n could be faulty and behave arbitrarily. Three properties characterize consensus: (1) termination, requiring correct (nonfaulty) processes to eventually reach a decision, (2) agreement ...