
Publication

From pointwise testing to a regional vision: An integrated statistical approach to detect nonstationarity in extreme daily rainfall. Application to the Sahelian region

Abstract

Global warming is expected to intensify the hydrologic cycle. Documenting whether significant changes in extreme precipitation regimes have already happened is consequently one of the challenging topics in climate research. The high natural variability of extreme precipitation often prevents significant results from being obtained when testing for changes in the empirical distribution of extreme rainfall at the regional scale. A regional integrated approach is proposed here as one possible answer to this complex methodological problem. Three methods are combined in order to detect regionally significant trends and/or breakpoints in series of annual maximum daily rainfall: (1) individual stationarity tests applied to the raw point series of maxima, (2) maximum likelihood testing of time-dependent generalized extreme value (GEV) distributions fitted to these series, and (3) heuristic testing of a regional time-dependent GEV distribution. This approach is applied to a set of 126 daily rain gauges covering the Sahel over the period 1950-1990. Only a few stations are found to be nonstationary when classical tests are applied to the raw series, while the two GEV-based models converge to show that the extreme rainfall series indeed underwent a negative breakpoint around 1970. The study demonstrates the limits of the widely used classical stationarity tests for detecting trends in noisy series affected by sampling uncertainties, whereas a parametric space- and time-dependent GEV efficiently reduces this effect. The other main result of this study is to show that the great Sahelian drought was accompanied by a significant decrease in extreme rainfall events.
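The breakpoint detection described in the abstract can be illustrated with a likelihood-ratio comparison between a stationary GEV and a GEV whose location parameter shifts at a candidate breakpoint. This is a minimal sketch only, not the paper's actual procedure: the data are synthetic, and the breakpoint year, parameters, and model form are assumptions for illustration.

```python
# Illustrative sketch (synthetic data, assumed parameters): comparing a
# stationary GEV fit against a GEV whose location parameter steps down at a
# candidate breakpoint (1970), via a likelihood-ratio test.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
years = np.arange(1950, 1991)
# Synthetic annual-maximum rainfall series: location drops after 1970
loc_true = np.where(years < 1970, 60.0, 50.0)
maxima = stats.genextreme.rvs(c=-0.1, loc=loc_true, scale=10.0, random_state=rng)

# Model 0: stationary GEV, fitted by maximum likelihood
c0, loc0, scale0 = stats.genextreme.fit(maxima)
ll0 = stats.genextreme.logpdf(maxima, c0, loc=loc0, scale=scale0).sum()

# Model 1: GEV with a step change in the location parameter at the breakpoint
def nll(params):
    c, mu1, mu2, log_scale = params
    mu = np.where(years < 1970, mu1, mu2)
    return -stats.genextreme.logpdf(maxima, c, loc=mu, scale=np.exp(log_scale)).sum()

res = optimize.minimize(nll, x0=[c0, loc0, loc0, np.log(scale0)], method="Nelder-Mead")
ll1 = -res.fun

# Likelihood-ratio test: one extra parameter -> chi-squared with 1 dof
lr = 2.0 * (ll1 - ll0)
p_value = stats.chi2.sf(lr, df=1)
print(f"LR statistic = {lr:.2f}, p = {p_value:.4f}")
```

Because the nonstationary model nests the stationary one, the likelihood ratio is nonnegative, and a small p-value favours the breakpoint model.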

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Neuronal Dynamics - Computational Neuroscience of Single Neurons

The activity of neurons in the brain and the code used by these neurons is described by mathematical neuron models at different levels of detail.


Selected Topics on Discrete Choice

Discrete choice models are used extensively in many disciplines where it is important to predict human behavior at a disaggregate level. This course is a follow up of the online course “Introduction t

The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power-law probability distribution used to describe social, quality-control, scientific, geophysical, actuarial, and many other types of observable phenomena. It was originally applied to the distribution of wealth in a society, fitting the observation that a large portion of wealth is held by a small fraction of the population.
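The "large portion held by a small fraction" behaviour can be sketched numerically. In this illustration (synthetic data, assumed tail index), a Pareto distribution with tail index near 1.16 yields roughly the classic 80/20 split:

```python
# Sketch (assumed parameters, synthetic data) of the "80/20" behaviour of a
# Pareto distribution: for tail index ~1.16, roughly 80% of the total is
# held by the top 20% of the sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
wealth = stats.pareto.rvs(b=1.16, size=100_000, random_state=rng)

wealth_sorted = np.sort(wealth)[::-1]            # richest first
top20 = wealth_sorted[: len(wealth_sorted) // 5]  # top 20% of the sample
share = top20.sum() / wealth_sorted.sum()
print(f"share held by top 20%: {share:.1%}")      # typically near 80%
```

Because the distribution is heavy-tailed, the sample share fluctuates noticeably from run to run, but it stays well above the 20% that a uniform split would give.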

In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.
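The step-function definition above can be written directly in code. This is a minimal sketch with made-up sample values: the eCDF jumps by 1/n at each data point and returns the fraction of observations less than or equal to its argument.

```python
# Minimal sketch of an empirical CDF as a step function: F_n(x) is the
# fraction of sample points <= x, jumping by 1/n at each data point.
import numpy as np

def ecdf(sample):
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    def F(x):
        # number of observations <= x, divided by n
        return np.searchsorted(xs, x, side="right") / n
    return F

F = ecdf([3.0, 1.0, 2.0, 2.0])
print(F(0.5), F(2.0), F(10.0))  # 0.0 0.75 1.0
```

Note the double jump of 2/4 at the tied value 2.0, which is why F(2.0) is 0.75 rather than 0.5.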

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
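As a concrete illustration of maximising a likelihood function, the following sketch (synthetic data, exponential model chosen for simplicity) numerically maximises the log-likelihood of an exponential rate parameter and recovers the closed-form MLE, 1/mean(x):

```python
# Sketch of maximum likelihood estimation for an exponential rate parameter:
# numerically maximising the log-likelihood recovers the closed-form MLE
# lambda_hat = 1 / mean(x). Data here are synthetic.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=5_000)   # true rate = 0.5

def neg_log_likelihood(lam):
    # log L(lam) = n * log(lam) - lam * sum(x), negated for minimisation
    return -(len(x) * np.log(lam) - lam * x.sum())

res = optimize.minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0),
                               method="bounded")
print(res.x, 1.0 / x.mean())   # numerical MLE vs closed form, both near 0.5
```

The agreement between the numerical optimum and 1/mean(x) is the point of the sketch; for most realistic models no closed form exists and the numerical route is the only one available.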

Higher-order asymptotics provide accurate approximations for use in parametric statistical modelling. In this thesis, we investigate using higher-order approximations in two specific settings, with a particular emphasis on the tangent exponential model. Th ...

Alexandre Massoud Alahi, Megh Hiren Shukla

Advances in computing have enabled widespread access to pose estimation, creating new sources of data streams. Unlike mock set-ups for data collection, tapping into these data streams through on-device active learning allows us to directly sample from the ...

This work extends the range of pathways for the production of metallic microcomponents by downscaling metal casting. This is accomplished by using either of two different molding techniques, namely femtosecond laser micromachining or lithographic silicon m ...