
# Anthony Christopher Davison

Biography

Anthony Davison has published on a wide range of topics in statistical theory and methods, and on environmental, biological and financial applications. His main research interests are statistics of extremes, likelihood asymptotics, bootstrap and other resampling methods, and statistical modelling, currently with a particular focus on the first of these.

Statistics of extremes concerns rare events such as storms, high winds and tides, extreme pollution episodes, sporting records, and the like. The subject has a long history, but under the impact of engineering and environmental problems it has been an area of intense development over the past 20 years. Davison's PhD work was in this area, in a joint project between the Departments of Mathematics and Mechanical Engineering at Imperial College, with the aim of modelling potential high exposures to radioactivity due to releases from nuclear installations. The key tools, developed jointly with Richard Smith, were regression models for exceedances over high thresholds, which generalized earlier work by hydrologists and formed the basis of some important later developments. This has led to an ongoing interest in extremes, and in particular their application to environmental and financial data. A major current interest is the development of suitable methods for modelling rare spatio-temporal events, particularly but not only in the context of climate change.

Likelihood asymptotics too have undergone very substantial development since 1980. Key tools here have been saddlepoint and related approximations, which can give remarkably accurate approximate distribution and density functions even for very small sample sizes. These approximations can be used for wide classes of parametric models, but also for certain bootstrap and resampling problems.
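The peaks-over-threshold idea mentioned above can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not an example from Davison's work: the threshold choice and sample are hypothetical, and `scipy.stats.genpareto` is used for the generalised Pareto fit.

```python
# Peaks-over-threshold sketch: fit a generalised Pareto distribution (GPD)
# to exceedances over a high threshold and estimate a far-tail probability.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
data = rng.standard_exponential(10_000)   # synthetic "daily" observations

u = np.quantile(data, 0.95)               # high threshold (95th percentile)
exceedances = data[data > u] - u          # peaks over the threshold

# Fit the GPD to the exceedances, with the location fixed at zero
shape, loc, scale = genpareto.fit(exceedances, floc=0)

# Estimated probability of exceeding a level well above the threshold:
# P(X > u) times the fitted GPD survival function at (level - u)
level = u + 3.0
p_exceed = (exceedances.size / data.size) * genpareto.sf(level - u, shape, loc=0, scale=scale)
print(f"shape={shape:.3f}, scale={scale:.3f}, P(X > {level:.2f}) ~ {p_exceed:.4f}")
```

For exponential data the fitted shape should be near zero and the scale near one, so the example also serves as a sanity check on the fit.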
The literature on these methods can seem arcane, but they are potentially widely applicable, and Davison wrote a book, jointly with Nancy Reid and Alessandra Brazzale, intended to promote their use in applications.

Bootstrap methods are now used in many areas of application, where they can provide a researcher with accurate inferences tailor-made to the data available, rather than relying on large-sample or other approximations of doubtful validity. The key idea is to replace analytical calculations of biases, variances, confidence and prediction intervals, and other measures of uncertainty with computer simulation from a suitable statistical model. In a nonparametric situation this model consists of the data themselves, and the simulation simply involves resampling from the existing data, while in a parametric case it involves simulation from a suitable parametric model. There is a wide range of possibilities between these extremes, and the book by Davison and Hinkley explores these for many data examples, with the aim of showing how and when resampling methods succeed and why they can fail.

He was Editor of Biometrika (2008-2017), Joint Editor of the Journal of the Royal Statistical Society, Series B (2000-2003), editor of the IMS Lecture Notes Monograph Series (2007), Associate Editor of Biometrika (1987-1999), and Associate Editor of the Brazilian Journal of Probability and Statistics (1987-2006). He is currently on the editorial board of the Annual Review of Statistics and Its Application. He has served on committees of the Royal Statistical Society and of the Institute of Mathematical Statistics. He is an elected Fellow of the American Statistical Association and of the Institute of Mathematical Statistics, an elected member of the International Statistical Institute, and a Chartered Statistician.
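The nonparametric bootstrap described above can be sketched briefly: resample the observed data with replacement, recompute the statistic on each resample, and read off its variability. This is an illustrative example on synthetic data, not one taken from the Davison and Hinkley book.

```python
# Nonparametric bootstrap sketch: approximate the sampling variability of
# a statistic (here, the sample mean) by resampling the data themselves.
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_exponential(200)      # observed sample

B = 2000                                  # number of bootstrap replicates
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(B)
])

# Bootstrap standard error and a simple 95% percentile confidence interval
se = boot_means.std(ddof=1)
lo_ci, hi_ci = np.quantile(boot_means, [0.025, 0.975])
print(f"mean={data.mean():.3f}, bootstrap SE={se:.3f}, "
      f"95% CI=({lo_ci:.3f}, {hi_ci:.3f})")
```

A parametric bootstrap would differ only in the resampling step: instead of `rng.choice(data, ...)`, one would simulate from a model fitted to the data.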
In 2009 he was awarded a laurea honoris causa in Statistical Science by the University of Padova, in 2011 he held a Francqui Chair at Hasselt University, and in 2012 he was Mitchell Lecturer at the University of Glasgow. In 2015 he received the Guy Medal in Silver of the Royal Statistical Society and in 2018 was a Medallion Lecturer of the Institute of Mathematical Statistics.

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Courses taught by this person (5)

MATH-408: Regression methods

General graduate course on regression methods

MATH-413: Statistics for data science

Statistics lies at the foundation of data science, providing a unifying theoretical and methodological backbone for the diverse tasks encountered in this emerging field. This course rigorously develops the key notions and methods of statistics, with an emphasis on concepts rather than techniques.

MATH-562: Statistical inference

Inference from the particular to the general based on probability models is central to the statistical method. This course gives a graduate-level account of the main ideas of statistical inference.

Related research domains (71)

In common usage and statistics, data (US: /ˈdætə/; UK: /ˈdeɪtə/) is a collection of discrete or continuous values that convey information, describing the quantity, quality, fact, statistics, other basic uni

A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model re

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) no

Related publications (139)


Related units (6)

People doing similar research (65)

Anthony Christopher Davison, Timmy Rong Tian Tse

Universal inference enables the construction of confidence intervals and tests without regularity conditions by splitting the data into two parts and appealing to Markov's inequality. Previous investigations have shown that the cost of this generality is a loss of power in regular settings for testing simple hypotheses. The present paper makes three contributions. We first clarify the reasons for the loss of power and use a simple illustrative example to investigate how the split proportion optimizing the power depends on the nominal size of the test. We then show that the presence of nuisance parameters can severely impact the power and suggest a simple asymptotic improvement. Finally, we show that combining many data splits can also sharply diminish power.

Valérie Chavez, Anthony Christopher Davison

Confounding variables are a recurrent challenge for causal discovery and inference. In many situations, complex causal mechanisms only manifest themselves in extreme events, or take simpler forms in the extremes. Stimulated by data on extreme river flows and precipitation, we introduce a new causal discovery methodology for heavy-tailed variables that allows the effect of a known potential confounder to be almost entirely removed when the variables have comparable tails, and also decreases it sufficiently to enable correct causal inference when the confounder has a heavier tail. We also introduce a new parametric estimator for the existing causal tail coefficient and a permutation test. Simulations show that the methods work well and the ideas are applied to the motivating dataset.

Anthony Christopher Davison, Raphaël Gérard Théodore Michel Marie de Deloÿe et Fourcade de Fondeville

Peaks-over-threshold analysis using the generalised Pareto distribution is widely applied in modelling tails of univariate random variables, but much information may be lost when complex extreme events are studied using univariate results. In this paper, we extend peaks-over-threshold analysis to extremes of functional data. Threshold exceedances defined using a functional r are modelled by the generalised r-Pareto process, a functional generalisation of the generalised Pareto distribution that covers the three classical regimes for the decay of tail probabilities, and that is the only possible continuous limit for r-exceedances of a properly rescaled process. We give construction rules, simulation algorithms and inference procedures for generalised r-Pareto processes, discuss model validation and apply the new methodology to extreme European windstorms and heavy spatial rainfall.