
Publication

# Robust Parameter Estimation for the Ornstein–Uhlenbeck Process

Abstract

In this thesis, we treat robust estimation of the parameters of the Ornstein–Uhlenbeck process: the mean, the variance, and the friction. We start by considering classical maximum likelihood estimation. For the simulation study, in which we also investigate the choice of the time lag, we use the method of moments (MoM) estimator as initial estimator for the friction parameter of the maximum likelihood estimator (MLE). However, in several respects the MLE is not robust. For robustification, we first derive elementary M-estimates by extending the method of M-estimation from Huber (1981). We use an intuitively robustified MoM estimate as initial estimate and compare, by means of simulation, the M-estimate with the MLE. This approach is, however, only ad hoc, since Huber's minimum Fisher information and minimax asymptotic variance theory remains incomplete for simultaneous location and scale estimation and does not cover more general models (such as the Ornstein–Uhlenbeck process). A more general robustness concept, due to Kohl et al. (2010), Rieder (1994), and Staab (1984), is based on local asymptotic normality (LAN), asymptotically linear (AL) estimates, and shrinking neighborhoods. We then apply this concept to the Ornstein–Uhlenbeck process. As a measure of robustness, we consider the maximum asymptotic mean square error (maxasyMSE), which is determined by the influence curve (IC) of AL estimates. The IC represents the standardized influence of an individual observation on the estimator, given the past. For two kinds of neighborhoods (average and average-square neighborhoods) we obtain optimally robust ICs. In the case of average neighborhoods, their graph exhibits surprising, redescending behavior. For average-square neighborhoods the graph lies between those of the elementary M-estimates and of the MLE. Finally, we discuss the estimator construction, that is, the problem of constructing an estimator from the family of optimal ICs.
We carry out in our context the one-step construction dating back to Le Cam, using both an intuitively robustified MoM estimate and the elementary M-estimate as initial estimate. This yields optimally robust AL estimates (for average and average-square neighborhoods). By means of simulation we then compare the different estimators: the MLE, the elementary M-estimates, and the optimally robust AL estimates. In addition, we give an application to electricity prices.
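The first stage of the pipeline described above — simulate the process, then obtain method-of-moments estimates of the mean, the stationary variance, and the friction as starting values for likelihood-based estimation — can be sketched as follows. This is a minimal illustration, not the thesis code: the function names, the time step, and the friction estimate via the lag-one autocorrelation (using that the sampled chain is an AR(1) with coefficient `exp(-theta*dt)`) are our own choices.

```python
import numpy as np

def simulate_ou(mu, theta, sigma, dt, n, rng=None):
    """Simulate dX_t = theta*(mu - X_t) dt + sigma dW_t on a grid with
    step dt, using the exact Gaussian transition density."""
    rng = np.random.default_rng(0) if rng is None else rng
    a = np.exp(-theta * dt)            # AR(1) coefficient of the sampled chain
    v = sigma**2 / (2.0 * theta)       # stationary variance
    x = np.empty(n)
    x[0] = rng.normal(mu, np.sqrt(v))  # start in the stationary law
    eps = rng.normal(0.0, np.sqrt(v * (1.0 - a**2)), size=n - 1)
    for i in range(1, n):
        x[i] = mu + a * (x[i - 1] - mu) + eps[i - 1]
    return x

def mom_estimates(x, dt):
    """Method-of-moments estimates of (mean, stationary variance, friction).
    The friction uses rho(dt) = exp(-theta*dt), i.e. theta = -log(rho)/dt."""
    mu_hat = x.mean()
    v_hat = x.var()
    rho = np.corrcoef(x[:-1], x[1:])[0, 1]  # lag-1 sample autocorrelation
    theta_hat = -np.log(rho) / dt
    return mu_hat, v_hat, theta_hat
```

On a long simulated path the three estimates recover the true parameters up to Monte Carlo error; such a MoM triple can then serve as the initial value for a numerical maximization of the likelihood.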


Related MOOCs (8)

Related publications (129)

Related concepts (34)

Selected Topics on Discrete Choice

Discrete choice models are used extensively in many disciplines where it is important to predict human behavior at a disaggregate level. This course is a follow-up of the online course “Introduction t ...

Neuronal Dynamics - Computational Neuroscience of Single Neurons

The activity of neurons in the brain and the code used by these neurons are described by mathematical neuron models at different levels of detail.

In this paper we propose an unbiased Monte Carlo maximum likelihood estimator for discretely observed Wright-Fisher diffusions. Our approach is based on exact simulation techniques that are of special interest for diffusion processes defined on a bounded d ...

Daniel Kuhn, Yves Rychener, Viet Anh Nguyen

The state-of-the-art methods for estimating high-dimensional covariance matrices all shrink the eigenvalues of the sample covariance matrix towards a data-insensitive shrinkage target. The underlying shrinkage transformation is either chosen heuristically ...

2024

Maximum likelihood estimation

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
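As a minimal illustration of this definition (not tied to the publication above), the normal model admits closed-form maximum likelihood estimates: the sample mean and the (biased, 1/n) sample variance. The sketch below, with names of our own choosing, computes them and exposes the log-likelihood they maximize:

```python
import numpy as np

def gauss_loglik(x, mu, sigma2):
    """Log-likelihood of an i.i.d. N(mu, sigma2) sample."""
    n = x.size
    return -0.5 * n * np.log(2 * np.pi * sigma2) - ((x - mu) ** 2).sum() / (2 * sigma2)

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

# Closed-form MLEs for the normal model: no numerical optimization needed.
mu_mle = data.mean()
s2_mle = data.var()
```

By construction, `gauss_loglik(data, mu_mle, s2_mle)` is at least as large as the log-likelihood at any other parameter pair — the defining property of the maximum likelihood estimate.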

M-estimator

In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. However, M-estimators are not inherently robust, as is clear from the fact that they include maximum likelihood estimators, which are in general not robust.
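A standard robust instance is Huber's M-estimate of location, whose psi function is the identity for small standardized residuals and clipped for large ones. The sketch below is a simplification of our own (fixing the scale at the normal-consistent MAD and solving the estimating equation by a fixed-point iteration), not a reference implementation:

```python
import numpy as np

def huber_psi(r, k=1.345):
    """Huber's psi: identity for |r| <= k, clipped to +-k beyond."""
    return np.clip(r, -k, k)

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """M-estimate of location m solving sum(psi((x_i - m)/s)) = 0,
    with the scale s fixed at the normal-consistent MAD."""
    s = 1.4826 * np.median(np.abs(x - np.median(x)))
    m = np.median(x)                        # robust starting value
    for _ in range(max_iter):
        step = s * huber_psi((x - m) / s, k).mean()
        m_new = m + step
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m
```

Unlike the sample mean (the MLE under normality), this estimate is barely moved by a single gross outlier, since the clipped psi bounds each observation's influence.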

Robust statistics

Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from a parametric distribution.
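The outlier motivation can be seen in a two-line toy example (numbers of our own): the sample mean has breakdown point 0, while the median tolerates up to half the sample being corrupted.

```python
import numpy as np

# A clean sample with one gross outlier (e.g. a recording error).
clean = np.array([9.8, 10.1, 10.0, 9.9, 10.2])
contaminated = np.append(clean, 1000.0)

print(contaminated.mean())      # ~ 175.0: dragged far from the bulk of the data
print(np.median(contaminated))  # ~ 10.05: barely moved by the outlier
```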

Giancarlo Ferrari Trecate, Florian Dörfler, Jean-Sébastien Hubert Brouillon

The increasing availability of sensing techniques provides a great opportunity for engineers to design state estimation methods, which are optimal for the system under observation and the observed noise patterns. However, these patterns often do not fulfil ...

2023