In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization (or minimization) of a certain objective function, which depends on the data. The general theory of extremum estimators was developed by Amemiya (1985).
An estimator $\hat\theta$ is called an extremum estimator if there is an objective function $\hat{Q}_n$ such that
$$\hat\theta = \underset{\theta\in\Theta}{\arg\max}\; \hat{Q}_n(\theta),$$
where Θ is the parameter space. Sometimes a slightly weaker definition is given:
$$\hat{Q}_n(\hat\theta) \;\geq\; \max_{\theta\in\Theta} \hat{Q}_n(\theta) - o_p(1),$$
where $o_p(1)$ denotes a term converging in probability to zero. With this modification $\hat\theta$ does not have to be the exact maximizer of the objective function, just sufficiently close to it.
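As an illustration, here is a minimal sketch of computing an extremum estimator numerically in Python; the exponential log-likelihood objective, the simulated data, and the names (Q_n, theta_hat) are assumptions made for the example rather than part of the general theory.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)  # simulated data, true scale theta_0 = 2

def Q_n(theta):
    # Sample objective: average exponential log-likelihood at scale theta
    return np.mean(-np.log(theta) - x / theta)

# theta_hat = argmax Q_n(theta); we minimize -Q_n over a bounded interval
res = minimize_scalar(lambda t: -Q_n(t), bounds=(1e-6, 100.0), method="bounded")
theta_hat = res.x
print(theta_hat)  # close to 2 for a sample this large

Here the objective happens to be a log-likelihood, which makes the estimator a maximum likelihood estimator; swapping in a different data-dependent objective would yield a different member of the extremum class.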
The theory of extremum estimators does not specify what the objective function should be. There are various types of objective functions suitable for different models, and this framework allows us to analyse the theoretical properties of such estimators from a unified perspective. The theory only specifies the properties that the objective function has to possess, and so selecting a particular objective function only requires verifying that those properties are satisfied.
If the parameter space Θ is compact and there is a limiting function $Q_0(\theta)$ such that $\hat{Q}_n(\theta)$ converges to $Q_0(\theta)$ in probability uniformly over Θ, and the function $Q_0(\theta)$ is continuous and has a unique maximum at $\theta = \theta_0$, then $\hat\theta$ is consistent for $\theta_0$.
The uniform convergence in probability of $\hat{Q}_n(\theta)$ means that
$$\sup_{\theta\in\Theta} \big| \hat{Q}_n(\theta) - Q_0(\theta) \big| \;\xrightarrow{p}\; 0.$$
The requirement for Θ to be compact can be replaced with the weaker assumption that the maximum of $Q_0$ is well-separated, that is, there should not exist points θ distant from $\theta_0$ at which $Q_0(\theta)$ is close to $Q_0(\theta_0)$. Formally, for any sequence $\{\theta_i\}$ such that $Q_0(\theta_i) \to Q_0(\theta_0)$, it must hold that $\theta_i \to \theta_0$.
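A small simulation can make consistency concrete. The sketch below reuses the assumed exponential model from the earlier example and re-estimates the parameter on growing samples; the estimates should concentrate around the true value theta_0 = 2.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
for n in (100, 1_000, 10_000, 100_000):
    x = rng.exponential(scale=2.0, size=n)
    neg_Q = lambda t: -np.mean(-np.log(t) - x / t)  # minus the sample objective
    theta_hat = minimize_scalar(neg_Q, bounds=(1e-6, 100.0), method="bounded").x
    print(n, round(theta_hat, 3))  # drifts toward 2 as n grows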
Assuming that consistency has been established and the derivatives of the sample objective function $\hat{Q}_n$ satisfy some regularity conditions, the extremum estimator is asymptotically normally distributed.
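For reference, under the standard regularity conditions (an interior $\theta_0$ and a twice continuously differentiable objective, among others) the limiting distribution typically takes the sandwich form
$$\sqrt{n}\,\big(\hat\theta - \theta_0\big) \;\xrightarrow{d}\; \mathcal{N}\!\big(0,\; H^{-1}\,\Sigma\, H^{-1}\big),$$
where $H$ is the probability limit of the Hessian $\nabla_{\theta\theta}\hat{Q}_n(\theta_0)$ and $\Sigma$ is the asymptotic variance of the scaled gradient $\sqrt{n}\,\nabla_\theta \hat{Q}_n(\theta_0)$.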
In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable. The method requires that a certain number of moment conditions be specified for the model.
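As a minimal sketch of the idea, assuming i.i.d. data and the two moment conditions E[x − μ] = 0 and E[(x − μ)² − σ²] = 0, the example below estimates a mean and variance by minimizing the quadratic form of the sample moments with an identity weighting matrix (the function names g_bar and objective are illustrative):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=1_000)  # true (mu, s2) = (1, 4)

def g_bar(params):
    mu, s2 = params
    # Sample averages of the two moment functions
    return np.array([np.mean(x - mu), np.mean((x - mu) ** 2 - s2)])

def objective(params):
    g = g_bar(params)
    return g @ g  # quadratic form with identity weighting matrix

res = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, s2_hat = res.x
print(mu_hat, s2_hat)  # close to (1, 4)

With as many moment conditions as parameters the model is exactly identified and the choice of weighting matrix is immaterial; with more conditions than parameters an efficiently chosen weighting matrix would matter.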
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
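A minimal sketch, assuming a normal model with unknown mean and standard deviation: the parameters are found by minimizing the negative log-likelihood, and the log-parametrization of sigma is just a convenience to keep it positive during the search.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.5, size=500)  # true (mu, sigma) = (3, 1.5)

def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # keeps sigma > 0
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # close to (3, 1.5)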
In statistical inference, the likelihood function quantifies the plausibility of parameter values characterizing a statistical model in light of observed data. Its most typical usage is to compare possible parameter values (under a fixed set of observations and a particular model), where higher values of likelihood are preferred because they correspond to more probable parameter values.
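To make the comparison concrete, here is a small sketch under an assumed normal model with known unit scale; the candidate means are arbitrary choices for illustration.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=100)

for mu in (0.0, 3.0, 5.0):
    log_lik = np.sum(norm.logpdf(data, loc=mu, scale=1.0))
    print(mu, round(log_lik, 1))  # mu = 3 attains the highest log-likelihood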
A new statistical tool performs large-scale time series analysis and outperforms other estimators of complex state-space models. The identified stochastic dependences can be used for sensor ...
Machine learning and data analysis are becoming increasingly central in sciences including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced ...
This course covers formal frameworks for causal inference. We focus on experimental designs, definitions of causal models, interpretation of causal parameters and estimation of causal effects.
We consider the problem of nonparametric estimation of the drift and diffusion coefficients of a Stochastic Differential Equation (SDE), based on n independent replicates $\{X_i(t) : t \in [0, 1]\}$ of an SDE driven by a Brownian motion $B(t)$, where $\alpha \in \{0, 1\}$ ...
We propose nonparametric estimators for the second-order central moments of possibly anisotropic spherical random fields, within a functional data analysis context. We consider a measurement framework where each random field among an identically distributed ...
We study the problem of learning unknown parameters of stochastic dynamical models from data. Often, these models are high dimensional and contain several scales and complex structures. One is then interested in obtaining a reduced, coarse-grained description ...