In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the "log-likelihood" (the logarithm of the likelihood function). It is a sample-based version of the Fisher information.
Suppose we observe random variables X_1, …, X_n, independent and identically distributed with density f(X; θ), where θ is a (possibly unknown) vector. Then the log-likelihood of the parameters θ given the data X_1, …, X_n is

$$\ell(\theta \mid X_1,\ldots,X_n) = \sum_{i=1}^n \log f(X_i \mid \theta).$$
We define the observed information matrix at $\theta^{*}$ as

$$\mathcal{J}(\theta^{*}) = -\left.\frac{\partial^{2}}{\partial\theta\,\partial\theta^{\mathsf T}}\,\ell(\theta)\right|_{\theta=\theta^{*}},$$

that is, the negative Hessian of the log-likelihood evaluated at $\theta^{*}$.
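As a minimal sketch (not from the article), the observed information can be computed numerically as the negative second derivative of the log-likelihood. The Bernoulli model, the sample, and the step size below are illustrative assumptions; for a Bernoulli(p) sample the analytic value at p is sum(x)/p² + sum(1 − x)/(1 − p)².

```python
import numpy as np

def log_likelihood(p, x):
    # Bernoulli log-likelihood: sum_i [x_i*log(p) + (1 - x_i)*log(1 - p)]
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def observed_information(p, x, h=1e-5):
    # Negative second derivative of the log-likelihood,
    # approximated by a central finite difference.
    ll = lambda q: log_likelihood(q, x)
    return -(ll(p + h) - 2 * ll(p) + ll(p - h)) / h**2

x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # illustrative data: 7 successes in 10 trials
p_hat = x.mean()                               # MLE of p is the sample mean, 0.7
J = observed_information(p_hat, x)
print(J)  # close to the analytic value n / (p_hat * (1 - p_hat))
```

The finite-difference approximation agrees with the analytic observed information 10/(0.7 × 0.3) ≈ 47.62 to several digits.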
Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction. The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted.
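To illustrate this use (an assumed example, not taken from the article): inverting the observed information at the MLE gives an asymptotic variance estimate, from which a Wald-type confidence interval follows. The Bernoulli data and the 1.96 normal quantile are illustrative choices.

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # illustrative Bernoulli data
n, p_hat = len(x), x.mean()                    # MLE of p

# Observed information at the MLE; for Bernoulli data this is
# n / (p_hat * (1 - p_hat)).
J_hat = n / (p_hat * (1 - p_hat))

se = np.sqrt(1.0 / J_hat)                      # asymptotic standard error
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)    # approximate 95% Wald interval
print(se, ci)
```

Here the inverse observed information plays the role of the estimated asymptotic covariance (a scalar in this one-parameter sketch).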
Andrew Gelman, David Dunson and Donald Rubin define the observed information instead in terms of the parameters' posterior probability p(θ | y):

$$I(\theta) = -\frac{d^{2}}{d\theta^{2}}\log p(\theta \mid y).$$
The Fisher information $\mathcal{I}(\theta)$ is the expected value of the observed information given a single observation $X$ distributed according to the hypothetical model with parameter $\theta$:

$$\mathcal{I}(\theta) = \mathrm{E}\bigl(\mathcal{J}(\theta)\bigr).$$
The comparison between the observed information and the expected information remains an active area of research and debate. Efron and Hinkley provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families, in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information when an asymptotically negligible error term is ignored. In Lindsay and Li's case, the expected information matrix still requires evaluation at the obtained ML estimates, introducing randomness.
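The distinction can be seen numerically in a model where the two quantities differ sample by sample. The sketch below (an assumed example, not from the article) uses a Cauchy location model with unit scale, for which the expected information per observation is 1/2; the observed information fluctuates across samples but averages toward the expected information n/2.

```python
import numpy as np

def observed_info_cauchy(theta, x):
    # J(theta) = -l''(theta) for l(theta) = sum_i -log(pi) - log(1 + (x_i - theta)^2)
    u = x - theta
    return np.sum(2 * (1 - u**2) / (1 + u**2) ** 2)

rng = np.random.default_rng(0)
n, theta = 50, 0.0
# Observed information at the true theta for many simulated samples.
J_vals = [observed_info_cauchy(theta, theta + rng.standard_cauchy(n))
          for _ in range(2000)]
print(np.mean(J_vals))  # averages toward the expected information n/2 = 25
```

Individual values of `J_vals` vary considerably, which is precisely the sample-to-sample variation that the Efron–Hinkley argument says carries information about the precision of a particular estimate.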