This lecture delves into the Stein Phenomenon, in which the James-Stein Estimator dominates the Maximum Likelihood Estimator (MLE) under quadratic loss when estimating a Gaussian mean in three or more dimensions. It challenges the belief in the MLE's universal optimality and showcases the benefits of introducing bias in high-dimensional statistics. The lecture also covers Hodges' Superefficient Estimator, which attains lower asymptotic variance than the MLE at particular parameter values. The discussion extends to asymptotic optimality, asymptotically Gaussian estimators, and the Cramér-Rao bound. Through examples and proofs, the lecture explores the intricacies of estimation theory, emphasizing the importance of regular sequences of estimators and the implications of superefficiency.
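For a concrete illustration of the Stein Phenomenon, the following is a minimal Monte Carlo sketch (not part of the lecture materials): it compares the average quadratic loss of the MLE and the James-Stein Estimator for a Gaussian mean in an assumed dimension of 10; the true mean, dimension, and number of trials are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_trials = 10, 10_000            # illustrative dimension (d >= 3) and number of trials
theta = rng.normal(size=d)          # an arbitrary true mean vector

# Observe X ~ N(theta, I_d); the MLE of theta from a single observation is X itself.
X = theta + rng.normal(size=(n_trials, d))

# James-Stein estimator: shrink X toward the origin by a data-dependent factor.
norms_sq = np.sum(X**2, axis=1, keepdims=True)
js = (1 - (d - 2) / norms_sq) * X

# Monte Carlo estimates of the quadratic risk E||estimate - theta||^2.
risk_mle = np.mean(np.sum((X - theta) ** 2, axis=1))
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(f"MLE risk ~ {risk_mle:.3f}, James-Stein risk ~ {risk_js:.3f}")
```

Running this sketch, the James-Stein risk comes out strictly below the MLE risk (which is close to d), consistent with the domination result discussed in the lecture.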