Lecture

# Statistical Theory: Maximum Likelihood Estimation

Description

This lecture examines the consistency of the maximum likelihood estimator (MLE) and its asymptotic properties. It relates maximum likelihood estimation to the Kullback-Leibler divergence and highlights why proving the MLE's consistency is delicate, using deterministic examples to illustrate the complexities of the MLE's behavior. It also covers the construction of asymptotically MLE-like estimators and the Newton-Raphson algorithm. The lecture concludes with a discussion of misspecified models and likelihood, emphasizing model approximation and the behavior of estimators in that setting.
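
As a concrete illustration of the Newton-Raphson construction, here is a minimal Python sketch (my example, not the lecture's): in the Cauchy location model, where the MLE has no closed form, a single Newton-Raphson step on the score equation, started from a root-n-consistent estimator such as the sample median, yields a one-step estimator that is asymptotically equivalent to the MLE.

```python
# Minimal sketch (illustrative assumptions, not the lecture's material):
# one-step Newton-Raphson estimator for the location of a Cauchy(theta, 1).
import numpy as np

def score(theta, x):
    """Score function (derivative of the log-likelihood in theta)."""
    z = x - theta
    return np.sum(2.0 * z / (1.0 + z**2))

def score_derivative(theta, x):
    """Derivative of the score, used as the Newton-Raphson denominator."""
    z = x - theta
    return np.sum(2.0 * (z**2 - 1.0) / (1.0 + z**2) ** 2)

def one_step_estimator(x):
    """One Newton-Raphson step from a root-n-consistent start (the median)."""
    theta0 = np.median(x)
    return theta0 - score(theta0, x) / score_derivative(theta0, x)

rng = np.random.default_rng(0)
x = rng.standard_cauchy(500) + 3.0  # true location parameter = 3
print(one_step_estimator(x))
```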

Related concepts (44)

Statistical theory

The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics. The theory covers approaches to statistical decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find the best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.

Statistical inference

Statistical inference is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population.

Statistical hypothesis testing

A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters. While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth.

Statistical model

A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables.

Maximum likelihood estimation

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
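
In practice the maximization is usually carried out numerically. The following is a minimal sketch (the normal model and the simulated data are illustrative assumptions, not part of this page) that fits a distribution by minimizing the negative log-likelihood with SciPy:

```python
# Minimal sketch: numerical MLE for a normal model via SciPy.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

def neg_log_likelihood(params):
    mu, log_sigma = params          # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)  # close to the closed-form MLE: sample mean and sd
```

Optimizing log sigma rather than sigma keeps the scale parameter positive without imposing an explicit constraint on the optimizer.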

Related lectures (307)

Likelihood Ratio Tests: Optimality and Extensions (MATH-442: Statistical theory)

Covers Likelihood Ratio Tests, their optimality, and extensions in hypothesis testing, including Wilks' Theorem and the relationship with Confidence Intervals.
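
As a hedged sketch of how Wilks' theorem is applied (the exponential model and null hypothesis below are my own choices, not the lecture's): twice the log-likelihood ratio between the unrestricted and restricted fits is compared to a chi-squared distribution whose degrees of freedom equal the number of restricted parameters.

```python
# Minimal sketch: likelihood ratio test via Wilks' theorem.
# Model (assumed for illustration): Exponential(rate), testing H0: rate = 1.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
x = rng.exponential(scale=1 / 1.3, size=200)  # true rate = 1.3

def exp_log_lik(rate, x):
    """Log-likelihood of an Exponential(rate) sample."""
    return len(x) * np.log(rate) - rate * np.sum(x)

rate_mle = 1.0 / np.mean(x)                     # unrestricted MLE
lr_stat = 2 * (exp_log_lik(rate_mle, x) - exp_log_lik(1.0, x))
p_value = chi2.sf(lr_stat, df=1)                # Wilks: chi-squared, 1 df
print(lr_stat, p_value)
```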

The Stein Phenomenon and Superefficiency (MATH-442: Statistical theory)

Explores the Stein Phenomenon, showcasing the benefits of bias in high-dimensional statistics and the superiority of the James-Stein Estimator over the Maximum Likelihood Estimator.
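
A small Monte Carlo sketch (dimension and true mean are arbitrary assumptions of mine) can make the phenomenon visible: when estimating a p-dimensional normal mean with p >= 3 from a single observation, the James-Stein shrinkage estimator has strictly smaller total squared-error risk than the MLE.

```python
# Minimal sketch: Monte Carlo comparison of MLE vs. James-Stein estimator
# for a p-dimensional normal mean with identity covariance.
import numpy as np

rng = np.random.default_rng(3)
p, n_reps = 10, 20000
theta = np.ones(p)  # true mean vector (arbitrary choice)

x = theta + rng.standard_normal((n_reps, p))   # one observation per replicate
norm_sq = np.sum(x**2, axis=1, keepdims=True)
js = (1 - (p - 2) / norm_sq) * x               # James-Stein shrinkage toward 0

risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))   # MLE is x itself
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(risk_mle, risk_js)  # James-Stein risk comes out strictly smaller
```

The positive-part variant, which clips the shrinkage factor at zero, improves the estimator further; the plain version above suffices to exhibit the risk gap.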

Statistical Theory: Inference and Optimality (MATH-442: Statistical theory)

Explores constructing confidence regions, inverting hypothesis tests, and the pivotal method, emphasizing the importance of likelihood methods in statistical inference.

Probability and Statistics (MATH-232: Probability and statistics)

Covers p-quantile, normal approximation, joint distributions, and exponential families in probability and statistics.

Bias and Variance in Estimation (MATH-232: Probability and statistics)

Discusses bias and variance in statistical estimation, exploring the trade-off between accuracy and variability.
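
A brief sketch of that trade-off (the normal model and sample size are my choices): since MSE = bias^2 + variance, the variance estimator with divisor n is biased yet can have smaller mean squared error than the unbiased divisor-(n-1) estimator.

```python
# Minimal sketch: bias-variance trade-off between the two sample-variance
# estimators, estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(4)
n, n_reps, true_var = 10, 100000, 4.0
samples = rng.normal(0.0, np.sqrt(true_var), size=(n_reps, n))

s2_unbiased = samples.var(axis=1, ddof=1)  # divisor n-1, unbiased
s2_mle = samples.var(axis=1, ddof=0)       # divisor n, the MLE (biased)

for name, est in [("unbiased", s2_unbiased), ("MLE", s2_mle)]:
    bias = est.mean() - true_var
    var = est.var()
    print(name, "bias:", bias, "variance:", var, "MSE:", bias**2 + var)
```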