
# Bayes estimator

## Summary

In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss). Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation.
Suppose an unknown parameter $\theta$ is known to have a prior distribution $\pi$. Let $\widehat{\theta} = \widehat{\theta}(x)$ be an estimator of $\theta$ (based on some measurements $x$), and let $L(\theta, \widehat{\theta})$ be a loss function, such as squared error. The Bayes risk of $\widehat{\theta}$ is defined as $E_\pi\bigl(L(\theta, \widehat{\theta})\bigr)$, where the expectation is taken over the probability distribution of $\theta$: this defines the risk function as a function of $\widehat{\theta}$. An estimator $\widehat{\theta}$ is said to be a Bayes estimator if it minimizes the Bayes risk among all estimators. Equivalently, the estimator which minimizes the posterior expected loss $E\bigl(L(\theta, \widehat{\theta}) \mid x\bigr)$ for each $x$ also minimizes the Bayes risk and therefore is a Bayes estimator.
If the prior is improper, then an estimator which minimizes the posterior expected loss for each $x$ is called a generalized Bayes estimator.
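The definition above can be checked numerically. The following is a minimal sketch with a made-up discretized posterior (the grid, the toy posterior, and the helper name are illustration choices, not from the text): minimizing the posterior expected loss over a grid of candidate estimates recovers the posterior mean under squared error, and the posterior median under absolute error.

```python
import numpy as np

# Toy discretized posterior over a grid of parameter values (an assumption
# for illustration: a Gaussian bump at 0.3 with spread 0.1).
theta = np.linspace(0.0, 1.0, 1001)
post = np.exp(-0.5 * ((theta - 0.3) / 0.1) ** 2)
post /= post.sum()  # normalize to a probability vector

def posterior_expected_loss(a, loss):
    """Posterior expected loss of taking action (estimate) a."""
    return np.sum(post * loss(theta, a))

sq_loss = lambda t, a: (t - a) ** 2
abs_loss = lambda t, a: np.abs(t - a)

# Brute-force minimization over the same grid of candidate actions.
best_sq = theta[np.argmin([posterior_expected_loss(a, sq_loss) for a in theta])]
best_abs = theta[np.argmin([posterior_expected_loss(a, abs_loss) for a in theta])]

post_mean = np.sum(post * theta)
print(best_sq, post_mean)  # squared loss: minimizer matches the posterior mean
print(best_abs)            # absolute loss: minimizer is the posterior median
```

Swapping the loss function changes which summary of the posterior the Bayes estimator picks out, which is the point of the general definition.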
## Minimum mean square error
The most common risk function used for Bayesian estimation is the mean square error (MSE), also called squared error risk. The MSE is defined by
$$\mathrm{MSE} = E\left[\bigl(\widehat{\theta}(x) - \theta\bigr)^2\right],$$
where the expectation is taken over the joint distribution of $\theta$ and $x$.
Using the MSE as risk, the Bayes estimate of the unknown parameter is simply the mean of the posterior distribution,
$$\widehat{\theta}(x) = E[\theta \mid x].$$
This is known as the minimum mean square error (MMSE) estimator.
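As a concrete instance, consider a toy normal-normal model (the prior and noise parameters below are assumptions for illustration): with prior $\theta \sim N(\mu_0, \tau^2)$ and observation $x \mid \theta \sim N(\theta, \sigma^2)$, the posterior mean has a closed form, and a Monte Carlo check confirms its average squared error beats the estimate that ignores the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: theta ~ N(mu0, tau^2), x | theta ~ N(theta, sigma^2).
mu0, tau, sigma = 0.0, 1.0, 1.0

theta = rng.normal(mu0, tau, size=100_000)  # draw parameters from the prior
x = rng.normal(theta, sigma)                # draw one noisy observation each

# Posterior mean (the MMSE estimator) in this model is a precision-weighted
# average of the observation and the prior mean.
w = tau**2 / (tau**2 + sigma**2)
mmse = w * x + (1 - w) * mu0

mle = x  # the maximum-likelihood estimate ignores the prior

print(np.mean((mmse - theta) ** 2))  # empirical Bayes risk of the MMSE estimator
print(np.mean((mle - theta) ** 2))   # larger: the MLE does not use the prior
```

With $\tau = \sigma = 1$ the theoretical Bayes risk of the posterior mean is $\tau^2\sigma^2/(\tau^2+\sigma^2) = 0.5$, versus $1.0$ for the raw observation, and the simulation reproduces this gap.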
## Conjugate prior
If there is no inherent reason to prefer one prior probability distribution over another, a conjugate prior is sometimes chosen for simplicity. A conjugate prior is defined as a prior distribution belonging to some parametric family, for which the resulting posterior distribution also belongs to the same family. This is an important property, since the Bayes estimator, as well as its statistical properties (variance, confidence interval, etc.), can all be derived from the posterior distribution.
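The standard example of conjugacy is the Beta-Binomial pair: a $\mathrm{Beta}(a, b)$ prior on a success probability, updated with $k$ successes in $n$ Bernoulli trials, yields a $\mathrm{Beta}(a + k,\, b + n - k)$ posterior, so the Bayes estimate under squared error is that posterior's mean. A small sketch (the helper name is ours):

```python
# Beta-Binomial conjugacy: prior Beta(a, b), data k successes in n trials,
# posterior Beta(a + k, b + n - k). The Bayes estimate under squared-error
# loss is the posterior mean (a + k) / (a + b + n).
def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

# e.g. uniform prior Beta(1, 1) and 7 successes in 10 trials:
print(posterior_mean(1, 1, 7, 10))  # (1 + 7) / (1 + 1 + 10) = 8/12
```

Because the posterior stays in the Beta family, every update is just two additions, which is exactly the computational convenience the conjugacy property buys.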



## Related courses

**MATH-342: Time series**
A first course in statistical time series analysis and applications.

**MATH-232: Probability and statistics**
A basic course in probability and statistics.


## Related concepts

**Minimum mean square error (MMSE) estimator**
In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE), a common measure of estimator quality, of the fitted values of a dependent variable. In the Bayesian setting, the term MMSE more specifically refers to estimation with a quadratic loss function. In that case, the MMSE estimator is given by the posterior mean of the parameter to be estimated.

**Maximum a posteriori (MAP) estimation**
In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior distribution (quantifying the additional information available through prior knowledge of a related event) over the quantity one wants to estimate.
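To make the ML/MAP contrast concrete, here is a sketch for a Bernoulli success probability (the function names and the Beta(2, 2) prior are illustration choices): the ML estimate is the raw frequency $k/n$, while the MAP estimate is the mode of the $\mathrm{Beta}(a + k,\, b + n - k)$ posterior, which the prior pulls away from extreme values.

```python
# ML vs. MAP for a Bernoulli success probability (illustrative sketch).
def ml_estimate(k, n):
    # likelihood of k successes in n trials is maximized at the frequency k/n
    return k / n

def map_estimate(k, n, a, b):
    # mode of the Beta(a + k, b + n - k) posterior,
    # valid when a + k > 1 and b + n - k > 1
    return (a + k - 1) / (a + b + n - 2)

k, n = 10, 10                    # every trial succeeded
print(ml_estimate(k, n))         # 1.0: extreme, ignores prior knowledge
print(map_estimate(k, n, 2, 2))  # 11/12: pulled toward 1/2 by the Beta(2, 2) prior
```

The prior acts as a regularizer on the optimization objective, which is the "augmented objective" mentioned above.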

**Poisson distribution**
In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event. It is named after French mathematician Siméon Denis Poisson (/ˈpwɑːsɒn/; French [pwasɔ̃]). The Poisson distribution can also be used for the number of events in other specified interval types such as distance, area, or volume.
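The Poisson distribution connects back to Bayes estimation through another standard conjugate pair: a $\mathrm{Gamma}(\alpha, \beta)$ prior on the Poisson rate, updated with $n$ observed counts, gives a $\mathrm{Gamma}(\alpha + \sum_i x_i,\, \beta + n)$ posterior, whose mean is the Bayes estimate of the rate under squared error. A sketch (the helper name and default prior are ours):

```python
# Gamma-Poisson conjugacy: prior Gamma(alpha, beta) on the Poisson rate,
# posterior Gamma(alpha + sum(counts), beta + n) after n observed counts.
# The Bayes estimate under squared-error loss is the posterior mean.
def gamma_poisson_rate(counts, alpha=1.0, beta=1.0):
    n = len(counts)
    return (alpha + sum(counts)) / (beta + n)

print(gamma_poisson_rate([3, 5, 4, 6, 2]))  # (1 + 20) / (1 + 5) = 3.5
```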

## Related lectures

**Monte Carlo: Markov Chains**
Covers unsupervised learning, dimensionality reduction, SVD, low-rank estimation, PCA, and Markov chain Monte Carlo.

**Dynamic Model Simulation: EPFL Racing Team**
Introduces a project to build a dynamic model for simulating racing car performance.

**Inference Problems & Spin Glass Game**
Covers inference problems related to the Spin Glass Game and the challenges of making mistakes with preb P.

## Related publications

**Julien René Pierre Fageot, Sadegh Farhadkhani, Oscar Jean Olivier Villemaud, Le Nguyen Hoang**

Many applications, e.g. in content recommendation, sports, or recruitment, leverage the comparisons of alternatives to score those alternatives. The classical Bradley-Terry model and its variants have been widely used to do so. The historical model conside ...

**Marco Picasso, Paride Passelli**

The $p$-Laplacian problem $-\nabla \cdot \bigl((\mu + |\nabla u|^{p-2})\nabla u\bigr) = f$ is considered, where $\mu$ is a given positive number. An anisotropic a posteriori residual-based error estimator is presented. The error estimator is shown to be equivalent, up to higher ord ...

In inverse problems, the task is to reconstruct an unknown signal from its possibly noise-corrupted measurements. Penalized-likelihood-based estimation and Bayesian estimation are two powerful statistical paradigms for the resolution of such problems. They ...