
# Conditional expectation

Summary

In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value – the value it would take "on average" over an arbitrarily large number of occurrences – given that a certain set of "conditions" is known to occur. If the random variable can take on only a finite number of values, the "conditions" are that the variable can only take on a subset of those values. More formally, in the case when the random variable is defined over a discrete probability space, the "conditions" are a partition of this probability space.
Depending on the context, the conditional expectation can be either a random variable or a function. The random variable is denoted $E(X \mid Y)$, analogously to conditional probability. The function form is either denoted $E(X \mid Y = y)$, or a separate function symbol such as $f(y)$ is introduced with the meaning $E(X \mid Y) = f(Y)$.
Examples
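A minimal discrete illustration, sketched in Python with only the standard library (the two-dice setup and function names are ours, chosen for illustration): take two fair dice, let X be their total and Y the first die. Conditioning on Y = y averages X over the outcomes consistent with that value, giving $E(X \mid Y = y) = y + 7/2$, and averaging these values back over Y recovers $E(X) = 7$, an instance of the tower property $E(E(X \mid Y)) = E(X)$.

```python
from itertools import product
from fractions import Fraction

# Two fair dice: X = total, Y = first die. The sample space is uniform,
# so E[X | Y = y] is the plain average of X over outcomes with Y = y.
outcomes = list(product(range(1, 7), repeat=2))

def cond_exp_X_given_Y(y):
    consistent = [(a, b) for (a, b) in outcomes if a == y]
    return Fraction(sum(a + b for a, b in consistent), len(consistent))

# E[X | Y = y] = y + 7/2 for each value y of the first die.
for y in range(1, 7):
    assert cond_exp_X_given_Y(y) == Fraction(2 * y + 7, 2)

# Tower property: averaging E[X | Y] over Y recovers E[X] = 7.
ex = Fraction(sum(a + b for a, b in outcomes), len(outcomes))
assert sum(cond_exp_X_given_Y(y) for y in range(1, 7)) / 6 == ex == 7
```

Here the "conditions" are exactly the partition of the 36-outcome space into the six events {Y = y}, matching the discrete case described in the summary.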



Related concepts (23)

Linear regression

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables).

Ordinary least squares

In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model (with fixed level-one effects of a linear function of a set of explanatory variables).

Regression analysis

In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable) and one or more independent variables.
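These concepts connect to conditional expectation directly: $E(Y \mid X)$ is the minimum-mean-squared-error predictor of Y from X, and OLS fits the best *linear* approximation to it. A small numerical sketch (Python with NumPy assumed available; the quadratic model and seed are ours, for illustration only): when the true conditional mean is nonlinear, the linear fit can miss it entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Y = X^2 + noise: the true conditional mean E[Y | X = x] = x^2 is nonlinear.
x = rng.uniform(-1.0, 1.0, size=10_000)
y = x**2 + rng.normal(scale=0.1, size=x.size)

# OLS picks the best linear approximation a + b*x to E[Y | X]
# in the mean-squared-error sense.
A = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# With X symmetric about 0, Cov(X, X^2) = 0, so the best linear fit
# degenerates to the constant E[X^2] = 1/3: the slope carries nothing.
assert abs(a - 1 / 3) < 0.05 and abs(b) < 0.05
```

The intercept lands near 1/3 and the slope near 0: the best linear predictor equals the conditional expectation only when $E(Y \mid X)$ happens to be linear in X, which is the modelling assumption behind linear regression.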

Related courses (34)

COM-417: Advanced probability and applications

In this course, various aspects of probability theory are considered. The first part is devoted to the main theorems in the field (law of large numbers, central limit theorem, concentration inequalities), while the second part focuses on the theory of martingales in discrete time.

COM-406: Foundations of Data Science

We discuss a set of topics that are important for understanding modern data science but that are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and techniques from probability, information theory, and signal processing.

MATH-233: Probability and statistics

The course gives an introduction to probability and statistics for physicists.

Related publications (9)

Sundar Subramaniam Ganesh, Sebastian Krumscheid, Fabio Nobile

In this work, we consider the problem of estimating the probability distribution, the quantile, or the conditional expectation above the quantile (the so-called conditional value-at-risk) of output quantities of complex random differential models by the multilevel Monte Carlo (MLMC) method. We follow the approach of (reference), which recasts the estimation of the above quantities as the computation of suitable parametric expectations. We present novel computable error estimators for such quantities, which are then used to optimally tune the MLMC hierarchy in a continuation-type adaptive algorithm. We demonstrate the efficiency and robustness of our adaptive continuation-MLMC algorithm in an array of numerical test cases.

Ahmet Arda Atalik, Michael Christoph Gastpar, Alper Köse (2022)

This paper considers an additive Gaussian noise channel with arbitrarily distributed finite variance input signals. It studies the differential entropy of the minimum mean-square error (MMSE) estimator and provides a new lower bound which connects the differential entropy of the input, output, and conditional mean. That is, the sum of differential entropies of the conditional mean and output is always greater than or equal to twice the input differential entropy. Various other properties such as upper bounds, asymptotics, Taylor series expansion, and connection to Fisher Information are obtained. An application of the lower bound in the remote-source coding problem is discussed, and extensions of the lower and upper bounds to the vector Gaussian channel are given.

Paulo Vitor Da Costa Pereira, Anthony Christopher Davison, Isolde Santos Previdelli

Accurately quantifying extreme rainfall is important for the design of hydraulic structures, for flood mapping and zoning and for disaster management. In order to produce maps of estimates of 25-year rainfall return levels in Brazil, we selected 893 shorter and 104 longer rainfall time series from the Agencia Nacional de Aguas (ANA), and applied the framework of extreme value theory. Care was needed to reduce the impact of poor data. Estimates of the shape parameter of the extreme-value model fitted to rainfall data are typically biased, so we discuss an empirical correction that takes into account not only the sample-size bias, but also a so-called penultimate approximation that we use to inform a Bayesian spatial latent variable model for the annual rainfall maxima. This model accounts for subtle patterns of spatial variation in the data and provides plausible return level estimates.