# Expected value

Summary

In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable.
The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration.
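Spelled out in standard notation, the three definitions above read as follows (this is a restatement, not additional material):

```latex
% Finite case: weighted average over the outcomes x_1, ..., x_k
\mathbb{E}[X] = \sum_{i=1}^{k} x_i \, p_i, \qquad p_i = \mathbb{P}(X = x_i)

% Continuum of outcomes, with probability density f
\mathbb{E}[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx

% Measure-theoretic definition, as a Lebesgue integral over the sample space
\mathbb{E}[X] = \int_{\Omega} X \, d\mathbb{P}
```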
The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized in blackboard bold as \mathbb{E}.
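A minimal sketch of the finite-outcome definition, and of the informal reading as the mean of many independent draws, using a fair six-sided die as the example (the die and sample size are illustrative choices, not from the source):

```python
from fractions import Fraction
import random

# E[X] as a weighted average: for a fair six-sided die,
# E[X] = sum of x * P(X = x) over all outcomes x.
outcomes = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)                       # each face is equally likely
expected = sum(x * p for x in outcomes)  # exact arithmetic via Fraction
print(expected)  # 7/2, i.e. 3.5

# Informal reading: the arithmetic mean of many independently
# selected outcomes approaches E[X] (law of large numbers).
random.seed(0)
n = 100_000
empirical = sum(random.choice(outcomes) for _ in range(n)) / n
print(empirical)  # close to 3.5
```

Using `Fraction` keeps the weighted average exact; with floats the result would only match 3.5 up to rounding.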
History
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished.


Related concepts (122)

Probability distribution

In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events.

Statistics

Statistics (from German: Statistik, "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data.

Probability theory

Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms.

Related courses (102)

CS-456: Artificial neural networks/reinforcement learning

Since 2010, approaches in deep learning have revolutionized fields as diverse as computer vision, machine learning, and artificial intelligence. This course gives a systematic introduction to influential models of deep artificial neural networks, with a focus on reinforcement learning.

MATH-233: Probability and statistics

The course gives an introduction to probability and statistics for physicists.

COM-417: Advanced probability and applications

In this course, various aspects of probability theory are considered. The first part is devoted to the main theorems of the field (law of large numbers, central limit theorem, concentration inequalities), while the second part focuses on the theory of martingales in discrete time.

Related publications (29)

We are interested in the study of non-correlation of Fourier coefficients of Maass forms against a wide class of real analytic functions. In particular, the class of functions we are interested in should be thought of as some archimedean analogs of Frobenius trace functions. In the first part of the thesis, we give an axiomatic definition for this class and prove that these functions satisfy properties similar to those of Frobenius trace functions. In particular, we prove non-correlation statements analogous to those given by Fouvry, Kowalski, and Michel for algebraic trace functions. In the second part of the thesis, we establish the existence of large values of Hecke-Maass L-functions with prescribed argument. In studying these problems, one encounters sums of Fourier coefficients of Maass forms against real oscillatory functions. In some cases, one can prove that these functions satisfy the axioms discussed previously.

We investigate the existence of large values of L-functions attached to Maass forms on the critical line with prescribed argument. The results obtained rely on the resonance method developed by Soundararajan and furthered by Hough.

Nathan Ramusat, Vincenzo Savona

Simulating the dynamics and the non-equilibrium steady state of an open quantum system are hard computational tasks on conventional computers. For the simulation of the time evolution, several efficient quantum algorithms have recently been developed. However, computing the non-equilibrium steady state as the long-time limit of the system dynamics is often not a viable solution, because of exceedingly long transient features or strong quantum correlations in the dynamics. Here, we develop an efficient quantum algorithm for the direct estimation of averaged expectation values of observables on the non-equilibrium steady state, thus bypassing the time integration of the master equation. The algorithm encodes the vectorized representation of the density matrix on a quantum register, and makes use of quantum phase estimation to approximate the eigenvector associated with the zero eigenvalue of the generator of the system dynamics. We show that the output state of the algorithm allows the estimation of expectation values of observables on the steady state. Away from critical points, where the Liouvillian gap scales as a power law of the system size, the quantum algorithm performs with exponential advantage compared to exact diagonalization.

Related lectures (186)