Summary
The proposition in probability theory known as the law of total expectation, the law of iterated expectations (LIE), Adam's law, the tower rule, and the smoothing theorem, among other names, states that if X is a random variable whose expected value E(X) is defined, and Y is any random variable on the same probability space, then

E(E(X | Y)) = E(X),

i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X. One special case states that if {A_i} is a finite or countable partition of the sample space, then

E(X) = Σ_i E(X | A_i) P(A_i).

Note: The conditional expected value E(X | Y), with Y a random variable, is not a simple number; it is a random variable whose value depends on the value of Y. That is, the conditional expected value of X given the event Y = y is a number, and it is a function of y. If we write g(y) for the value of E(X | Y = y), then the random variable E(X | Y) is g(Y).

Suppose that only two factories supply light bulbs to the market. Factory X's bulbs work for an average of 5000 hours, whereas factory Y's bulbs work for an average of 4000 hours. It is known that factory X supplies 60% of the total bulbs available. What is the expected length of time that a purchased bulb will work for? Applying the law of total expectation, we have:

E(L) = E(L | X) P(X) + E(L | Y) P(Y) = 5000 · 0.6 + 4000 · 0.4 = 4600,

where E(L) is the expected life of the bulb; P(X) = 0.6 is the probability that the purchased bulb was manufactured by factory X; P(Y) = 0.4 is the probability that the purchased bulb was manufactured by factory Y; E(L | X) = 5000 is the expected lifetime of a bulb manufactured by X; E(L | Y) = 4000 is the expected lifetime of a bulb manufactured by Y. Thus each purchased light bulb has an expected lifetime of 4600 hours.

When a joint probability density function is well defined and the expectations are integrable, we write for the general case

E(E(X | Y)) = ∫ E(X | Y = y) f_Y(y) dy = ∫ (∫ x f_{X|Y}(x | y) dx) f_Y(y) dy = ∫∫ x f_{X,Y}(x, y) dx dy = E(X).

A similar derivation works for discrete distributions using summation instead of integration.
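The light-bulb calculation can be checked numerically with a short sketch. The factory labels "X"/"Y" and the exponential lifetime model used in the simulation are assumptions made only for illustration; the law itself requires no distributional assumption.

```python
import random

# Market shares and mean lifetimes from the light-bulb example.
p = {"X": 0.6, "Y": 0.4}                 # P(bulb comes from each factory)
mean_life = {"X": 5000.0, "Y": 4000.0}   # E(lifetime | factory)

# Law of total expectation: E(L) = sum over factories of E(L | factory) * P(factory)
exact = sum(mean_life[f] * p[f] for f in p)
print(exact)  # 4600.0

# Monte Carlo check: sample a factory, then a lifetime (assumed
# exponential with that factory's mean), and average the lifetimes.
random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    f = "X" if random.random() < p["X"] else "Y"
    total += random.expovariate(1.0 / mean_life[f])
estimate = total / n   # should be close to 4600
```

With 200,000 samples the Monte Carlo average typically lands within a few tens of hours of the exact value 4600.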
For the specific case of a partition, give each cell of the partition a unique label and let the random variable Y be the function of the sample space that assigns a cell's label to each point in that cell.
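As a minimal sketch of this labeling construction (the sample space, partition cells, and labels below are invented for illustration), one can verify that the partition form of the law agrees with the plain expectation:

```python
from fractions import Fraction

# Uniform probability space on {0, 1, ..., 5}; X is the identity map.
omega = range(6)
prob = {w: Fraction(1, 6) for w in omega}
X = {w: w for w in omega}

# A partition of the sample space into three labeled cells.
partition = {"low": {0, 1}, "mid": {2, 3}, "high": {4, 5}}

# Y assigns to each sample point the label of the cell containing it.
Y = {w: label for label, cell in partition.items() for w in cell}

def cond_exp(label):
    # E(X | A) for the cell A with this label: probability-weighted
    # average of X over the cell, normalized by P(A).
    cell = partition[label]
    p_cell = sum(prob[w] for w in cell)
    return sum(X[w] * prob[w] for w in cell) / p_cell

# Law of total expectation, partition form:
# E(X) = sum over cells A_i of E(X | A_i) P(A_i)
lhs = sum(X[w] * prob[w] for w in omega)
rhs = sum(cond_exp(label) * sum(prob[w] for w in partition[label])
          for label in partition)
assert lhs == rhs
print(lhs)  # 5/2
```

Here g(y) = cond_exp(y), so the random variable E(X | Y) is the composition w ↦ cond_exp(Y[w]), exactly as described above.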
About this result
Related courses (12)
COM-417: Advanced probability and applications
In this course, various aspects of probability theory are considered. The first part is devoted to the main theorems in the field (law of large numbers, central limit theorem, concentration inequalities).
FIN-403: Econometrics
The course covers basic econometric models and methods that are routinely applied to obtain inference results in economic and financial applications.
MATH-342: Time series
A first course in statistical time series analysis and applications.
Related lectures (65)
Martingale Theory: Basics and Applications
Covers the basics of Martingale theory and its applications in random variables.
Basic Properties of Conditional Expectation
Covers basic properties of conditional expectation and Jensen's inequality in probability theory.
Conditional Expectation: Properties & Jensen's Inequality
Covers the properties of conditional expectation and Jensen's inequality in probability theory.
Related concepts (4)
Law of total cumulance
In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger. It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable.
Law of total variance
In probability theory, the law of total variance, variance decomposition formula, conditional variance formula, or law of iterated variances, also known as Eve's law, states that if X and Y are random variables on the same probability space, and the variance of X is finite, then

Var(X) = E(Var(X | Y)) + Var(E(X | Y)).

In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance, respectively (cf. fraction of variance unexplained, explained variation).
Conditional expectation
In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value – the value it would take "on average" over an arbitrarily large number of occurrences – given that a certain set of "conditions" is known to occur. If the random variable can take on only a finite number of values, the "conditions" are that the variable can only take on a subset of those values.