Concept

Distribution of the product of two random variables

A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables X and Y, the distribution of the random variable Z = XY formed as their product is a product distribution. The product distribution is the PDF of the product of sample values. This is not the same as the product of their PDFs, yet the two concepts are often ambiguously referred to as the "product of Gaussians".

Algebra of random variables

The product is one type of algebra for random variables. Related to the product distribution are the ratio distribution, sum distribution (see List of convolutions of probability distributions) and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's 1979 book The Algebra of Random Variables.

If X and Y are two independent, continuous random variables described by probability density functions f_X and f_Y, then the probability density function of Z = XY is

    f_Z(z) = \int_{-\infty}^{\infty} f_X(x) \, f_Y(z/x) \, \frac{1}{|x|} \, dx .

We first write the cumulative distribution function of Z, starting from its definition:

    F_Z(z) = P(Z \le z) = \int_{0}^{\infty} f_X(x) \int_{-\infty}^{z/x} f_Y(y) \, dy \, dx + \int_{-\infty}^{0} f_X(x) \int_{z/x}^{\infty} f_Y(y) \, dy \, dx .

We find the desired probability density function by taking the derivative of both sides with respect to z. Since, on the right-hand side, z appears only in the integration limits, the derivative is easily performed using the fundamental theorem of calculus and the chain rule. (Note the negative sign that is needed when the variable occurs in the lower limit of integration.)

    f_Z(z) = \int_{0}^{\infty} f_X(x) \, f_Y(z/x) \, \frac{1}{x} \, dx - \int_{-\infty}^{0} f_X(x) \, f_Y(z/x) \, \frac{1}{x} \, dx = \int_{-\infty}^{\infty} f_X(x) \, f_Y(z/x) \, \frac{1}{|x|} \, dx ,

where the absolute value is used to conveniently combine the two terms.

A faster, more compact proof begins with the same step of writing the cumulative distribution of Z from its definition:

    F_Z(z) = P(XY \le z) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_X(x) \, f_Y(y) \, \theta(z - xy) \, dy \, dx ,

where \theta is the Heaviside step function, which serves to limit the region of integration to values of x and y satisfying xy \le z. We find the desired probability density function by taking the derivative of both sides with respect to z:

    f_Z(z) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_X(x) \, f_Y(y) \, \delta(z - xy) \, dy \, dx = \int_{-\infty}^{\infty} f_X(x) \, f_Y(z/x) \, \frac{1}{|x|} \, dx ,

where we utilize the translation and scaling properties of the Dirac delta function \delta.
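The integral formula above can be checked numerically. The sketch below (not from the original text; it assumes X and Y are standard normal, a choice made purely for illustration) evaluates f_Z(z) = \int f_X(x) f_Y(z/x) / |x| dx on a grid and compares it against a Monte Carlo histogram of simulated products:

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def norm_pdf(t):
    """Standard normal density: exp(-t^2/2) / sqrt(2*pi)."""
    return np.exp(-0.5 * t * t) / SQRT_2PI

def product_pdf(z, dx=1e-3, lim=10.0):
    """Numerically evaluate f_Z(z) = int f_X(x) f_Y(z/x) / |x| dx.

    The integrand has an integrable singularity at x = 0, so the grid
    skips a small neighbourhood of zero (negligible unless |z| is tiny).
    """
    x = np.arange(-lim, lim, dx)
    x = x[np.abs(x) > dx]  # avoid division by values near zero
    integrand = norm_pdf(x) * norm_pdf(z / x) / np.abs(x)
    return float(np.sum(integrand) * dx)

# Monte Carlo cross-check: histogram of Z = X * Y for independent X, Y ~ N(0, 1)
rng = np.random.default_rng(0)
n = 1_000_000
z_samples = rng.standard_normal(n) * rng.standard_normal(n)

counts, edges = np.histogram(z_samples, bins=np.linspace(-1.0, 1.0, 21))
width = edges[1] - edges[0]
hist = counts / (n * width)               # empirical density per bin
centers = 0.5 * (edges[:-1] + edges[1:])

for c, h in zip(centers, hist):
    print(f"z = {c:+.2f}   simulated {h:.3f}   integral {product_pdf(c):.3f}")
```

The two columns agree away from z = 0; near zero the exact density of a product of standard normals diverges logarithmically, so a histogram bin straddling the origin averages over the spike and reads slightly differently from the point value.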

Related courses (32)
MATH-232: Probability and statistics (for IC)
A basic course in probability and statistics
COM-417: Advanced probability and applications
In this course, various aspects of probability theory are considered. The first part is devoted to the main theorems in the field (law of large numbers, central limit theorem, concentration inequalities).
MATH-230: Probability
The course is an introduction to probability theory. The aim is to introduce the modern formalism (based on the notion of measure) and to link it to the "intuitive" side of probability.
Related lectures (228)
Large Deviations Principle: Cramer's Theorem
Covers Cramer's theorem and Hoeffding's inequality in the context of the large deviations principle.
Probability Theory: Laws and Convergence
Covers Borel-Cantelli lemmas, laws of random variables, and convergence in probability.
Concentration Inequalities: Hoeffding's Inequality
Covers Hoeffding's inequality and concentration inequalities with a focus on sequences of random variables.
Related concepts (2)
Sum of normally distributed random variables
In probability theory, calculation of the sum of normally distributed random variables is an instance of the arithmetic of random variables. This is not to be confused with the sum of normal distributions, which forms a mixture distribution. Let X and Y be independent random variables that are normally distributed (and therefore also jointly so); then their sum is also normally distributed: if X ~ N(\mu_X, \sigma_X^2) and Y ~ N(\mu_Y, \sigma_Y^2), then X + Y ~ N(\mu_X + \mu_Y, \sigma_X^2 + \sigma_Y^2). This means that the sum of two independent normally distributed random variables is normal, with its mean being the sum of the two means and its variance being the sum of the two variances.
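The mean/variance rule for sums of independent normals is easy to verify by simulation. In this sketch (not from the original text) the parameter values are arbitrary illustrative choices:

```python
import numpy as np

# Empirical check that X + Y ~ N(mu_x + mu_y, sigma_x^2 + sigma_y^2)
# for independent normal X and Y. Parameters chosen for illustration.
rng = np.random.default_rng(1)
mu_x, sigma_x = 1.0, 2.0
mu_y, sigma_y = -3.0, 4.0
n = 1_000_000

z = rng.normal(mu_x, sigma_x, n) + rng.normal(mu_y, sigma_y, n)

print(z.mean())   # close to mu_x + mu_y = -2
print(z.var())    # close to sigma_x^2 + sigma_y^2 = 20
```

Note that independence matters: for correlated normals the variance of the sum picks up an extra 2 Cov(X, Y) term.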
Log-normal distribution
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values.
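The defining relationship X = exp(Y) can likewise be illustrated by simulation. In this sketch (not from the original text; mu and sigma are arbitrary illustrative values), taking the log of the samples recovers the underlying normal parameters, and the samples are strictly positive:

```python
import numpy as np

# If Y ~ N(mu, sigma^2), then X = exp(Y) is log-normally distributed.
rng = np.random.default_rng(2)
mu, sigma = 0.5, 0.75          # illustrative parameters
y = rng.normal(mu, sigma, 1_000_000)
x = np.exp(y)

print(x.min() > 0)                        # log-normal values are strictly positive
print(np.log(x).mean(), np.log(x).std())  # close to mu and sigma
# The mean of X itself is exp(mu + sigma^2 / 2), not exp(mu):
print(x.mean())
```

The last line highlights a common pitfall: exponentiating the mean of Y underestimates the mean of X, because the exponential is convex (Jensen's inequality).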
