# Characteristic function (probability theory)

Summary

In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.
In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases.
The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.
The characteristic function is a way to describe a random variable. The characteristic function

φ_X(t) = E[e^{itX}],

a function of t, completely determines the behavior and properties of the probability distribution of the random variable X. In this respect it is similar to the cumulative distribution function

F_X(x) = E[1{X ≤ x}]

(where 1{X ≤ x} is the indicator function, equal to 1 when X ≤ x and to 0 otherwise), which also completely determines the behavior and properties of the probability distribution of X. The two approaches are equivalent in the sense that, knowing one of the functions, it is always possible to find the other, yet they provide different insights into the features of the random variable. Moreover, in particular cases there can be differences in whether these functions can be represented as expressions involving simple standard functions.
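The defining expectation E[e^{itX}] can be estimated directly from samples. The following is a minimal Python sketch (helper names such as `empirical_cf` are illustrative, not from any particular library) comparing a Monte Carlo estimate of the characteristic function of a normal random variable with its closed form exp(iμt − σ²t²/2):

```python
import cmath
import random

random.seed(0)

mu, sigma = 1.0, 2.0
samples = [random.gauss(mu, sigma) for _ in range(200_000)]

def empirical_cf(xs, t):
    # Monte Carlo estimate of phi(t) = E[exp(i t X)]
    return sum(cmath.exp(1j * t * x) for x in xs) / len(xs)

def normal_cf(t):
    # Closed form for N(mu, sigma^2): exp(i mu t - sigma^2 t^2 / 2)
    return cmath.exp(1j * mu * t - 0.5 * sigma**2 * t**2)

for t in (0.0, 0.5, 1.0):
    # The empirical estimate should be close to the exact value
    assert abs(empirical_cf(samples, t) - normal_cf(t)) < 0.01
```

Note that the estimator is an average of bounded complex numbers (|e^{itX}| = 1), so it is well defined for every t even when moments of X do not exist.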


Related concepts (68)

Related courses (30)

Related lectures (275)

COM-417: Advanced probability and applications

In this course, various aspects of probability theory are considered. The first part is devoted to the main theorems in the field (law of large numbers, central limit theorem, concentration inequalities).

MATH-232: Probability and statistics

A basic course in probability and statistics

COM-300: Stochastic models in communication

The objective of this course is mastery of the tools of stochastic processes useful for an engineer working in the fields of communication systems, data science, and …

Law of large numbers

In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and tends to become closer to the expected value as more trials are performed. The LLN is important because it guarantees stable long-term results for the averages of some random events.
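The LLN is easy to observe numerically. As a small illustrative Python sketch (the `sample_mean` helper is made up for this example), the average of rolls of a fair die, whose expected value is 3.5, gets closer to 3.5 as the number of rolls grows:

```python
import random

random.seed(1)

def sample_mean(n):
    # Average of n rolls of a fair six-sided die
    return sum(random.randint(1, 6) for _ in range(n)) / n

small = sample_mean(100)        # noisy: only 100 trials
large = sample_mean(1_000_000)  # close to the expected value 3.5

assert abs(large - 3.5) < 0.01
```

With a million rolls the standard error of the mean is about 0.0017, so the observed average sits well within 0.01 of the expected value.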

Lévy's continuity theorem

In probability theory, Lévy’s continuity theorem, or Lévy's convergence theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of a sequence of random variables with pointwise convergence of their characteristic functions. This theorem is the basis for one approach to proving the central limit theorem and is one of the major theorems concerning characteristic functions. Suppose we have a sequence of random variables (X_n) with characteristic functions φ_n. If the sequence of characteristic functions converges pointwise to some function φ, i.e. φ_n(t) → φ(t) for every t ∈ ℝ, then the following statements are equivalent: X_n converges in distribution to some random variable X; the sequence (X_n) is tight; φ is the characteristic function of some random variable X; φ is continuous at t = 0. Rigorous proofs of this theorem are available.
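This is the mechanism behind the characteristic-function proof of the central limit theorem: for a standardized sum of n i.i.d. variables, the characteristic function is φ(t/√n)^n, which converges pointwise to the standard normal characteristic function exp(−t²/2). A small Python sketch (helper names are illustrative) for uniform variables scaled to mean 0 and variance 1:

```python
import math

def uniform_cf(t, a=math.sqrt(3.0)):
    # CF of Uniform(-a, a) is sin(a t)/(a t); for a = sqrt(3) the variance is a^2/3 = 1
    if t == 0.0:
        return 1.0
    return math.sin(a * t) / (a * t)

def standardized_sum_cf(t, n):
    # CF of (X_1 + ... + X_n)/sqrt(n) for i.i.d. X_j: phi(t/sqrt(n))**n
    return uniform_cf(t / math.sqrt(n)) ** n

t = 1.3
limit = math.exp(-t**2 / 2)  # standard normal characteristic function at t

err_10 = abs(standardized_sum_cf(t, 10) - limit)
err_1000 = abs(standardized_sum_cf(t, 1000) - limit)

assert err_1000 < err_10   # convergence: the error shrinks as n grows
assert err_1000 < 1e-3
```

Since exp(−t²/2) is continuous at 0, Lévy's theorem upgrades this pointwise convergence of characteristic functions to convergence in distribution of the standardized sums.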

Infinite divisibility (probability)

In probability theory, a probability distribution is infinitely divisible if it can be expressed as the probability distribution of the sum of an arbitrary number of independent and identically distributed (i.i.d.) random variables. The characteristic function of any infinitely divisible distribution is then called an infinitely divisible characteristic function. More rigorously, the probability distribution F is infinitely divisible if, for every positive integer n, there exist n i.i.d. random variables Xn1, ..., Xnn whose sum Xn1 + ... + Xnn has the distribution F.
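Because the characteristic function of an i.i.d. sum is the product of the summands' characteristic functions, F is infinitely divisible exactly when its characteristic function is, for every n, the nth power of some characteristic function. The Poisson distribution, whose characteristic function is exp(λ(e^{it} − 1)), is a standard example; a minimal Python sketch (illustrative helper name) checking the identity [φ_{λ/n}(t)]^n = φ_λ(t):

```python
import cmath

def poisson_cf(t, lam):
    # Characteristic function of Poisson(lambda): exp(lambda * (exp(i t) - 1))
    return cmath.exp(lam * (cmath.exp(1j * t) - 1.0))

lam, n, t = 4.0, 7, 0.9

# A sum of n i.i.d. Poisson(lam/n) variables is Poisson(lam),
# so the n-fold product of the component CFs equals the CF of Poisson(lam).
product = poisson_cf(t, lam / n) ** n
assert abs(product - poisson_cf(t, lam)) < 1e-12
```

The same factorization argument shows that normal, gamma, and Cauchy distributions are infinitely divisible, while any distribution with bounded non-degenerate support is not.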

Information Measures (COM-406: Foundations of Data Science)

Covers information measures like entropy, Kullback-Leibler divergence, and data processing inequality, along with probability kernels and mutual information.

Information Measures (COM-406: Foundations of Data Science)

Covers information measures like entropy and Kullback-Leibler divergence.

Exponential Family Models: Statistical Inference (MATH-562: Statistical inference)

Covers exponential family models and their statistical properties, including canonical statistics and cumulant-generating functions.