In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. Any covariance matrix is symmetric and positive semi-definite, and its main diagonal contains variances (i.e., the covariance of each element with itself).

Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the $x$ and $y$ directions contain all of the necessary information; a $2 \times 2$ matrix would be necessary to fully characterize the two-dimensional variation.

The covariance matrix of a random vector $\mathbf{X}$ is typically denoted by $\operatorname{K}_{\mathbf{X}\mathbf{X}}$, $\Sigma$ or $S$. Throughout this article, boldfaced unsubscripted $\mathbf{X}$ and $\mathbf{Y}$ are used to refer to random vectors, and Roman subscripted $X_i$ and $Y_i$ are used to refer to scalar random variables.

If the entries in the column vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^{\mathsf{T}}$ are random variables, each with finite variance and expected value, then the covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is the matrix whose $(i,j)$ entry is the covariance $\operatorname{K}_{X_i X_j} = \operatorname{cov}[X_i, X_j] = \operatorname{E}[(X_i - \operatorname{E}[X_i])(X_j - \operatorname{E}[X_j])]$, where the operator $\operatorname{E}$ denotes the expected value (mean) of its argument.

Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications, call the matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ the variance of the random vector $\mathbf{X}$, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $\mathbf{X}$. Both forms are quite standard, and there is no ambiguity between them. The matrix is also often called the variance–covariance matrix, since the diagonal terms are in fact variances.

By comparison, the notation for the cross-covariance matrix between two vectors is $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^{\mathsf{T}}]$.

The auto-covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is related to the autocorrelation matrix $\operatorname{R}_{\mathbf{X}\mathbf{X}}$ by $\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathsf{T}}] = \operatorname{R}_{\mathbf{X}\mathbf{X}} - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathsf{T}}$, where the autocorrelation matrix is defined as $\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathsf{T}}]$.
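As a concrete illustration of these definitions, the sketch below estimates a covariance matrix from samples and checks the properties stated above: symmetry, variances on the diagonal, and non-negative eigenvalues (positive semi-definiteness). This is a minimal sketch using NumPy; the particular mean and covariance values are made up for the example.

import numpy as np

rng = np.random.default_rng(0)

# Draw 1000 samples of a 2-dimensional random vector with a known covariance.
true_cov = np.array([[2.0, 0.8],
                     [0.8, 1.0]])
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_cov, size=1000)

# Sample covariance matrix: K[i, j] estimates cov[X_i, X_j].
K = np.cov(samples, rowvar=False)

assert np.allclose(K, K.T)                    # symmetric
print("variances:", np.diag(K))               # diagonal holds the variances
print("eigenvalues:", np.linalg.eigvalsh(K))  # all non-negative (PSD)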

Related courses (32)
MATH-231: Probability and statistics I
Introduction to notions of probability and basic statistics.
MATH-342: Time series
A first course in statistical time series analysis and applications.
MATH-444: Multivariate statistics
Multivariate statistics focuses on inferring the joint distributional properties of several random variables, seen as random vectors, with a main focus on uncovering their underlying dependence structure.
Related lectures (164)
Principal Component Analysis: Covariance Matrix and Eigenvalues
Explores Principal Component Analysis, focusing on maximizing variance and finding eigenvalues (a code sketch of this idea follows after this list).
Conditional Gaussian Generation
Explores the generation of multivariate Gaussian distributions and the challenges of factorizing covariance matrices (see the factorization sketch after this list).
Joint Distribution of Gaussian Random Vectors
Explores the criteria for Gaussian random vectors to have a joint PDF.
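Tying the PCA lecture above to the covariance matrix itself: the direction of maximal variance is the eigenvector of the covariance matrix belonging to the largest eigenvalue. A minimal sketch, assuming only NumPy and illustrative data:

import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[3.0, 1.2], [1.2, 1.0]], size=500)

# Center the data and form the sample covariance matrix.
Xc = X - X.mean(axis=0)
K = Xc.T @ Xc / (len(X) - 1)

# eigh returns eigenvalues in ascending order for a symmetric matrix,
# so the last column of W is the direction of maximal variance.
eigenvalues, W = np.linalg.eigh(K)
first_pc = W[:, -1]
print("variance along the first principal component:", eigenvalues[-1])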
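The covariance factorization mentioned in the conditional-Gaussian lecture is commonly done with a Cholesky factor: if K = L Lᵀ and Z is standard normal, then mean + L Z has covariance K. A hedged sketch of that standard construction (the numbers are illustrative):

import numpy as np

rng = np.random.default_rng(2)
mean = np.array([1.0, -2.0])
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Factor K = L @ L.T; cholesky requires K to be positive definite
# (a merely positive semi-definite K would need an eigendecomposition).
L = np.linalg.cholesky(K)

# Transform standard-normal samples so that cov(mean + L @ Z) = K.
Z = rng.standard_normal((2, 10000))
X = mean[:, None] + L @ Z
print(np.cov(X))  # approximately K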
Related concepts (33)
Covariance
In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. A small numerical illustration of this sign convention appears after this list.
Eigenvalues and eigenvectors
In linear algebra, an eigenvector (ˈaɪgənˌvɛktər) or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it. The corresponding eigenvalue, often represented by $\lambda$, is the multiplying factor. Geometrically, a transformation matrix rotates, stretches, or shears the vectors it acts upon. The eigenvectors for a linear transformation matrix are the set of vectors that are only stretched, with no rotation or shear. A short check of this defining property appears after this list.
Multivariate random variable
In probability and statistics, a multivariate random variable or random vector is a list or vector of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system; often they represent different properties of an individual statistical unit.
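As promised above, a small numerical illustration of the sign convention for covariance, using NumPy with made-up data:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y_similar = np.array([2.0, 4.0, 5.0, 8.0])   # tends to rise with x
y_opposite = np.array([8.0, 5.0, 4.0, 2.0])  # tends to fall as x rises

# np.cov returns the 2x2 covariance matrix of its two arguments;
# the off-diagonal entry is cov(x, y).
print(np.cov(x, y_similar)[0, 1])   # positive: similar behavior
print(np.cov(x, y_opposite)[0, 1])  # negative: opposite behavior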
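And a short check of the defining property $A v = \lambda v$ for each eigenpair, again a sketch with an arbitrary symmetric matrix:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Each column of V is an eigenvector; A stretches it by the matching eigenvalue.
eigenvalues, V = np.linalg.eig(A)
for lam, v in zip(eigenvalues, V.T):
    assert np.allclose(A @ v, lam * v)
print(eigenvalues)  # 3.0 and 1.0, up to ordering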