In probability theory and statistics, the covariance function describes how much two random variables change together (their covariance) with varying spatial or temporal separation. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x, y) gives the covariance of the values of the random field at the two locations x and y:

C(x, y) = Cov(Z(x), Z(y)) = E[(Z(x) − E[Z(x)]) (Z(y) − E[Z(y)])].
The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that x and y refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(Z(x1), Y(x2))).
For locations x1, x2, ..., xN ∈ D the variance of every linear combination

X = w1 Z(x1) + w2 Z(x2) + ... + wN Z(xN)

can be computed as

Var(X) = Σi Σj wi wj C(xi, xj).
A function is a valid covariance function if and only if this variance is non-negative for all possible choices of N and weights w1, ..., wN. A function with this property is called positive semidefinite.
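This quadratic form is easy to check numerically. The sketch below evaluates the variance of a linear combination as w^T K w for an assumed exponential kernel C(x, y) = exp(−|x − y|) on illustrative 1-D locations and weights; it also verifies positive semidefiniteness via the eigenvalues of the covariance matrix.

```python
import numpy as np

# Assumed kernel for illustration: the exponential covariance on the real line.
def exp_cov(x, y):
    return np.exp(-abs(x - y))

locations = np.array([0.0, 0.5, 1.3, 2.0])   # x1, ..., xN (arbitrary choices)
w = np.array([1.0, -2.0, 0.5, 1.5])          # arbitrary weights w1, ..., wN

# Covariance matrix K with entries K[i, j] = C(xi, xj)
K = np.array([[exp_cov(xi, xj) for xj in locations] for xi in locations])

# Variance of X = sum_i wi Z(xi) is the quadratic form w^T K w
variance = w @ K @ w
print(variance >= 0)  # non-negative for every choice of N and weights

# Equivalently, K must be positive semidefinite: all eigenvalues >= 0
eigenvalues = np.linalg.eigvalsh(K)
print(np.all(eigenvalues >= -1e-12))
```

Both checks agree because w^T K w ≥ 0 for all w is exactly the statement that K has no negative eigenvalues.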
In the case of a weakly stationary random field, where

C(xi, xj) = C(xi + h, xj + h)

for any lag h, the covariance function can be represented by a one-parameter function

Cs(h) = C(0, h),

which is called a covariogram and also a covariance function. Implicitly the C(xi, xj) can be computed from Cs(h) by:

C(xi, xj) = Cs(xj − xi).
The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem.
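Bochner's theorem says a continuous stationary covariogram is positive definite exactly when its Fourier transform (the spectral density) is non-negative. The following is a numerical illustration of that criterion, not a proof: it approximates the spectral density with an FFT for the exponential covariogram (valid) and for an assumed counterexample, a triangle function shifted down by a constant (invalid).

```python
import numpy as np

# Sample the covariogram on a symmetric grid wide enough that it has decayed.
h = np.linspace(-50, 50, 4001)
dh = h[1] - h[0]

cs_exp = np.exp(-np.abs(h))  # exponential covariogram, V = 1
# Discrete approximation of the spectral density (real for even functions)
spectrum = np.fft.fftshift(np.real(np.fft.fft(np.fft.ifftshift(cs_exp)))) * dh
print(np.all(spectrum >= -1e-8))     # non-negative everywhere: valid

# Counterexample: a triangle minus a constant is not a valid covariogram;
# the subtracted constant puts negative spectral mass at frequency zero.
cs_bad = np.maximum(1 - np.abs(h), 0) - 0.3
spectrum_bad = np.fft.fftshift(np.real(np.fft.fft(np.fft.ifftshift(cs_bad)))) * dh
print(np.any(spectrum_bad < -1e-8))  # some negative mass: not valid
```

For the exponential covariogram the exact spectral density is the strictly positive Lorentzian 2/(1 + ω²), which the FFT approximation reproduces up to discretization error.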
For a given variance σ², a simple stationary parametric covariance function is the "exponential covariance function"

C(d) = σ² exp(−d/V),

where V is a scaling parameter (correlation length), and d = d(x, y) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential" (or "Gaussian") covariance function

C(d) = σ² exp(−(d/V)²)
is a stationary covariance function with smooth sample paths.
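The contrast in smoothness can be seen by simulation. This sketch (with assumed values σ² = 1 and V = 1) draws one sample path from a zero-mean Gaussian process under each kernel via a Cholesky factorization and compares the size of the increments between neighbouring points.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 200)
d = np.abs(x[:, None] - x[None, :])   # pairwise distances d(xi, xj)

K_exp = np.exp(-d)                    # exponential: rough sample paths
K_sq = np.exp(-d**2)                  # squared exponential: smooth paths

# Sample a path as L @ z with K = L L^T and z standard normal; a small
# jitter on the diagonal keeps the Cholesky factorization stable.
jitter = 1e-6 * np.eye(len(x))
path_exp = np.linalg.cholesky(K_exp + jitter) @ rng.standard_normal(len(x))
path_sq = np.linalg.cholesky(K_sq + jitter) @ rng.standard_normal(len(x))

# The exponential-kernel path has much larger increments between
# neighbouring grid points than the squared-exponential path.
print(np.mean(np.abs(np.diff(path_exp))) > np.mean(np.abs(np.diff(path_sq))))
```

The difference reflects the behaviour of C(d) near d = 0: the exponential kernel is not differentiable at the origin, while the squared exponential is infinitely differentiable, which translates into rough versus smooth sample paths.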
The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions.
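As a sketch of these two families in plain NumPy (parameter values σ², V and α below are assumed for illustration): the Matérn class has simple closed forms at the half-integer smoothness values ν = 1/2, 3/2, 5/2, and the rational quadratic kernel can be read as a scale mixture of squared exponentials.

```python
import numpy as np

def matern(d, sigma2=1.0, V=1.0, nu=1.5):
    """Matern covariance at the closed-form cases nu in {0.5, 1.5, 2.5}."""
    r = d / V
    if nu == 0.5:
        return sigma2 * np.exp(-r)  # reduces to the exponential covariance
    if nu == 1.5:
        return sigma2 * (1 + np.sqrt(3) * r) * np.exp(-np.sqrt(3) * r)
    if nu == 2.5:
        return sigma2 * (1 + np.sqrt(5) * r + 5 * r**2 / 3) * np.exp(-np.sqrt(5) * r)
    raise ValueError("closed form implemented only for nu = 0.5, 1.5, 2.5")

def rational_quadratic(d, sigma2=1.0, V=1.0, alpha=1.0):
    """Rational quadratic covariance, a scale mixture of squared exponentials."""
    return sigma2 * (1 + d**2 / (2 * alpha * V**2)) ** (-alpha)

d = np.linspace(0, 3, 7)
print(matern(d, nu=0.5))        # identical to the exponential covariance
print(rational_quadratic(d))    # decreases from sigma2 at d = 0
```

The smoothness parameter ν of the Matérn family interpolates between the rough exponential kernel (ν = 1/2) and the smooth squared exponential (the limit ν → ∞).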
In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space.
In statistics, originally in geostatistics, kriging or Kriging (pronounced /ˈkriːɡɪŋ/), also known as Gaussian process regression, is a method of interpolation based on a Gaussian process governed by prior covariances. Under suitable assumptions of the prior, kriging gives the best linear unbiased prediction (BLUP) at unsampled locations. Interpolating methods based on other criteria such as smoothness (e.g., smoothing spline) may not yield the BLUP. The method is widely used in the domain of spatial analysis and computer experiments.
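A minimal simple-kriging sketch, assuming a known zero mean and a squared exponential prior covariance with σ² = 1 and V = 1 (all data below are illustrative): the BLUP weights solve a linear system in the covariance matrix of the observations.

```python
import numpy as np

# Assumed prior covariance: squared exponential with sigma2 = 1, V = 1
def k(a, b):
    return np.exp(-((a[:, None] - b[None, :])) ** 2)

x_obs = np.array([0.0, 1.0, 2.5, 4.0])   # sampled locations
z_obs = np.sin(x_obs)                    # observed field values (illustrative)
x_new = np.array([1.5, 3.0])             # unsampled prediction locations

K = k(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))  # jitter for stability
k_star = k(x_new, x_obs)                          # cross-covariances

# Simple-kriging predictor: k_star @ K^{-1} @ z_obs (the BLUP for zero mean)
prediction = k_star @ np.linalg.solve(K, z_obs)
# Kriging variance: prior variance minus the variance explained by the data
variance = 1.0 - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
print(prediction)
print(variance)
```

The kriging variance depends only on the locations, not on the observed values, which is why it is often used to plan where to sample next.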
In physics and mathematics, a random field is a random function over an arbitrary domain (usually a multi-dimensional space such as ℝ^n). That is, it is a function that takes on a random value at each point x ∈ ℝ^n (or of some other domain). It is also sometimes thought of as a synonym for a stochastic process with some restriction on its index set. That is, by modern definitions, a random field is a generalization of a stochastic process where the underlying parameter need no longer be real or integer valued "time" but can instead take values that are multidimensional vectors or points on some manifold.