
# Multivariate normal distribution

## Summary

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.
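As a quick numerical illustration of the linear-combination definition above, the sketch below (all parameters made up for demonstration) compares the theoretical mean and variance of a linear combination $a^{\mathrm T}X$ against Monte Carlo estimates:

```python
import numpy as np

# Illustrative check (assumed parameters): if X ~ N(mu, Sigma), then for any
# fixed vector a, the linear combination a @ X is univariate normal with
# mean a @ mu and variance a @ Sigma @ a.
rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, -0.2],
                  [0.1, -0.2, 0.5]])
a = np.array([0.5, 1.0, -1.5])

# Theoretical moments of a @ X
mean_theory = a @ mu
var_theory = a @ Sigma @ a

# Empirical check via sampling
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
y = samples @ a
print(mean_theory, y.mean())   # the two should be close
print(var_theory, y.var())     # likewise
```

With enough samples the empirical moments match the theoretical ones closely, which is exactly what the definition promises for every choice of `a`.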
## Definitions

### Notation and parameterization
The multivariate normal distribution of a $k$-dimensional random vector $\mathbf{X} = (X_1,\ldots,X_k)^{\mathrm T}$ can be written in the following notation:

$$\mathbf{X}\ \sim\ \mathcal{N}(\boldsymbol\mu,\, \boldsymbol\Sigma).$$
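A practical consequence of this parameterization is the standard sampling construction: if $\boldsymbol\Sigma = LL^{\mathrm T}$ is a Cholesky factorization and $Z$ is a vector of independent standard normals, then $\boldsymbol\mu + LZ \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$. A minimal sketch, with assumed parameters:

```python
import numpy as np

# Sketch: draw from N(mu, Sigma) via the affine construction X = mu + L @ Z,
# where Sigma = L @ L.T is the Cholesky factorization and Z collects
# independent standard normals. Parameters below are illustrative.
rng = np.random.default_rng(42)

mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])

L = np.linalg.cholesky(Sigma)          # lower triangular, L @ L.T == Sigma
Z = rng.standard_normal((100_000, 2))  # iid standard normal draws, one row per sample
X = mu + Z @ L.T                       # each row is one draw from N(mu, Sigma)

print(X.mean(axis=0))   # approximately mu
print(np.cov(X.T))      # approximately Sigma
```

This is essentially what library routines such as `numpy.random.Generator.multivariate_normal` do internally (up to the choice of matrix factorization).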



## Related courses (85)

MATH-131: Probability and statistics

This course presents the basic notions of probability theory and statistical inference. The emphasis is on the main concepts as well as the most widely used methods.

MICRO-110: Design of experiments

This course provides an introduction to experimental statistics, including the use of population statistics to characterize experimental results, the use of comparison statistics and hypothesis testing to evaluate the validity of experiments, and the design, application, and analysis of multifactorial experiments.

FIN-407: Financial econometrics

This course aims to give an introduction to the application of machine learning to finance. These techniques gained popularity due to the limitations of traditional financial econometric methods in tackling big data. We will review and compare traditional methods and machine learning algorithms.


## Related concepts (79)

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$.

In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.

In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events.


## Related publications

Jonathan A. Tawn, Jennifer Lynne Wadsworth

Existing theory for multivariate extreme values focuses upon characterizations of the distributional tails when all components of a random vector, standardized to identical margins, grow at the same rate. In this paper, we consider the effect of allowing the components to grow at different rates, and characterize the link between these marginal growth rates and the multivariate tail probability decay rate. Our approach leads to a whole class of univariate regular variation conditions, in place of the single but multivariate regular variation conditions that underpin the current theories. These conditions are indexed by a homogeneous function and an angular dependence function, which, for asymptotically independent random vectors, mirror the role played by the exponent measure and Pickands' dependence function in classical multivariate extremes. We additionally offer an inferential approach to joint survivor probability estimation. The key feature of our methodology is that extreme set probabilities can be estimated by extrapolating upon rays emanating from the origin when the margins of the variables are exponential. This offers an appreciable improvement over existing techniques where extrapolation in exponential margins is upon lines parallel to the diagonal.

Jonathan A. Tawn, Jennifer Lynne Wadsworth

Max-stable processes arise as the only possible nontrivial limits for maxima of affinely normalized identically distributed stochastic processes, and thus form an important class of models for the extreme values of spatial processes. Until recently, inference for max-stable processes has been restricted to the use of pairwise composite likelihoods, due to intractability of higher-dimensional distributions. In this work we consider random fields that are in the domain of attraction of a widely used class of max-stable processes, namely those constructed via manipulation of log-Gaussian random functions. For this class, we exploit limiting d-dimensional multivariate Poisson process intensities of the underlying process for inference on all d-vectors exceeding a high marginal threshold in at least one component, employing a censoring scheme to incorporate information below the marginal threshold. We also consider the d-dimensional distributions for the equivalent max-stable process, and perform full likelihood inference by exploiting the methods of Stephenson & Tawn (2005), where information on the occurrence times of extreme events is shown to dramatically simplify the likelihood. The Stephenson-Tawn likelihood is in fact simply a special case of the censored Poisson process likelihood. We assess the improvements in inference from both methods over pairwise likelihood methodology by simulation.

Since the 2008 Global Financial Crisis, the financial market has become more unpredictable than ever before, and it seems set to remain so in the foreseeable future. This means an investor faces unprecedented risks, hence the increasing need for robust portfolio optimization to protect against uncertainty, which can be devastating if left unaddressed yet is ignored in the classical Markowitz model; another deficiency of that model is the absence of higher moments in its assumption on the distribution of asset returns. We establish an equivalence between the Markowitz model and the portfolio return value-at-risk optimization problem under multivariate normality of asset returns, so that we can add these excluded features into the former implicitly by incorporating them into the latter. We also provide a probabilistic smoothing spline approximation method and a deterministic model within the location-scale framework under elliptical distribution of the asset returns to solve the robust portfolio return value-at-risk optimization problem. In particular, for the deterministic model, we introduce a novel eigendecomposition uncertainty set which lives in the positive definite space for the scale matrix without compromising the computational complexity and conservativeness of the optimization problem, invent a method to determine the size of the involved uncertainty sets, test it on real data, and explore its diversification properties. Although the value-at-risk has been the standard risk measure adopted by the banking and insurance industry since the early nineties, it has since attracted many criticisms, in particular from McNeil et al. (2005) and the Basel Committee on Banking Supervision in 2012, also known as Basel 3.5. Basel 4 even suggests a move away from the "what" value-at-risk to the "what-if" conditional value-at-risk measure. We shall see that the former may easily be replaced with the latter, or even other risk measures, in our formulations.
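The equivalence this abstract builds on rests on a standard fact: under multivariate normal returns $R \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$, the portfolio return $w^{\mathrm T}R$ is univariate normal, so its value-at-risk reduces to a closed mean-variance expression. The sketch below illustrates that textbook computation with invented numbers; it is not the authors' method:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical illustration: under multivariate normal returns, the portfolio
# return w @ R ~ N(w @ mu, w @ Sigma @ w), so the alpha-level value-at-risk
# (in loss units) is  -w @ mu + Phi^{-1}(alpha) * sqrt(w @ Sigma @ w).
# All parameters below are made up for demonstration.
mu = np.array([0.05, 0.08, 0.03])        # assumed expected asset returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.02]])   # assumed covariance of asset returns
w = np.array([0.4, 0.3, 0.3])            # portfolio weights

mean_p = w @ mu                          # portfolio mean return
std_p = np.sqrt(w @ Sigma @ w)           # portfolio standard deviation

alpha = 0.95
var_95 = -mean_p + norm.ppf(alpha) * std_p   # 95% value-at-risk
print(f"95% VaR: {var_95:.4f}")
```

Because the normal VaR is just a linear function of the portfolio mean and standard deviation, minimizing it over `w` trades off the same two quantities as the Markowitz model, which is the source of the equivalence.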