
# Positive Definite Completions and Continuous Graphical Models

Abstract

This thesis concerns the theory of positive-definite completions and its mutually beneficial connections to the statistics of function-valued or continuously indexed random processes, better known as functional data analysis. In particular, it dwells upon the reproducing-kernel character of covariances. The introduction summarizes the basic ideas and thoughts upon which the thesis is built.

Chapter 1 deals with the problem of covariance completion and develops an intuitive and interpretable approach to covariance estimation for fragmented functional data based on the concept of canonical completion. For a suitably restricted class of domains, we describe how the canonical completion may be constructed and use it to produce a characterization of the set of all completions. Furthermore, we identify necessary and sufficient conditions for uniqueness of completion and for exact recovery of the true covariance. In doing so, we settle many fundamental questions concerning covariance estimation with fragmented functional data.

Chapter 2 considers the problem of positive-definite completion in full generality and, compared to Chapter 1, offers a purely mathematical treatment of the subject. We study the problem for many classes of domains and present results concerning the existence and uniqueness of solutions, their characterization, and the existence and uniqueness of a special solution called the canonical completion. We prove several new variational and algebraic characterizations of the canonical completion. Most importantly, we show the existence of the canonical completion for the class of band-like domains. This leads to the existence of a canonical extension in the context of the classical problem of extensions of positive-definite functions, which is shown to correspond to a strongly continuous one-parameter semigroup and, consequently, to an abstract Cauchy problem.

Chapter 3 presents a rigorous generalization of undirected Gaussian graphical models to arbitrary, possibly uncountable index sets. We prove an inverse-zero characterization for these models, analogous to the one known for multivariate graphical models, and develop a procedure for their estimation based on the notion of resolution. The utility of the concept and method is illustrated through several real-data applications and simulation studies.

Chapter 4 concerns the problem of recovering conditional independence relationships between a finite number of jointly distributed second-order Hilbertian random elements in a sparse high-dimensional regime, with a particular interest in multivariate functional data. We propose an infinite-dimensional generalization of the multivariate graphical lasso and prove model selection consistency under natural assumptions. The method can be motivated from a coherent maximum likelihood philosophy.

Chapter 5 discusses ways in which the results of this thesis can be strengthened or completed, and its ideas and conclusions extended. With the sole exception of Chapter 5, all chapters are independent, self-contained articles and can be read in arbitrary order, although the given order is recommended.
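As a toy illustration of the completion problem the abstract refers to (this sketch is not the thesis's construction): for a 3×3 correlation matrix with a banded specification pattern, the determinant-maximizing positive-definite completion is classically known to fill the missing entry with the product of the adjacent correlations, and its inverse vanishes at the unspecified position — the "inverse zero" phenomenon echoed in Chapter 3.

```python
import numpy as np

# Partially specified 3x3 correlation matrix: entry (1,3) is unknown.
r12, r23 = 0.8, 0.5

# Determinant-maximizing (maximum-entropy) completion for this banded
# pattern fills the missing entry with the product r12 * r23.
r13 = r12 * r23
R = np.array([[1.0, r12, r13],
              [r12, 1.0, r23],
              [r13, r23, 1.0]])

# The completion is positive definite ...
print(np.linalg.eigvalsh(R).min() > 0)   # True

# ... and its inverse vanishes at the unspecified position (1,3):
# the "inverse zero" property of this completion.
print(abs(np.linalg.inv(R)[0, 2]) < 1e-12)   # True
```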


Related MOOCs (16)

Algebra (part 1)

A French-language MOOC on linear algebra, accessible to everyone, taught rigorously and requiring no prerequisites.

Algebra (part 2)

A French-language MOOC on linear algebra, accessible to everyone, taught rigorously and requiring no prerequisites.

Related concepts (50)

Covariance

In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative.
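The sign convention above can be checked numerically on toy data (the numbers here are illustrative, not from the text): variables that move together yield a positive covariance, variables that move oppositely a negative one.

```python
import numpy as np

# Toy data: y moves in the same direction as x, z in the opposite direction.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])    # similar behavior
z = np.array([8.0, 6.0, 4.0, 2.0])    # opposite behavior

# Covariance as the mean product of deviations from the means.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
cov_xz = np.mean((x - x.mean()) * (z - z.mean()))
print(cov_xy)   # 2.5  (positive)
print(cov_xz)   # -2.5 (negative)
```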

Reproducing kernel Hilbert space

In functional analysis (a branch of mathematics), a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which point evaluation is a continuous linear functional. Roughly speaking, this means that if two functions f and g in the RKHS are close in norm, i.e., ||f − g|| is small, then f and g are also pointwise close, i.e., |f(x) − g(x)| is small for all x. The converse need not hold. Informally, this can be seen by looking at the supremum norm: a sequence of functions can converge pointwise while failing to converge uniformly, i.e., with respect to the supremum norm.
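The norm-controls-evaluation property can be verified numerically. For a kernel K, the reproducing property and Cauchy–Schwarz give |f(x) − g(x)| ≤ ||f − g||_H · √K(x, x). A minimal sketch with the Gaussian kernel and functions of the form f = Σᵢ aᵢ K(·, xᵢ) (the kernel, centers, and coefficients below are illustrative choices, not from the text):

```python
import numpy as np

def K(s, t, gamma=1.0):
    # Gaussian kernel; note K(x, x) = 1 for all x.
    return np.exp(-gamma * (s - t) ** 2)

centers = np.array([0.0, 0.5, 1.0])
G = K(centers[:, None], centers[None, :])   # Gram matrix

a = np.array([1.0, -0.5, 0.3])   # coefficients of f
b = np.array([0.8, -0.4, 0.1])   # coefficients of g
c = a - b

# RKHS norm of the difference: ||f - g||_H^2 = c^T G c.
norm_diff = np.sqrt(c @ G @ c)

# Evaluate f and g on a grid and check the pointwise bound
# |f(x) - g(x)| <= ||f - g||_H * sqrt(K(x, x)) = ||f - g||_H here.
xs = np.linspace(-2, 3, 200)
fx = (a[None, :] * K(xs[:, None], centers[None, :])).sum(axis=1)
gx = (b[None, :] * K(xs[:, None], centers[None, :])).sum(axis=1)
print(np.abs(fx - gx).max() <= norm_diff)   # True
```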

Sample mean and covariance

The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables. The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales.
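Both statistics are one-liners in practice. A minimal sketch with made-up numbers (not from the text), using the unbiased n − 1 normalization for the sample covariance:

```python
import numpy as np

# Toy dataset: n = 4 observations of 2 variables.
data = np.array([[2.0, 8.0],
                 [4.0, 6.0],
                 [6.0, 4.0],
                 [8.0, 2.0]])

mean = data.mean(axis=0)                 # sample mean vector
S = np.cov(data, rowvar=False, ddof=1)   # 2x2 sample covariance matrix
print(mean)   # [5. 5.]
print(S)      # diagonal 20/3, off-diagonal -20/3
```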

Related publications (38)

In the rapidly evolving landscape of machine learning research, neural networks stand out with their ever-expanding number of parameters and reliance on increasingly large datasets. The financial cost and computational resources required for the training p ...

Mathieu Salzmann, Alexandre Massoud Alahi, Megh Hiren Shukla

Deep heteroscedastic regression involves jointly optimizing the mean and covariance of the predicted distribution using the negative log-likelihood. However, recent works show that this may result in sub-optimal convergence due to the challenges associated ...
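The objective mentioned in this snippet can be sketched generically (this is the standard per-sample Gaussian negative log-likelihood, not the cited work's specific method): the model predicts both a mean mu(x) and a variance sigma2(x), and both enter the loss.

```python
import numpy as np

def gaussian_nll(y, mu, sigma2):
    # Per-sample Gaussian negative log-likelihood; sigma2 must be positive.
    return 0.5 * (np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

# Illustrative targets and predictions (made-up numbers).
y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.8, 3.2])       # predicted means
sigma2 = np.array([0.5, 0.2, 1.0])   # predicted variances
loss = gaussian_nll(y, mu, sigma2).mean()
print(loss)
```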

2024 · Victor Panaretos, Neda Mohammadi Jouzdani

Is it possible to detect if the sample paths of a stochastic process almost surely admit a finite expansion with respect to some/any basis? The determination is to be made on the basis of a finite collection of discretely/noisily observed sample paths. We ...