
# Positive-definite kernel

Summary

In operator theory, a branch of mathematics, a positive-definite kernel is a generalization of a positive-definite function or a positive-definite matrix. It was first introduced by James Mercer in the early 20th century, in the context of solving integral operator equations. Since then, positive-definite functions and their various analogues and generalizations have arisen in diverse parts of mathematics. They occur naturally in Fourier analysis, probability theory, operator theory, complex function theory, moment problems, integral equations, boundary-value problems for partial differential equations, machine learning, embedding problems, information theory, and other areas.
Definition
Let \mathcal X be a nonempty set, sometimes referred to as the index set. A symmetric function K: \mathcal X \times \mathcal X \to \mathbb{R} is called a positive-definite (p.d.) kernel on \mathcal X if

\sum_{i=1}^n \sum_{j=1}^n c_i c_j K(x_i, x_j) \ge 0

holds for every n \in \mathbb{N}, x_1, \dots, x_n \in \mathcal X and c_1, \dots, c_n \in \mathbb{R}.
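Concretely, this says that the Gram matrix (K(x_i, x_j))_{i,j=1}^n must be positive semi-definite for every finite choice of points. Below is a minimal numerical sketch of that check, assuming the Gaussian (RBF) kernel and random sample points purely for illustration (neither is prescribed by the definition):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel, a standard example of a p.d. kernel on R^d."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

# Sample points x_1, ..., x_n from the index set (here R^2).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(10, 2))

# Gram matrix K_ij = K(x_i, x_j).
K = np.array([[rbf_kernel(xi, xj) for xj in X] for xi in X])

# The defining inequality sum_ij c_i c_j K(x_i, x_j) >= 0 for all real c
# is exactly positive semi-definiteness of K, i.e. all eigenvalues are
# nonnegative (up to floating-point rounding).
eigenvalues = np.linalg.eigvalsh(K)
print("smallest eigenvalue:", eigenvalues.min())  # expected: >= -1e-10
```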



Related concepts (7)

Kernel trick

In machine learning, the kernel trick is a method that makes it possible to use a linear classifier to solve a non-linear problem. The idea is to transform the representation space of the input data into a higher-dimensional space in which a linear classifier can then be used, as sketched below.
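As a concrete illustration of the trick (the degree-2 polynomial kernel and the sample points below are illustrative choices, not part of the blurb above): evaluating K(x, y) = (x \cdot y)^2 in the input space returns exactly the inner product of an explicit 3-dimensional feature map, so a linear method can work in that feature space without ever constructing it.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel on R^2."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly2_kernel(x, y):
    """The same inner product, computed directly in the 2-D input space."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# Both numbers agree: <phi(x), phi(y)> == K(x, y).
print(np.dot(phi(x), phi(y)))  # 1.0 (up to rounding)
print(poly2_kernel(x, y))      # 1.0
```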

Reproducing kernel Hilbert space

In functional analysis, a reproducing kernel Hilbert space is a Hilbert space of functions in which all the evaluation maps f \mapsto f(x) are continuous linear functionals.
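A small numerical sketch of that continuity, assuming an RBF kernel and random points purely for illustration: for f = \sum_j a_j K(\cdot, z_j), the reproducing property together with the Cauchy-Schwarz inequality gives |f(x)| \le \|f\|_H \sqrt{K(x, x)}.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

rng = np.random.default_rng(seed=1)
Z = rng.normal(size=(5, 2))   # centers z_1, ..., z_m
a = rng.normal(size=5)        # coefficients of f = sum_j a_j K(., z_j)
x = rng.normal(size=2)        # evaluation point

G = np.array([[rbf(zi, zj) for zj in Z] for zi in Z])   # Gram matrix
f_norm = np.sqrt(a @ G @ a)                             # ||f||_H
f_x = sum(aj * rbf(zj, x) for aj, zj in zip(a, Z))      # f(x)

# Boundedness of the evaluation functional f -> f(x):
print(abs(f_x) <= f_norm * np.sqrt(rbf(x, x)))  # True
```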

Integral transform

In mathematics, an integral transform maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space.
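As a rough sketch of that pattern (the Laplace kernel, test function and quadrature grid below are all illustrative assumptions, not from the text): once a kernel K(t, u) is fixed, the transform (Tf)(u) = \int K(t, u) f(t) dt can be approximated by simple quadrature.

```python
import numpy as np

def transform(f, kernel, u, t):
    # Trapezoidal approximation of (T f)(u) = integral of K(t, u) f(t) dt.
    vals = kernel(t, u) * f(t)
    dt = t[1] - t[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# Example: the Laplace transform corresponds to the kernel K(t, s) = exp(-s*t).
laplace_kernel = lambda t, s: np.exp(-s * t)
f = lambda t: np.exp(-t)             # Laplace transform of e^{-t} is 1/(s + 1)

t = np.linspace(0.0, 50.0, 200_001)  # truncation of the domain [0, infinity)
print(transform(f, laplace_kernel, 2.0, t))  # ~0.3333 = 1/(2 + 1)
```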

Related courses (3)

Participants of this course will master computational techniques frequently used in mathematical finance applications. Emphasis will be put on the implementation and practical aspects.

We develop, analyze and implement numerical algorithms to solve optimization problems of the form: min f(x) where x is a point on a smooth manifold. To this end, we first study differential and Riemannian geometry (with a focus dictated by pragmatic concerns). We also discuss several applications.

A course on statistical machine learning for supervised and unsupervised learning

Related publications (8)

Machine Learning is a modern and actively developing field of computer science, devoted to extracting and estimating dependencies from empirical data. It combines fields such as statistics, optimization theory and artificial intelligence. In practical tasks, the general aim of Machine Learning is to construct algorithms able to generalize and predict in previously unseen situations based on some set of examples. Given some finite information, Machine Learning provides ways to extract knowledge, describe, explain and predict from data. Kernel Methods are one of the most successful branches of Machine Learning. They allow applying linear algorithms, with well-founded properties such as generalization ability, to non-linear real-life problems. The Support Vector Machine is a well-known example of a kernel method which has found a wide range of applications in data analysis.

In many practical applications, additional prior knowledge is often available. This can be knowledge about the data domain, invariant transformations, inner geometric structures in the data, properties of the underlying process, etc. Used smartly, this information can significantly improve any data processing algorithm, so it is important to develop methods for incorporating prior knowledge into data-dependent models. The main objective of this thesis is to investigate approaches to learning with kernel methods using prior knowledge; invariant learning with kernel methods is considered in more detail.

In the first part of the thesis, kernels are developed which incorporate prior knowledge on invariant transformations. They apply when the desired transformation produces an object around every example, assuming that all points in the given object share the same class. Different types of objects, including hard geometrical objects and distributions, are considered. These kernels were then applied to image classification with Support Vector Machines.

Next, algorithms which specifically include prior knowledge are considered. An algorithm was developed which linearly classifies distributions by their domain. It is constructed so that kernels can be applied to solve non-linear tasks; it thus combines the discriminative power of support vector machines with the well-developed framework of generative models, and can be applied to a number of real-life tasks in which data are represented as distributions.

In the last part of the thesis, the use of unlabelled data as a source of prior knowledge is considered. The technique of modelling the unlabelled data with a graph is taken as a baseline from semi-supervised manifold learning. For classification problems, we use this approach to build graph models of invariant manifolds. For regression problems, we use unlabelled data to take into account the inner geometry of the input space.

To conclude, in this thesis we developed a number of approaches for incorporating prior knowledge into kernel methods. We proposed invariant kernels for existing algorithms, developed new algorithms, and adapted a technique from semi-supervised learning for invariant learning. In all these cases, links with related state-of-the-art approaches were investigated. Several illustrative experiments were carried out on real data on optical character recognition, face image classification, brain-computer interfaces, and a number of benchmark and synthetic datasets.

Since the 2008 Global Financial Crisis, the financial market has become more unpredictable than ever before, and it seems set to remain so for the foreseeable future. This means an investor faces unprecedented risk, hence the increasing need for robust portfolio optimization to protect against uncertainty, which is potentially devastating if left unattended yet is ignored in the classical Markowitz model; another deficiency of that model is the absence of higher moments in its assumed distribution of asset returns. We establish an equivalence between the Markowitz model and the portfolio return value-at-risk optimization problem under multivariate normality of asset returns, so that we can add these excluded features to the former implicitly by incorporating them into the latter. We also provide a probabilistic smoothing spline approximation method and a deterministic model within the location-scale framework under elliptical distribution of the asset returns to solve the robust portfolio return value-at-risk optimization problem. In particular, for the deterministic model we introduce a novel eigendecomposition uncertainty set for the scale matrix which lives in the positive-definite space without compromising the computational complexity or conservativeness of the optimization problem, devise a method to determine the size of the uncertainty sets involved, test it on real data, and explore its diversification properties. Although value-at-risk has been the standard risk measure adopted by the banking and insurance industry since the early nineties, it has since attracted many criticisms, in particular from McNeil et al. (2005) and the Basel Committee on Banking Supervision in 2012, also known as Basel 3.5. Basel 4 even suggests a move away from the "what" value-at-risk to the "what-if" conditional value-at-risk measure. We shall see that the former may easily be replaced with the latter, or even with other risk measures, in our formulations.

In this article, we establish novel decompositions of Gaussian fields taking values in suitable spaces of generalized functions, and then use these decompositions to prove results about Gaussian multiplicative chaos. We prove two decomposition theorems. The first one is global and says that if the difference between the covariance kernels of two Gaussian fields, taking values in some Sobolev space, has suitable Sobolev regularity, then these fields differ by a Hölder continuous Gaussian process. Our second decomposition theorem is more specialized and is in the setting of Gaussian fields whose covariance kernel has a logarithmic singularity on the diagonal, or log-correlated Gaussian fields. The theorem states that any log-correlated Gaussian field X can be decomposed locally into a sum of a Hölder continuous function and an independent almost *-scale invariant field (a special class of stationary log-correlated fields with 'cone-like' white noise representations). This decomposition holds whenever the term g in the covariance kernel C_X(x, y) = \log(1/|x - y|) + g(x, y) has local H^{d+\epsilon} Sobolev smoothness. We use these decompositions to extend several results that were previously known essentially only for *-scale invariant fields to general log-correlated fields. These include the existence of critical multiplicative chaos, analytic continuation of the subcritical chaos in the so-called inverse temperature parameter \beta, as well as generalised Onsager-type covariance inequalities which play a role in the study of imaginary multiplicative chaos. (2019)
