Summary
Sparse dictionary learning (also known as sparse coding or SDL) is a representation learning method that aims to find a sparse representation of the input data as a linear combination of basic elements, as well as those basic elements themselves. These elements are called atoms, and together they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may form an over-complete spanning set. This problem setup also allows the dimensionality of the signals being represented to be higher than that of the signals being observed. These two properties lead to seemingly redundant atoms that allow multiple representations of the same signal, but they also improve the sparsity and flexibility of the representation.

One of the most important applications of sparse dictionary learning is in the field of compressed sensing, or signal recovery. In compressed sensing, a high-dimensional signal can be recovered from only a few linear measurements, provided that the signal is sparse or nearly sparse. Since not all signals satisfy this sparsity condition, it is of great importance to find a sparse representation of the signal, such as its wavelet transform or the directional gradient of a rasterized matrix. Once a matrix or a high-dimensional vector is transferred to a sparse space, different recovery algorithms, such as basis pursuit, CoSaMP, or fast non-iterative algorithms, can be used to recover the signal.

One of the key principles of dictionary learning is that the dictionary has to be inferred from the input data. The emergence of sparse dictionary learning methods was stimulated by the fact that in signal processing one typically wants to represent the input data using as few components as possible. Before this approach, the general practice was to use predefined dictionaries (such as Fourier or wavelet transforms). However, in certain cases a dictionary trained to fit the input data can significantly improve sparsity, which has applications in data decomposition, compression, and analysis, and has been used in the fields of image denoising and classification, and video and audio processing.
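As a minimal sketch of the setup described above (assuming scikit-learn is available; the toy data, atom count, and sparsity level are illustrative choices, not values from this page), one can learn an over-complete dictionary D and sparse codes R so that X ≈ RD, with only a few nonzero coefficients per code:

```python
# A minimal sparse dictionary learning sketch, assuming scikit-learn.
# Toy dimensions and sparsity are illustrative assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Synthetic data: 500 signals of dimension 64, each a sparse combination
# of a hidden ground-truth dictionary (so a sparse model actually fits).
true_dict = rng.standard_normal((100, 64))
codes = rng.standard_normal((500, 100)) * (rng.random((500, 100)) < 0.05)
X = codes @ true_dict

# Learn an over-complete dictionary (100 atoms > 64 dimensions): atoms
# need not be orthogonal, and the representation dimension exceeds the
# observed signal dimension, as described above.
learner = MiniBatchDictionaryLearning(
    n_components=100,              # number of atoms in the dictionary
    transform_algorithm="omp",     # orthogonal matching pursuit coding
    transform_n_nonzero_coefs=5,   # sparsity level of each code
    random_state=0,
)
R = learner.fit_transform(X)       # sparse codes, shape (500, 100)
D = learner.components_            # learned dictionary, shape (100, 64)

reconstruction = R @ D
print("mean nonzeros per code:", (R != 0).sum(axis=1).mean())
print("relative error:",
      np.linalg.norm(X - reconstruction) / np.linalg.norm(X))
```

Here the dictionary is inferred from the data rather than fixed in advance, which is the key difference from predefined Fourier or wavelet dictionaries.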
Related concepts (6)
Compressed sensing
Compressed sensing is a technique for finding the sparsest solution of an underdetermined linear system. It encompasses not only the means of finding this solution but also the linear systems that are admissible. It is also known as compressive sensing, compressed sampling, or sparse sampling.
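As a small, hedged illustration of this recovery problem (not code from this page; dimensions and sparsity are assumptions), a sparse vector can be recovered from far fewer random linear measurements than its ambient dimension, here using orthogonal matching pursuit:

```python
# A minimal sparse-recovery sketch, assuming scikit-learn.
# Dimensions and sparsity level are illustrative assumptions.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(1)

n, m, k = 256, 80, 5               # signal dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # m << n linear measurements

# Greedy search for the sparsest solution of the underdetermined system
x_hat = orthogonal_mp(A, y, n_nonzero_coefs=k)
print("recovery error:", np.linalg.norm(x - x_hat))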
K-means
K-means clustering is a data partitioning method and a combinatorial optimization problem. Given a set of points and an integer k, the problem is to divide the points into k groups, often called clusters, so as to minimize a certain function. One considers the distance from each point to the mean of the points in its cluster; the function to minimize is the sum of the squares of these distances.
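For illustration only (the toy data and k below are assumptions, not from this page), scikit-learn exposes this within-cluster sum of squared distances as the inertia_ attribute:

```python
# A minimal k-means sketch, assuming scikit-learn; the toy data and k
# are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
points = np.vstack([
    rng.standard_normal((50, 2)) + [0, 0],
    rng.standard_normal((50, 2)) + [6, 0],
    rng.standard_normal((50, 2)) + [0, 6],
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
# inertia_ is the objective described above: the sum of squared
# distances from each point to the mean of its cluster.
print("labels:", km.labels_[:10])
print("objective (sum of squared distances):", km.inertia_)
```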
Related courses (7)
CS-448: Sublinear algorithms for big data analysis
In this course we will define rigorous mathematical models for computing on large datasets, cover main algorithmic techniques that have been developed for sublinear (i.e., faster than linear time) data…
EE-726: Sparse stochastic processes
We cover the theory and applications of sparse stochastic processes (SSP). SSP are solutions of differential equations driven by non-Gaussian innovations. They admit a parsimonious representation in a…
NX-414: Brain-like computation and intelligence
Recent advances in machine learning have contributed to the emergence of powerful models for how humans and other animals reason and behave. In this course we will compare and contrast how such brain…
Related lectures (40)
Binary spiked matrix estimation
Explores binary spiked matrix estimation, analyzing consistent equations and Bayesian estimators.
Stochastic gradient descent: optimization techniques
Explores stochastic gradient descent and non-smooth optimization techniques for sparsity and compressed sensing.
Probabilistic models for linear regression
Covers the probabilistic linear regression model and its applications in nuclear magnetic resonance and X-ray imaging.