Summary
Sparse approximation (also known as sparse representation) theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in image processing, signal processing, machine learning, medical imaging, and more.

Consider a linear system of equations $x = D\alpha$, where $D$ is an underdetermined $m \times p$ matrix ($m < p$) and $x \in \mathbb{R}^m$. The matrix $D$ (typically assumed to be full-rank) is referred to as the dictionary, and $x$ is a signal of interest. The core sparse representation problem is defined as the quest for the sparsest possible representation $\alpha$ satisfying $x = D\alpha$. Due to the underdetermined nature of $D$, this linear system admits in general infinitely many possible solutions, and among these we seek the one with the fewest non-zeros. Put formally, we solve

$$\min_{\alpha} \|\alpha\|_0 \quad \text{subject to} \quad x = D\alpha,$$

where $\|\alpha\|_0$ is the $\ell_0$ pseudo-norm, which counts the number of non-zero components of $\alpha$. This problem is known to be NP-hard, with a reduction to NP-complete subset selection problems in combinatorial optimization.

Sparsity of $\alpha$ implies that only a few ($k \ll m < p$) of its components are non-zero. The underlying motivation for such a sparse decomposition is the desire to provide the simplest possible explanation of $x$ as a linear combination of as few columns of $D$ as possible, also referred to as atoms. As such, the signal $x$ can be viewed as a molecule composed of a few fundamental elements taken from $D$.

While the posed problem is indeed NP-hard, its solution can often be found using approximation algorithms. One such option is a convex relaxation of the problem, obtained by using the $\ell_1$-norm instead of $\ell_0$, where $\|\alpha\|_1$ simply sums the absolute values of the entries in $\alpha$. This is known as the basis pursuit (BP) algorithm, which can be handled using any linear programming solver. An alternative approximation method is a greedy technique, such as matching pursuit (MP), which finds the locations of the non-zeros one at a time.
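To make both approximation routes concrete, here is a minimal Python/NumPy sketch; it is not drawn from any particular source, and the helper names (`matching_pursuit`, `basis_pursuit`), the problem sizes, and the unit-norm dictionary are illustrative assumptions. Basis pursuit is posed as a linear program via the standard split $\alpha = u - v$ with $u, v \ge 0$.

```python
import numpy as np
from scipy.optimize import linprog

def matching_pursuit(x, D, k):
    """Greedy MP: pick k atoms one at a time (assumes unit-norm columns of D)."""
    residual = x.astype(float).copy()
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        c = D.T @ residual              # correlation of every atom with the residual
        j = int(np.argmax(np.abs(c)))   # index of the best-matching atom
        alpha[j] += c[j]                # accumulate its amplitude
        residual -= c[j] * D[:, j]      # remove its contribution from the residual
    return alpha

def basis_pursuit(x, D):
    """l1 relaxation solved as a linear program via the split alpha = u - v."""
    p = D.shape[1]
    res = linprog(np.ones(2 * p), A_eq=np.hstack([D, -D]), b_eq=x,
                  bounds=(0, None))
    u, v = res.x[:p], res.x[p:]
    return u - v

# Tiny demo: recover a 3-sparse representation from an underdetermined 20 x 50 system.
rng = np.random.default_rng(0)
m, p, k = 20, 50, 3
D = rng.standard_normal((m, p))
D /= np.linalg.norm(D, axis=0)                 # normalize the atoms
alpha_true = np.zeros(p)
alpha_true[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
x = D @ alpha_true

print(np.flatnonzero(np.abs(basis_pursuit(x, D)) > 1e-6))  # support found by BP
print(np.flatnonzero(matching_pursuit(x, D, k)))           # support found by MP
```

Note that plain MP, as sketched here, only adjusts one coefficient per iteration; orthogonal variants (OMP) re-fit all active coefficients at each step and typically recover the support more reliably.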
Related courses (7)
EE-726: Sparse stochastic processes
We cover the theory and applications of sparse stochastic processes (SSP). SSP are solutions of differential equations driven by non-Gaussian innovations. They admit a parsimonious representation.
PHYS-467: Machine learning for physicists
Machine learning and data analysis are becoming increasingly central in sciences, including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced.
ME-371: Discretization methods in fluids
This course presents an introduction to the approximation methods used for numerical simulation in fluid mechanics. The fundamental concepts are presented.
Related concepts (2)
Matching pursuit
Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary $D$. The basic idea is to approximately represent a signal $f$ from Hilbert space $H$ as a weighted sum of finitely many functions $g_{\gamma_n}$ (called atoms) taken from $D$. An approximation with $N$ atoms has the form

$$f(t) \approx \hat{f}_N(t) := \sum_{n=1}^{N} a_n \, g_{\gamma_n}(t),$$

where $g_{\gamma_n}$ is the $\gamma_n$-th column of the matrix $D$ and $a_n$ is the scalar weighting factor (amplitude) for the atom $g_{\gamma_n}$. Normally, not every atom in $D$ will be used in this sum.
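As a usage-level complement to the from-scratch sketch above, the following relies on scikit-learn's OrthogonalMatchingPursuit estimator (OMP refines MP by re-solving a least-squares fit over the already-selected atoms at every iteration); the dictionary and all sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
m, p, k = 30, 80, 4
D = rng.standard_normal((m, p))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
alpha = np.zeros(p)
alpha[rng.choice(p, size=k, replace=False)] = 1.0
x = D @ alpha                          # signal synthesized from k atoms

# fit_intercept=False so the model is exactly x = D @ coef_ with k non-zeros.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(D, x)
print(np.flatnonzero(omp.coef_))       # indices of the selected atoms
```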
Compressed sensing
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible: the first is sparsity, which requires the signal to be sparse in some domain; the second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals.
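A small end-to-end sketch of this principle, assuming a random Gaussian sensing matrix and $\ell_1$ (basis pursuit) recovery posed as a linear program; all dimensions here are illustrative, and for these sizes exact recovery is typical rather than guaranteed.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 128, 40, 4                          # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
s = np.zeros(n)
s[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ s                                     # m << n linear measurements

# l1 recovery (basis pursuit): min ||s||_1 s.t. A s = y, as an LP with s = u - v.
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]
print("max reconstruction error:", np.abs(s_hat - s).max())
```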