# Matching pursuit

Summary

Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary $D$. The basic idea is to approximately represent a signal $f$ from Hilbert space $H$ as a weighted sum of finitely many functions $g_{\gamma_n}$ (called atoms) taken from $D$. An approximation with $N$ atoms has the form

$$ f(t) \approx \hat f_N(t) := \sum_{n=1}^{N} a_n g_{\gamma_n}(t), $$

where $g_{\gamma_n}$ is the $\gamma_n$-th column of the matrix $D$ and $a_n$ is the scalar weighting factor (amplitude) for the atom $g_{\gamma_n}$. Normally, not every atom in $D$ will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small, where the residual after calculating $\gamma_N$ and $a_N$ is denoted by

$$ R_{N+1} = f - \hat f_N. $$

If $R_n$ converges quickly to zero, then only a few atoms are needed to get a good approximation to $f$. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is

$$ \min_{x} \; \| f - D x \|_2^2 \quad \text{subject to} \quad \| x \|_0 \le N, $$

where $\| x \|_0$ is the $L_0$ pseudo-norm (i.e., the number of nonzero elements of $x$). In the previous notation, the nonzero entries of $x$ are $x_{\gamma_n} = a_n$. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used.

For comparison, consider the Fourier transform representation of a signal: this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signals.

By taking an extremely redundant dictionary, we can look in it for atoms (functions) that best match a signal $f$.
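The greedy loop described above is short enough to sketch directly. The following is a minimal illustration, not a reference implementation: it assumes NumPy, a dictionary stored as a matrix with unit-norm columns, and a made-up function name `matching_pursuit`.

```python
import numpy as np

def matching_pursuit(f, D, n_atoms, tol=1e-6):
    """Greedy MP: at each step pick the atom with the largest inner
    product with the current residual (atoms assumed unit-norm)."""
    residual = f.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        inner = D.T @ residual            # correlations with all atoms
        k = np.argmax(np.abs(inner))      # best-matching atom g_k
        coeffs[k] += inner[k]             # amplitude a_n = <R_n f, g_k>
        residual -= inner[k] * D[:, k]    # subtract the one-atom approximation
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

# Toy demo: overcomplete dictionary of 20 unit-norm atoms in R^8.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)            # normalize the atoms
f = 2.0 * D[:, 3] - 1.5 * D[:, 11]        # signal built from two atoms
x, r = matching_pursuit(f, D, n_atoms=10)
```

Each iteration strictly shrinks the residual norm (by the squared correlation with the chosen atom), but on a coherent dictionary MP may spread energy over more atoms than the signal actually uses; that is what motivates variants such as orthogonal matching pursuit.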


Related concepts

Compressed sensing

Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible.
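The recovery step in compressed sensing is often carried out with a greedy pursuit. The sketch below, assuming NumPy and a hypothetical helper `omp`, uses orthogonal matching pursuit (a close relative of MP that re-fits all selected coefficients by least squares) to reconstruct a sparse vector from far fewer measurements than its dimension; exact recovery is typical but not guaranteed for every random draw.

```python
import numpy as np

def omp(y, A, k):
    """Orthogonal matching pursuit: greedily grow a support set,
    then re-fit all selected coefficients by least squares."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(A.T @ residual))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef      # orthogonal to chosen atoms
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 32, 12, 3                              # m << n measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
x_true = np.zeros(n)
x_true[[2, 7, 20]] = [1.0, -2.0, 1.5]            # 3-sparse signal
y = A @ x_true                                   # underdetermined observations
x_hat = omp(y, A, k)
```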

Sparse approximation

Sparse approximation (also known as sparse representation) theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in image processing, signal processing, machine learning, medical imaging, and more. Consider a linear system of equations $x = D\alpha$, where $D$ is an underdetermined $m \times p$ matrix $(m < p)$ and $x \in \mathbb{R}^m$, $\alpha \in \mathbb{R}^p$. The matrix $D$ (typically assumed to be full-rank) is referred to as the dictionary, and $x$ is a signal of interest.
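A tiny worked example may help make the redundancy point concrete. The dictionary below is a made-up toy (the identity atoms plus one extra "sum" atom): because it is overcomplete, the same signal has several exact representations, and sparse approximation asks for the one with fewest nonzeros.

```python
import numpy as np

# Overcomplete 3 x 4 dictionary: identity atoms plus one unit-norm "sum" atom.
D = np.hstack([np.eye(3), np.ones((3, 1)) / np.sqrt(3)])

x = np.array([1.0, 1.0, 1.0])                    # signal of interest
alpha_dense  = np.array([1.0, 1.0, 1.0, 0.0])    # uses three atoms
alpha_sparse = np.array([0.0, 0.0, 0.0, np.sqrt(3)])  # uses a single atom
# Both satisfy x = D @ alpha: redundancy creates a choice, and sparse
# approximation looks for the representation with the fewest nonzeros.
```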

Basis pursuit

Basis pursuit is the mathematical optimization problem of the form

$$ \min_{x} \; \| x \|_1 \quad \text{subject to} \quad y = Ax, $$

where $x$ is an $N$-dimensional solution vector (signal), $y$ is an $M$-dimensional vector of observations (measurements), $A$ is an $M \times N$ transform matrix (usually a measurement matrix) and $M < N$. It is usually applied in cases where there is an underdetermined system of linear equations $y = Ax$ that must be exactly satisfied, and the sparsest solution in the $L_1$ sense is desired. When it is desirable to trade off exact equality of $Ax$ and $y$ in exchange for a sparser $x$, basis pursuit denoising is preferred.
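Unlike the greedy pursuits, basis pursuit is a convex problem, and the standard trick is to rewrite it as a linear program by splitting $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 = \mathbf{1}^\top(u + v)$. The sketch below assumes SciPy's `linprog` is available; the function name `basis_pursuit` is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to Ax = y as a linear program,
    splitting x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])           # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(2)
M, N = 10, 20                           # underdetermined: M < N
A = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[[4, 13]] = [1.0, -1.0]           # 2-sparse ground truth
y = A @ x_true
x_hat = basis_pursuit(A, y)
```

Because $x_{\text{true}}$ is feasible for the LP, the returned solution's $L_1$ norm can never exceed that of the ground truth; under suitable conditions on $A$ it coincides with the sparsest solution.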

Related MOOCs

Digital Signal Processing [retired]

The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization.

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP.

Digital Signal Processing II

Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization.

Related lectures

- Explores primal-dual optimization methods, algorithms, convergence, and applications in nonconvex optimization and image deconvolution.
- Explores primal-dual optimization with a focus on Lagrangian methods and their convergence, drawbacks, and enhancements.
- Explores data compression through signal sparsity, questioning the need for recording vast amounts of data.

Related publications

Pascal Frossard, Pierre Vandergheynst, Adel Rahmoune

This paper introduces a novel algorithm for sparse approximation in redundant dictionaries called the M-term pursuit (MTP). This algorithm decomposes a signal into a linear combination of atoms that are selected in order to represent the main signal components. The MTP algorithm provides an adaptive representation for signals in any complete dictionary. The basic idea behind the MTP is to partition the dictionary into L quasi-disjoint subdictionaries. A k-term signal approximation is then iteratively computed, where each iteration leads to the selection of M

With the flood of information available today, the question of how to deal with high-dimensional data/signals, which are cumbersome to handle, calculate with and store, is highly important. One approach to reducing this flood is to find sparse signal representations: a signal that is the linear combination of a few elements from a pool of building blocks can be reduced to the few coefficients of this representation. If these building blocks form a basis, finding the sparse representation poses no problem, but unfortunately not many signal classes are sparse in a basis. Taking more building blocks, i.e. a redundant dictionary, increases the chances of having sparse representations, but actually finding them becomes very hard. This led to the development of numerous strategies and algorithms for finding sparse representations, with varying complexity and success rate.

The first part of the thesis deals with two of those algorithms, Thresholding and Matching Pursuit, from a more theoretical point of view. It is shown that both these greedy algorithms can be improved with a little trick that does not increase their complexity, and that when considering their average instead of their worst-case performance they perform quite well in comparison to more complex methods.

The second part of the thesis treats questions concerning the whole dictionary and its properties. First it gives more evidence that sparsity is useful by extending the concept of compressed sensing to signals that are sparse not in a basis but in a redundant dictionary. Thus, to record a sparse signal it is not necessary to make as many measurements as the dimension of the signal, but only a multiple of the number of dictionary elements used to represent it. Next we show that dictionaries can not only provide sparse representations but that their geometric properties can also be exploited to model data structures.
Here we explain how to model different subclasses of a class of signals by incoherent subspaces, present an algorithm to learn a dictionary made out of these subspaces and then use it for classification of faces. Finally we turn back to the sparse representation problem and study the fundamental question of how to find a dictionary providing sparse representations. We pick up the idea of learning a dictionary via minimisation of a continuous cost function and provide conditions guaranteeing that the decomposition of a collection of training signals into a dictionary and a coefficient matrix constitutes a local minimum. We also analyse statistically when these conditions are fulfilled with high probability.

Pierre Vandergheynst, Karin Schnass

This article presents an alteration of greedy algorithms like thresholding or (Orthogonal) Matching Pursuit which improves their performance in finding sparse signal representations in redundant dictionaries. These algorithms can be split into a sensing and a reconstruction step, and the former will fail to identify correct atoms if the cumulative coherence of the dictionary is too high. We thus modify the sensing step by introducing a special sensing matrix, also referred to as a measurement ensemble. The correct selection of components is then determined by the

2008