Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary $D$. The basic idea is to approximately represent a signal $f$ from a Hilbert space $H$ as a weighted sum of finitely many functions $g_{\gamma_n}$ (called atoms) taken from $D$. An approximation with $N$ atoms has the form

$$f(t) \approx \hat{f}_N(t) := \sum_{n=1}^{N} a_n\, g_{\gamma_n}(t),$$
where $g_{\gamma_n}$ is the $\gamma_n$-th column of the matrix $D$ and $a_n$ is the scalar weighting factor (amplitude) for the atom $g_{\gamma_n}$. Normally, not every atom in $D$ will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small,
where the residual after calculating $\gamma_N$ and $a_N$ is denoted by

$$R_{N+1} = f - \hat{f}_N.$$
If $R_n$ converges quickly to zero, then only a few atoms are needed to get a good approximation to $f$. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is

$$\min_{x} \ \|f - Dx\|_2^2 \quad \text{subject to} \quad \|x\|_0 \le N,$$
where $\|x\|_0$ is the $\ell_0$ pseudo-norm (i.e., the number of nonzero elements of $x$). In the previous notation, the nonzero entries of $x$ are $x_{\gamma_n} = a_n$. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used.
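As an illustration of the greedy selection and residual update described above, here is a minimal NumPy sketch of matching pursuit. The function name `matching_pursuit`, the dictionary size, and the stopping tolerance are illustrative choices, not part of any standard implementation.

```python
# Minimal matching pursuit sketch, assuming a real-valued signal f and a
# dictionary D whose columns are unit-norm atoms.
import numpy as np

def matching_pursuit(f, D, n_atoms, tol=1e-6):
    """Greedily pick the atom most correlated with the current residual
    and subtract its contribution, n_atoms times or until the residual is small."""
    residual = f.astype(float).copy()
    x = np.zeros(D.shape[1])             # sparse coefficient vector
    for _ in range(n_atoms):
        correlations = D.T @ residual    # inner products <R_n, g_gamma>
        gamma = np.argmax(np.abs(correlations))
        a = correlations[gamma]          # amplitude (atoms are unit-norm)
        x[gamma] += a                    # an atom may be selected more than once
        residual -= a * D[:, gamma]      # residual update R_{n+1} = R_n - a g_gamma
        if np.linalg.norm(residual) < tol:
            break
    return x, residual

# Example: approximate a 3-sparse combination of atoms from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # normalize the atoms
x_true = np.zeros(256); x_true[[10, 50, 200]] = [1.0, -0.5, 2.0]
f = D @ x_true
x_hat, r = matching_pursuit(f, D, n_atoms=10)
print(np.linalg.norm(f - D @ x_hat))     # typically small after a few iterations
```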
For comparison, consider the Fourier transform representation of a signal: this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signals.
By taking an extremely redundant dictionary, we can look in it for atoms (functions) that best match a signal $f$.
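To make the notion of redundancy concrete, the sketch below builds a twice over-complete dictionary by concatenating an orthonormal cosine (DCT-II) basis with spike atoms. This particular cosines-plus-spikes construction is only one common illustration and is assumed here, not prescribed by the text above.

```python
import numpy as np

N = 64
n = np.arange(N)
k = np.arange(N)[:, None]

# Minimal complete dictionary: N orthonormal DCT-II cosine atoms (a Fourier-like basis).
cosines = np.cos(np.pi * (n + 0.5) * k / N).T       # column k is a sampled cosine
cosines /= np.linalg.norm(cosines, axis=0)

# Redundant dictionary: cosines plus "spike" atoms (the identity), so smooth
# oscillations and isolated impulses can each be matched by a single atom.
D = np.hstack([cosines, np.eye(N)])                  # 64 x 128, twice over-complete

# A cosine-plus-spike signal is 2-sparse in D by construction ...
f = 0.8 * cosines[:, 5] + 1.5 * np.eye(N)[:, 20]

# ... but its expansion in the cosine basis alone is dense (the spike spreads out).
coeffs = cosines.T @ f                               # analysis in the orthonormal basis
print(np.sum(np.abs(coeffs) > 1e-6))                 # many nonzero cosine coefficients
```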
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible: the signal must be sparse in some domain, and the sensing modality must be incoherent with that domain.
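A small, hedged example of this idea, assuming a random Gaussian measurement matrix and scikit-learn's OrthogonalMatchingPursuit as the recovery routine (neither is prescribed by the description above): a k-sparse vector is recovered from far fewer measurements than its ambient dimension.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                          # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix, m << n
y = A @ x_true                                # m linear measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
print(np.allclose(omp.coef_, x_true, atol=1e-6))  # typically True for m this large
```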
Sparse approximation (also known as sparse representation) theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in signal processing, machine learning, medical imaging, and more. Consider a linear system of equations $f = Dx$, where $D$ is an underdetermined $m \times n$ matrix ($m < n$), $f \in \mathbb{R}^m$ and $x \in \mathbb{R}^n$. The matrix $D$ (typically assumed to be full-rank) is referred to as the dictionary, and $f$ is a signal of interest.
Basis pursuit is the mathematical optimization problem of the form

$$\min_{x} \ \|x\|_1 \quad \text{subject to} \quad y = Ax,$$

where $x$ is an $N$-dimensional solution vector (signal), $y$ is an $M$-dimensional vector of observations (measurements), $A$ is an $M \times N$ transform matrix (usually a measurement matrix) and $M < N$. It is usually applied in cases where there is an underdetermined system of linear equations $y = Ax$ that must be exactly satisfied, and the sparsest solution in the $\ell_1$ sense is desired. When it is desirable to trade off exact equality of $Ax$ and $y$ in exchange for a sparser $x$, basis pursuit denoising is preferred.
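Basis pursuit can be recast as a linear program via the standard split $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 = \sum_i (u_i + v_i)$. The sketch below uses scipy.optimize.linprog for this purpose; the problem sizes and the helper name basis_pursuit are chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program."""
    m, n = A.shape
    c = np.ones(2 * n)                     # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])              # enforce A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = basis_pursuit(A, y)
print(np.linalg.norm(x_hat - x_true))      # typically near zero for sparse x_true
```

Splitting into nonnegative parts roughly doubles the number of variables, but it keeps the objective linear and lets any off-the-shelf LP solver handle the problem.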
The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization; it also includes a ...
Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization.
Implicit neural representations (INRs) have recently emerged as a promising alternative to classical discretized representations of signals. Nevertheless, despite their practical success, we still do not understand how INRs represent signals. We propose a ...
IEEE Computer Society, 2022
Single-photon 3D cameras can record the time-of-arrival of billions of photons per second with picosecond accuracy. One common approach to summarize the photon data stream is to build a per-pixel timestamp histogram, resulting in a 3D histogram tensor that ...
IEEE Computer Society, 2023
The usual explanation of the efficacy of wavelet-based methods hinges on the sparsity of many real-world objects in the wavelet domain. Yet, standard wavelet-shrinkage techniques for sparse reconstruction are not competitive in practice, one reason being t ...