
# Signal reconstruction

Summary

In signal processing, reconstruction usually means the determination of an original continuous signal from a sequence of equally spaced samples.
This article takes a generalized abstract mathematical approach to signal sampling and reconstruction. For a more practical approach based on band-limited signals, see Whittaker–Shannon interpolation formula.
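As a concrete illustration of the band-limited case, the Whittaker–Shannon formula rebuilds a signal as a sinc-weighted sum of its samples, $x(t) = \sum_n x[n]\,\mathrm{sinc}\!\left((t - nT)/T\right)$. The sketch below (a finite truncation of that infinite sum, written with NumPy; all names are illustrative, not from the source) reconstructs a 3 Hz sine sampled at 20 Hz:

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Truncated Whittaker–Shannon interpolation:
    x(t) ~= sum_n x[n] * sinc((t - n*T) / T)."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc, sin(pi*u)/(pi*u)
    return np.sum(samples[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T),
                  axis=0)

f0, fs = 3.0, 20.0               # 3 Hz sine sampled at 20 Hz (Nyquist rate: 6 Hz)
T = 1 / fs
n = np.arange(200)               # 10 s worth of samples
x_n = np.sin(2 * np.pi * f0 * n * T)

t = np.linspace(3.0, 7.0, 500)   # evaluate well inside the sampled window
x_hat = sinc_reconstruct(x_n, T, t)
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * f0 * t)))
```

Because the sum is truncated, the approximation is only accurate away from the edges of the sample window; the infinite sum of the theorem reconstructs the band-limited signal exactly.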
General principle
Let $F$ be any sampling method, i.e. a linear map from the Hilbert space of square-integrable functions $L^2$ to complex space $\mathbb{C}^n$.
In our example, the vector space of sampled signals $\mathbb{C}^n$ is $n$-dimensional complex space. Any proposed inverse $R$ of $F$ (a reconstruction formula, in the lingo) would have to map $\mathbb{C}^n$ to some subset of $L^2$. We could choose this subset arbitrarily, but if $R$ is to be a linear map as well, then we must choose an $n$-dimensional linear subspace of $L^2$.
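A minimal finite-dimensional sketch of this setup, assuming we approximate $L^2[0,1]$ by functions on a fine grid: $F$ is point sampling at $n$ locations, the $n$-dimensional reconstruction subspace is spanned by low-frequency Fourier modes (the columns of a matrix $B$), and $R$ inverts $F$ restricted to that subspace, $R(y) = B\,(FB)^{-1}y$. All concrete choices here are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 1000, 5                         # fine-grid size, number of samples
grid = np.linspace(0, 1, m, endpoint=False)

# Reconstruction subspace: real Fourier modes on [0, 1] (columns of B)
B = np.stack([np.ones(m),
              np.cos(2 * np.pi * grid), np.sin(2 * np.pi * grid),
              np.cos(4 * np.pi * grid), np.sin(4 * np.pi * grid)], axis=1)

# Sampling map F: evaluate at n equispaced points -> a vector in C^n (here R^n)
sample_pts = np.linspace(0, 1, n, endpoint=False)
idx = np.round(sample_pts * m).astype(int)
F = np.zeros((n, m))
F[np.arange(n), idx] = 1.0

# A signal that already lies in the chosen subspace
c_true = rng.standard_normal(n)
x = B @ c_true

# Reconstruction R: y = F x, then solve (F B) c = y and rebuild x_hat = B c
y = F @ x
c_hat = np.linalg.solve(F @ B, y)
x_hat = B @ c_hat
err = np.max(np.abs(x_hat - x))
```

For signals inside the subspace the recovery is exact whenever $FB$ is invertible; signals outside it can at best be projected onto it, which is the basic trade-off behind choosing the subspace.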




Related concepts (1)

The Nyquist–Shannon sampling theorem is an essential principle for digital signal processing linking the frequency range of a signal and the sample rate required to avoid a type of distortion called aliasing.
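The distortion the sampling theorem guards against, aliasing, is easy to demonstrate numerically: at a 10 Hz sample rate (Nyquist frequency 5 Hz), a 9 Hz sine produces exactly the same samples as a 1 Hz sine of opposite sign (an illustrative sketch, not from the source):

```python
import numpy as np

fs = 10.0                               # sample rate: 10 Hz -> Nyquist frequency 5 Hz
t = np.arange(50) / fs                  # 5 s of sample instants

x_fast = np.sin(2 * np.pi * 9 * t)      # 9 Hz: above Nyquist, undersampled
x_alias = -np.sin(2 * np.pi * 1 * t)    # its 1 Hz alias (9 = 10 - 1, sign flipped)

# From the samples alone the two signals are indistinguishable
max_diff = np.max(np.abs(x_fast - x_alias))
```

No reconstruction method can tell the two apart from these samples, which is why the signal must be band-limited below the Nyquist frequency before sampling.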

Related courses (6)

This course aims to convey the theoretical concepts and the know-how needed to carry out high-quality measurements. The methodological and technological content is presented in lectures, and the practical skills are trained in lab sessions.

This class teaches the theory of linear time-invariant (LTI) systems. These systems serve both as models of physical reality (such as the wireless channel) and as engineered systems (such as electrical circuits, filters and control strategies).

We introduce the basics of discrete-time linear control, which consists of applying a control input over uniformly spaced intervals. The associated sampling rate plays a key role in ensuring the stability and performance of the closed-loop system.

Related publications (24)

Xin Liu, Jiaqi Liu, Hanjie Pan, Feng Xue

Recently, compound regularizers have become a popular choice for signal reconstruction. The estimation quality is generally sensitive to the values of multiple regularization parameters. In this work, building on the BDF algorithm, we develop a data-driven optimization scheme based on minimization of Stein's unbiased risk estimate (SURE), which is statistically equivalent to the mean squared error (MSE). We propose a recursive evaluation of SURE to monitor the MSE during the BDF iterations; the optimal values of the multiple parameters are then identified by the minimum of SURE. Monte Carlo simulation is used to compute SURE for large-scale data. We exemplify the proposed method with image deconvolution. Numerical experiments show that the proposed method leads to highly accurate estimates of the regularization parameters and nearly optimal restoration performance.

Mikhail Kapralov, Amir Zandieh

Reconstructing continuous signals from a small number of discrete samples is a fundamental problem across science and engineering. We are often interested in signals with "simple" Fourier structure, e.g., those involving frequencies within a bounded range, a small number of frequencies, or a few blocks of frequencies (bandlimited, sparse, and multiband signals, respectively). More broadly, any prior knowledge of a signal's Fourier power spectrum can constrain its complexity. Intuitively, signals with more highly constrained Fourier structure require fewer samples to reconstruct. We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the statistical dimension of the allowed power spectrum of that class. We prove that, in nearly all settings, this natural measure tightly characterizes the sample complexity of signal reconstruction. Surprisingly, we also show that, up to log factors, a universal non-uniform sampling strategy can achieve this optimal complexity for any class of signals. We present an efficient and general algorithm for recovering a signal from the samples taken. For bandlimited and sparse signals, our method matches the state of the art, while providing the first computationally and sample-efficient solution to a broader range of problems, including multiband signal reconstruction and Gaussian process regression in one dimension. Our work is based on a novel connection between randomized linear algebra and the problem of reconstructing signals with constrained Fourier structure. We extend tools based on statistical leverage score sampling and column-based matrix reconstruction to the approximation of continuous linear operators that arise in the signal reconstruction problem. We believe these extensions are of independent interest and serve as a foundation for tackling a broad range of continuous-time problems using randomized methods.

This thesis focuses on the development of novel multiresolution image approximations. Specifically, we present two kinds of generalization of multiresolution techniques: image reduction for arbitrary scales, and nonlinear approximations using metrics other than the standard Euclidean one. Traditional multiresolution decompositions are restricted to dyadic scales. As the first contribution of this thesis, we develop a method that goes beyond this restriction and is well suited to arbitrary scale-change computations. The key component is a new and numerically exact algorithm for computing inner products between a continuously defined signal and B-splines of any order and of arbitrary sizes. The technique can also be applied to non-uniform to uniform grid conversion, another approximation problem where our method excels. The main applications are resampling and signal reconstruction. Although simple to implement, least-squares approximations lead to artifacts that could be reduced if nonlinear methods were used instead. The second contribution of the thesis is the development of nonlinear spline pyramids that are optimal for lp-norms. First, we introduce a Banach-space formulation of the problem and show that the solution is well defined. Second, we compute the lp-approximation using an iterative optimization algorithm based on digital filtering. We conclude that l1-approximations reduce the artifacts that are inherent to least-squares methods, in particular edge blurring and ringing. In addition, we observe that the error of l1-approximations is sparser. Finally, we derive an exact formula for the asymptotic Lp-error; this result justifies using the least-squares approximation as the initial solution for the iterative optimization algorithm when the degree of the spline is even; otherwise, one has to include an appropriate correction term.
The theoretical background of the thesis includes the modeling of images in a continuous/discrete formalism and takes advantage of the approximation theory of linear shift-invariant operators. We chose B-splines as basis functions because of their attractive properties. We also propose a new graphical formalism that links B-splines, finite differences, differential operators, and arbitrary scale changes.