
# Sampling innovations

Abstract

Sampling theory has flourished over the last century; its elegant mathematics and vast range of applications account for its popularity. The applications in this thesis lie in signal processing and communications and draw on mathematical notions from Fourier theory, spectral analysis, basic linear algebra, and spline and wavelet theory. The thesis is divided into two parts: Chapters 2 and 3 consider uniform sampling of non-bandlimited signals, while Chapters 4, 5, and 6 treat different irregular sampling problems.

In the first part we address the problem of sampling signals that are not bandlimited but are characterized by a finite number of degrees of freedom per unit of time. We call these signals with a finite rate of innovation. We show that such signals can be uniquely represented given a sufficient number of samples obtained with an appropriate sampling kernel: the number of samples must be greater than or equal to the number of degrees of freedom of the signal, or equivalently, the sampling rate must be greater than or equal to the rate of innovation of the signal. In particular, we derive sampling schemes for periodic and finite-length streams of Diracs and piecewise polynomial signals using the sinc, differentiated sinc, and Gaussian kernels. Sampling and reconstruction of piecewise bandlimited signals and of filtered piecewise polynomials is also considered. We also derive local reconstruction schemes for infinite-length piecewise polynomials with a finite "local" rate of innovation, using compactly supported kernels such as splines. Numerical experiments are shown for all reconstruction schemes.

The first topic of the second part is the irregular sampling problem for bandlimited signals with unknown sampling instants. The locations of the irregular set of samples are found by treating the problem as a combinatorial optimization problem; search methods for the locations are described, and numerical simulations are run on a random set and on a jittered set of locations. The second topic is the periodic nonuniform sampling problem for bandlimited signals, in which the set of sample locations is irregular yet periodic. We develop a fast scheme that reduces the complexity of the problem by exploiting this special pattern; the motivation is that the periodic nonuniform set also arises in the sampling-with-unknown-locations problem, where a fast search method for the locations was sought. Finally, the last topic is the irregular sampling of signals that are linearly or nonlinearly approximated in Fourier and wavelet bases. We present variants of the Papoulis–Gerchberg algorithm that take into account the information contained in the approximation of the signal. Numerical experiments are presented in the context of erasure correction.
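The classical Papoulis–Gerchberg iteration that the last topic builds on can be sketched in a few lines: alternate between re-imposing the known samples and projecting onto the bandlimited subspace via the DFT. The following is a minimal illustrative sketch, not the thesis's variants; the mask, bandwidth, and test signal are hypothetical.

```python
import numpy as np

def papoulis_gerchberg(samples, known, bandwidth, n_iter=500):
    """Reconstruct a discrete bandlimited signal from a subset of samples.

    samples  : length-N array (values at unknown positions are ignored)
    known    : boolean mask of length N marking the reliable samples
    bandwidth: highest DFT bin (per side) assumed to be nonzero
    """
    x = np.where(known, samples, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[bandwidth + 1 : len(x) - bandwidth] = 0.0  # project onto bandlimited subspace
        x = np.fft.ifft(X).real
        x[known] = samples[known]                    # re-impose the known samples
    return x

# demo: a bandlimited signal with roughly 30% of its samples erased
N = 128
t = np.arange(N)
x_true = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 5 * t / N)
rng = np.random.default_rng(0)
known = rng.random(N) > 0.3
x_hat = papoulis_gerchberg(np.where(known, x_true, 0.0), known, bandwidth=8)
print(np.max(np.abs(x_hat - x_true)))  # small reconstruction error
```

This is the erasure-correction setting in its simplest form: because the number of known samples (about 90) far exceeds the dimension of the bandlimited subspace (17 DFT bins), the alternating projections converge to the original signal.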


Related concepts (13)

Sampling (signal processing)

In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples". A sample is a value of the signal at a point in time.

Signal

In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers.

Nyquist–Shannon sampling theorem

The Nyquist–Shannon sampling theorem is an essential principle for digital signal processing linking the frequency range of a signal and the sample rate required to avoid a type of distortion called aliasing.
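Aliasing is easy to check numerically: a 9 Hz sine sampled at 8 Hz (Nyquist limit 4 Hz) produces exactly the same samples as a 1 Hz sine. The frequencies and rates below are illustrative values, not taken from the text.

```python
import numpy as np

fs = 8.0                   # sampling rate (Hz); Nyquist limit is fs/2 = 4 Hz
n = np.arange(16)
t = n / fs

high = np.sin(2 * np.pi * 9.0 * t)  # 9 Hz tone: above the Nyquist limit
low = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz tone: its alias, since 9 mod 8 = 1

print(np.allclose(high, low))  # True: the two tones are indistinguishable
```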

Related publications (21)

Thierry Blu, Pina Marziliano, Martin Vetterli

Consider classes of signals that have a finite number of degrees of freedom per unit of time, and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with a finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., the time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we illustrate through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems.
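As a rough illustration of the annihilating-filter step, the sketch below recovers the locations and weights of K Diracs on a period from 2K of their Fourier coefficients via Prony's method. This is a generic textbook construction, not the paper's full sampling scheme, and the demo signal and parameters are hypothetical.

```python
import numpy as np

def recover_diracs(X, K, tau):
    """Recover locations t_k and weights a_k of K Diracs on [0, tau) from
    Fourier coefficients X[m] = sum_k a_k * exp(-2j*pi*m*t_k/tau),
    m = 0..M-1 with M >= 2K (annihilating-filter / Prony method)."""
    M = len(X)
    # Toeplitz system: sum_{j=0}^{K} h[j] * X[m - j] = 0 for m = K..M-1
    A = np.array([[X[m - j] for j in range(K + 1)] for m in range(K, M)])
    # The annihilating filter h is a null vector of A (smallest singular vector)
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()
    # Roots of H(z) encode the locations: u_k = exp(-2j*pi*t_k/tau)
    u = np.roots(h)
    t = (np.angle(u) * tau / (-2 * np.pi)) % tau
    # Weights from a K-by-K Vandermonde system on the first K coefficients
    V = np.array([[np.exp(-2j * np.pi * m * tk / tau) for tk in t]
                  for m in range(K)])
    a = np.linalg.solve(V, X[:K])
    order = np.argsort(t)
    return t[order], a[order].real

# demo: two Diracs at hypothetical locations 0.2 and 0.7 with weights 1 and 3
tau, K = 1.0, 2
t_true = np.array([0.2, 0.7])
a_true = np.array([1.0, 3.0])
m = np.arange(2 * K)
X = (a_true[None, :] * np.exp(-2j * np.pi * m[:, None] * t_true / tau)).sum(1)
t_hat, a_hat = recover_diracs(X, K, tau)
print(t_hat, a_hat)
```

The filter's zeros lie exactly at exp(-2j*pi*t_k/tau), so root-finding on the length-(K+1) filter turns the nonlinear location-estimation problem into standard linear algebra, which is the point made in the abstract.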

Over the past decade, researchers in applied mathematics, signal processing, and communications have introduced compressive sampling (CS) as an alternative to the Shannon sampling theorem. Two key observations make CS theory widely applicable to numerous areas of signal processing: i) due to their structural properties, natural signals typically have sparse representations in properly chosen orthogonal bases; ii) the number of linear non-adaptive measurements required to acquire high-dimensional data with CS is proportional to the signal's sparsity level in the chosen basis. In multichannel signal applications the data of different channels are often highly correlated, so the unstructured sparsity hypothesis of classical CS theory results in suboptimal measurement rates. Meanwhile, the wide range of applications of multichannel signals, and the extremely large and growing flow of data in those applications, motivates the development of more comprehensive models incorporating both inter- and intra-channel data structures in order to achieve more efficient dimensionality reduction. The main focus of this thesis is on two new models for efficient multichannel compressed sensing.

Our first approach proposes a simultaneous low-rank and joint-sparse matrix model for multichannel signals. Based on it, we introduce a novel CS recovery scheme built on Nuclear-l2/l1 norm convex minimization for low-rank and joint-sparse matrix approximation. Our theoretical analysis indicates that this approach achieves significantly lower sampling rates for robust multichannel CS acquisition than state-of-the-art methods. More remarkably, our analysis confirms the near-optimality of this approach: the number of CS measurements is nearly proportional to the few degrees of freedom of such structured data. Our second approach introduces a stronger model for multichannel data synthesized by a linear mixture model, where we assume that the mixture parameters are given as side information. Multichannel CS recovery then becomes a compressive source separation problem, for which we propose a novel decorrelating scheme that exploits the knowledge of the mixture parameters for robust and numerically efficient source identification. Our theoretical guarantees explain the fundamental limits of this approach in terms of the number of CS measurements, the sparsity level of the sources, the sampling noise, and the conditioning of the mixture parameters. We apply both approaches to compressive hyperspectral image recovery and source separation, and compare their efficiency to state-of-the-art approaches on several challenging real-world hyperspectral datasets. Applications of these methods are not limited to hyperspectral imagery; they can have a broad impact on numerous multichannel signal applications. For example, for sensor network applications deploying compressive sampling schemes, our results indicate a tight tradeoff between the number of available sensors (channels) and the complexity/cost of each sensor. Finally, in a different multichannel signal application, we address a simple but very important problem in computer vision, namely the detection and localization of people given their multi-view silhouettes captured by networks of cameras. The main challenge in many existing solutions is the tradeoff between robustness and numerical efficiency. We model this problem as a Boolean (nonlinear) inverse problem where, by penalizing the sparsity of the solution, we achieve accurate results comparable to state-of-the-art methods. More remarkably, Boolean arithmetic enables us to propose a real-time and memory-efficient approximation algorithm rooted in the classical literature of group testing and set cover.
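As a simplified, single-channel stand-in for the CS recovery ideas discussed above (not the thesis's Nuclear-l2/l1 scheme), the sketch below recovers a sparse vector from a small number of random linear measurements using orthogonal matching pursuit; all sizes, seeds, and values are hypothetical.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build a k-sparse solution to y = A x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        support.append(j)
        # Least-squares fit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 100, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
x_true = np.zeros(n)
x_true[[10, 42, 77]] = [2.0, -1.5, 3.0]       # k-sparse signal
y = A @ x_true                                 # m << n compressive measurements
x_hat = omp(A, y, k)
print(np.max(np.abs(x_hat - x_true)))
```

The point mirrored here is the second key observation in the abstract: 40 non-adaptive measurements suffice to recover a 100-dimensional signal because only its 3-sparse structure carries information.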

Foundations of signal processing rest heavily on Shannon's sampling theorem for acquisition, representation, and reconstruction. This theorem states that signals should not contain frequency components higher than the Nyquist rate, which is half of the sampling rate; the signal can then be perfectly reconstructed from its samples. Increasing evidence shows that the requirements imposed by Shannon's sampling theorem are too conservative for many naturally occurring signals, which can be accurately characterized by sparse representations that require lower sampling rates, closer to the signal's intrinsic information rate. Finite rate of innovation (FRI) is a new theory that makes it possible to extract underlying sparse signal representations while operating at a reduced sampling rate. The goal of this PhD work is to advance reconstruction techniques for sparse signal representations from both theoretical and practical points of view. Specifically, the FRI framework is extended to deal with applications that involve temporal and spatial localization of events, including inverse source problems from radiating fields. We propose a novel reconstruction method using a model-fitting approach based on minimizing the fitting error subject to an underlying annihilation system given by Prony's method. First, we show that this is related to structured low-rank matrix approximation, as in the structured total least squares problem. We then solve our problem under three different constraints using the iterative quadratic maximum likelihood algorithm. Our analysis and simulation results indicate that the proposed algorithms improve robustness compared with common FRI reconstruction schemes.

We further develop the model-fitting approach to analyze spontaneous brain activity as measured by functional magnetic resonance imaging (fMRI). Here we model the noisy fMRI time course of every voxel as a convolution between an underlying activity-inducing signal (i.e., a stream of Diracs) and the hemodynamic response function (HRF). We validate this method using experimental fMRI data acquired during an event-related study; the results provide the first evidence of the practical use of FRI for fMRI data analysis. We also address the problem of retrieving a sparse source distribution from boundary measurements of a radiating field. First, based on Green's theorem, we propose a sensing principle that relates the boundary measurements to the source distribution. We focus on characterizing these sensing functions, with particular attention to those that can be derived from holomorphic functions, as they allow control of the spatial decay of the sensing functions. With this selection, we develop an FRI-inspired non-iterative reconstruction algorithm. Finally, we develop an extension of the sensing principle, termed eigensensing, in which the spatial eigenfunctions of the Laplace operator are chosen as the sensing functions. With this extension, we show that the eigensensing principle makes it possible to extract partial Fourier measurements of the source functions from boundary measurements. We consider photoacoustic tomography as a potential application of these theoretical developments.
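The structured low-rank idea behind the model-fitting approach can be sketched with a generic Cadzow-style denoiser: alternate between the best rank-K approximation of a Hankel matrix built from the samples and averaging along anti-diagonals to restore Hankel structure. This is a standard illustration of the technique, not the thesis's IQML algorithm, and the test signal is hypothetical.

```python
import numpy as np

def cadzow_denoise(X, K, n_iter=20):
    """Denoise a sequence modeled as a sum of K exponentials by alternating
    projections between rank-K matrices and Hankel structure."""
    N = len(X)
    L = N // 2 + 1  # Hankel matrix has L rows and N - L + 1 columns
    for _ in range(n_iter):
        T = np.array([[X[i + j] for j in range(N - L + 1)] for i in range(L)])
        U, s, Vh = np.linalg.svd(T, full_matrices=False)
        T = (U[:, :K] * s[:K]) @ Vh[:K]          # best rank-K approximation
        # Average along anti-diagonals (i + j = n) to get back a sequence
        X_new = np.zeros(N, dtype=complex)
        counts = np.zeros(N)
        for i in range(L):
            for j in range(N - L + 1):
                X_new[i + j] += T[i, j]
                counts[i + j] += 1
        X = X_new / counts
    return X

# demo: two complex exponentials buried in noise
N = 64
n = np.arange(N)
x_clean = np.exp(2j * np.pi * 0.10 * n) + 0.7 * np.exp(2j * np.pi * 0.27 * n)
rng = np.random.default_rng(2)
noise = 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
x_noisy = x_clean + noise
x_hat = cadzow_denoise(x_noisy, K=2)
print(np.linalg.norm(x_hat - x_clean), np.linalg.norm(x_noisy - x_clean))
```

The rank-K projection is the step that enforces the annihilation (Prony) model: a noiseless sum of K exponentials yields a Hankel matrix of rank exactly K, so the alternating projections pull the noisy samples back toward that model class.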