Over the past decade, researchers in applied mathematics, signal processing, and communications have introduced compressive sampling (CS) as an alternative to the Shannon sampling theorem. Two key observations make CS theory widely applicable to numerous areas of signal processing: i) due to their structural properties, natural signals typically have sparse representations in properly chosen orthogonal bases, and ii) the number of linear, non-adaptive measurements required to acquire high-dimensional data with CS is proportional to the signal's sparsity level in the chosen basis. In multichannel signal applications, the data of different channels are often highly correlated, and therefore the unstructured sparsity hypothesis deployed by classical CS theory results in suboptimal measurement rates. Meanwhile, the wide range of applications of multichannel signals, and the extremely large and increasing flow of data in those applications, motivates the development of more comprehensive models incorporating both inter- and intra-channel data structures in order to achieve more efficient dimensionality reduction. The main focus of this thesis is on studying two new models for efficient multichannel signal compressed sensing. Our first approach proposes a simultaneously low-rank and joint-sparse matrix model for multichannel signals. As a result, we introduce a novel CS recovery scheme based on nuclear-plus-l2/l1 norm convex minimization for low-rank and joint-sparse matrix approximation. Our theoretical analysis indicates that this approach can achieve significantly lower sampling rates for robust multichannel data CS acquisition than state-of-the-art methods. More remarkably, our analysis confirms the near-optimality of this approach: the number of CS measurements is nearly proportional to the few degrees of freedom of such structured data. Our second approach introduces a stronger model for multichannel data synthesized by a linear mixture model.
Here we assume that the mixture parameters are given as side information. Multichannel data CS recovery then becomes a compressive source separation problem, for which we propose a novel decorrelation scheme that exploits knowledge of the mixture parameters for robust and numerically efficient source identification. Our theoretical guarantees explain the fundamental limits of this approach in terms of the number of CS measurements, the sparsity level of the sources, the sampling noise, and the conditioning of the mixture parameters. We apply these two approaches to compressive hyperspectral image recovery and source separation, and compare the efficiency of our methods to state-of-the-art approaches on several challenging real-world hyperspectral datasets. Note that applications of these methods are not limited to hyperspectral imagery; they can have a broad impact on numerous multichannel signal applications. As an example, for sensor network applications deploying compressive sampling schemes, our results indicate a tight tradeoff between the number of available sensors (channels) and the complexity/cost of each sensor. Finally, in a different multichannel signal application, we address a simple but very important problem in computer vision, namely the detection and localization of people given their multi-view silhouettes captured by networks of cameras. The main challenge in many existing solutions is the tradeoff between robustness and numerical efficiency. We model this problem as a Boolean (non-linear) inverse problem in which, by penalizing the sparsity of the solution, we achieve accurate results comparable to state-of-the-art methods. More remarkably, using Boolean arithmetic enables us to propose a real-time, memory-efficient approximation algorithm that is mainly rooted in the classical literature on group testing and set cover.
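The first model's recovery program can be illustrated with a short proximal-gradient sketch. This is not the thesis's algorithm, only a minimal illustration of nuclear-plus-l2/l1 norm regularized recovery: applying the two proximal maps one after the other is a common heuristic approximation of the proximal operator of the sum, and every function name and parameter value below is an assumption invented for the example.

```python
import numpy as np

def prox_nuclear(X, t):
    # Singular-value soft-thresholding: prox of t * (nuclear norm)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def prox_l21(X, t):
    # Row-wise soft-thresholding: prox of t * (l2/l1 joint-sparsity norm)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def recover(A, y, shape, lam_nuc=0.02, lam_l21=0.02, n_iter=500):
    # Proximal gradient on ||A vec(X) - y||^2 with both penalties;
    # the two proxes are applied sequentially (heuristic, see lead-in).
    X = np.zeros(shape)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / spectral norm squared
    for _ in range(n_iter):
        grad = (A.T @ (A @ X.ravel() - y)).reshape(shape)
        X = X - step * grad
        X = prox_nuclear(X, step * lam_nuc)
        X = prox_l21(X, step * lam_l21)
    return X
```

For a matrix that is both low-rank and row-sparse, far fewer random measurements than matrix entries suffice for approximate recovery, which is the point of combining the two penalties rather than using either alone.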
We propose a framework called 'Multilinear low-rank tensors on graphs (MLRTG)', which states that a tensor can be encoded as a multilinear combination of the low-frequency graph eigenvectors, where the graphs are constructed along the various modes of the tensor. Since the graph eigenvectors have the interpretation of a non-linear embedding of a dataset on a low-dimensional manifold, we propose a method called 'Graph Multilinear SVD (GMLSVD)' to recover PCA-based linear subspaces from these eigenvectors. Finally, we propose a plethora of highly scalable matrix- and tensor-based low-rank extraction problems which implicitly or explicitly make use of the GMLSVD framework. The core idea is to replace the expensive iterative SVD operations by updating the linear subspaces from the fixed non-linear ones via low-cost operations. We present applications in low-rank and sparse decomposition and in clustering of the low-rank features to evaluate all the proposed methods. Our theoretical analysis shows that the approximation error of the proposed framework depends on the spectral properties of the graph Laplacians.
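The core GMLSVD idea described above can be sketched for the matrix (two-mode) case: project the data onto the low-frequency eigenvectors of graph Laplacians built along each mode, then run an ordinary SVD on the resulting small core, so the expensive SVD acts on a k-by-k matrix rather than the full data. The graph construction (a binary k-NN graph) and all names and parameters are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def knn_graph_laplacian(X, k=5):
    # Symmetric k-NN graph over the rows of X; return the combinatorial
    # Laplacian L = D - W (binary edge weights, for simplicity).
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        W[i, nn] = 1.0
    W = np.maximum(W, W.T)
    return np.diag(W.sum(axis=1)) - W

def graph_core_svd(Y, k_row=8, k_col=8, knn=4):
    # Project Y onto the k lowest-frequency eigenvectors of the row- and
    # column-graph Laplacians, then SVD the small k_row x k_col core.
    Lr = knn_graph_laplacian(Y, knn)       # graph along the row mode
    Lc = knn_graph_laplacian(Y.T, knn)     # graph along the column mode
    _, Pr = np.linalg.eigh(Lr)
    _, Pc = np.linalg.eigh(Lc)
    Pr, Pc = Pr[:, :k_row], Pc[:, :k_col]  # low-frequency eigenvectors
    core = Pr.T @ Y @ Pc                   # small core matrix
    U, s, Vt = np.linalg.svd(core, full_matrices=False)
    # Lift the core's singular vectors back to the ambient dimensions
    return Pr @ U, s, Pc @ Vt.T
```

The returned factors have orthonormal columns, and the approximation quality is governed by how well the low-frequency graph eigenvectors span the data, mirroring the dependence on the Laplacians' spectral properties noted above.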