In inverse problems, the task is to reconstruct an unknown signal from its possibly noise-corrupted measurements. Penalized-likelihood-based estimation and Bayesian estimation are two powerful statistical paradigms for the resolution of such problems. They allow one to exploit prior information about the signal as well as to handle the noisy measurements in a principled manner. This thesis is dedicated to the development of novel signal-reconstruction methods within these paradigms, ranging from those that involve classical sparsity-based signal models to those that leverage neural networks.

In the first part of the thesis, we focus on sparse signal models in the context of linear inverse problems for one-dimensional (1D) signals. As our first contribution, we devise an algorithm for solving generalized-interpolation problems with -norm regularization. Through a series of experiments, we examine the features induced by this regularization, namely, sparsity, regularity, and oscillatory behaviour, which give us new insights into it. As our second contribution, we present a framework based on 1D sparse stochastic processes to objectively evaluate and compare the performance of signal-reconstruction algorithms. Specifically, we derive efficient Gibbs sampling schemes to compute the minimum mean-square-error estimators for these processes. This allows us to specify a quantitative measure of the degree of optimality for any given method. Our framework also provides access to arbitrarily many training data, thus enabling the benchmarking of neural-network-based approaches.

The second part of the thesis is devoted to neural networks, which have become the focus of much current research in inverse problems, as they typically outperform classical sparsity-based methods. First, we develop an efficient module for the learning of component-wise continuous piecewise-linear activation functions in neural networks.
We deploy this module to train 1-Lipschitz denoising convolutional neural networks and learnable convex regularizers, both of which can be used to design provably convergent iterative reconstruction methods. Next, we design a complete Bayesian inference pipeline for nonlinear inverse problems that leverages the power of deep generative signal models to produce high-quality reconstructions together with uncertainty maps. Finally, we propose a neural-network-based spatiotemporal regularization scheme for dynamic Fourier ptychography (FP), where the goal is to recover a sequence of high-resolution images from several low-resolution intensity measurements. Our approach does not require training data and yields state-of-the-art reconstructions.
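To make the Gibbs-sampling idea concrete: the MMSE estimator is the posterior mean, which a Gibbs sampler approximates by averaging samples drawn by alternating between the closed-form conditionals of the model. The sketch below uses a deliberately simple, hypothetical scalar model (a Gaussian scale mixture, i.e. a Student-t-type sparse prior, with one noisy measurement), not the sparse stochastic processes treated in the thesis; all variable names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model (not the thesis's sparse stochastic processes):
# scalar signal x with a sparse Student-t-type prior, written as a
# Gaussian scale mixture  x | lam ~ N(0, 1/lam),  lam ~ Gamma(a, b),
# observed through a noisy measurement  y = x + n,  n ~ N(0, sigma2).
a, b = 1.0, 1.0          # Gamma shape and rate of the mixing variable
sigma2 = 0.5             # noise variance
y = 1.2                  # observed measurement

# Gibbs sampler: alternate draws from the two closed-form conditionals.
n_iter, burn_in = 20_000, 2_000
lam = 1.0
xs = []
for t in range(n_iter):
    # x | lam, y is Gaussian (conjugate step)
    prec = 1.0 / sigma2 + lam
    x = rng.normal((y / sigma2) / prec, np.sqrt(1.0 / prec))
    # lam | x is Gamma (conjugate step); numpy's gamma takes a scale = 1/rate
    lam = rng.gamma(a + 0.5, 1.0 / (b + 0.5 * x**2))
    if t >= burn_in:
        xs.append(x)

# The MMSE estimate is the posterior mean, approximated by the sample average.
mmse_hat = np.mean(xs)
```

Because the prior shrinks toward zero, the estimate lands strictly between 0 and the raw measurement y; the same average-of-posterior-samples recipe underlies the benchmarking framework described above.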
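A component-wise continuous piecewise-linear activation can be parameterized by learnable values at fixed knots and evaluated by linear interpolation. The following is a minimal numpy sketch under that assumed parameterization (knot placement, initialization, and the clamped extrapolation are simplifications, not the thesis's exact module, which is typically trained within a deep-learning framework).

```python
import numpy as np

# Assumed parameterization: learnable values c_k at fixed uniform knots t_k;
# the activation is the continuous piecewise-linear interpolant of (t_k, c_k).
knots = np.linspace(-2.0, 2.0, 9)      # fixed knot locations t_k
coeffs = np.maximum(knots, 0.0)        # learnable values c_k, initialized as ReLU

def pwl_activation(x, knots=knots, coeffs=coeffs):
    """Component-wise continuous piecewise-linear map. np.interp clamps
    inputs outside the knot range, so linear extrapolation is omitted
    here for simplicity."""
    return np.interp(x, knots, coeffs)

# With the ReLU initialization, the module reproduces ReLU inside the knots.
out = pwl_activation(np.array([-1.0, 0.25, 1.5]))  # → [0.0, 0.25, 1.5]
```

In training, the coefficient vector `coeffs` would be updated by gradient descent alongside the network weights; constraints on the coefficients (e.g. bounded slopes) are what make the Lipschitz and convexity guarantees mentioned above enforceable.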
Michaël Unser, Pakshal Narendra Bohra