
# Statistical Inference for Inverse Problems: From Sparsity-Based Methods to Neural Networks

## Abstract

In inverse problems, the task is to reconstruct an unknown signal from its possibly noise-corrupted measurements. Penalized-likelihood estimation and Bayesian estimation are two powerful statistical paradigms for the resolution of such problems: they allow one to exploit prior information about the signal and to handle the noisy measurements in a principled manner. This thesis is dedicated to the development of novel signal-reconstruction methods within these paradigms, ranging from methods that involve classical sparsity-based signal models to methods that leverage neural networks.

In the first part of the thesis, we focus on sparse signal models in the context of linear inverse problems for one-dimensional (1D) signals. As our first contribution, we devise an algorithm for solving generalized-interpolation problems with $L_p$-norm regularization. Through a series of experiments, we examine the features induced by this regularization, namely sparsity, regularity, and oscillatory behaviour, which gives us new insight into its effect. As our second contribution, we present a framework based on 1D sparse stochastic processes for the objective evaluation and comparison of signal-reconstruction algorithms. Specifically, we derive efficient Gibbs-sampling schemes to compute the minimum-mean-square-error (MMSE) estimators for these processes, which allows us to quantify the degree of optimality of any given method. Our framework also provides access to arbitrarily many training data, thus enabling the benchmarking of neural-network-based approaches.

The second part of the thesis is devoted to neural networks, which have become the focus of much current research in inverse problems because they typically outperform classical sparsity-based methods. First, we develop an efficient module for learning component-wise continuous piecewise-linear activation functions in neural networks. We deploy this module to train 1-Lipschitz denoising convolutional neural networks and learnable convex regularizers, both of which can be used to design provably convergent iterative reconstruction methods. Next, we design a complete Bayesian-inference pipeline for nonlinear inverse problems that leverages the power of deep generative signal models to produce high-quality reconstructions together with uncertainty maps. Finally, we propose a neural-network-based spatiotemporal regularization scheme for dynamic Fourier ptychography (FP), where the goal is to recover a sequence of high-resolution images from several low-resolution intensity measurements. Our approach requires no training data and yields state-of-the-art reconstructions.
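The learnable-activation module is only described at a high level here. As a purely illustrative sketch (not the thesis's implementation), a continuous piecewise-linear function can be parameterized by its values at a fixed set of knots, evaluated by linear interpolation between knots and extended linearly beyond them:

```python
# Illustrative sketch (not the thesis's implementation): a continuous
# piecewise-linear (CPWL) function parameterized by its values at given
# knot positions, evaluated by linear interpolation and extended
# linearly outside the knot range.

def cpwl_activation(x, knots, values):
    """Evaluate a CPWL function defined by (knots, values) at x."""
    if x <= knots[0]:
        # extend with the slope of the first segment
        slope = (values[1] - values[0]) / (knots[1] - knots[0])
        return values[0] + slope * (x - knots[0])
    if x >= knots[-1]:
        # extend with the slope of the last segment
        slope = (values[-1] - values[-2]) / (knots[-1] - knots[-2])
        return values[-1] + slope * (x - knots[-1])
    for i in range(len(knots) - 1):
        if knots[i] <= x <= knots[i + 1]:
            t = (x - knots[i]) / (knots[i + 1] - knots[i])
            return (1 - t) * values[i] + t * values[i + 1]

# With knots [-1, 0, 1] and values [1, 0, 1] this reproduces |x|:
print(cpwl_activation(0.5, [-1.0, 0.0, 1.0], [1.0, 0.0, 1.0]))   # 0.5
print(cpwl_activation(-2.0, [-1.0, 0.0, 1.0], [1.0, 0.0, 1.0]))  # 2.0
```

In a trainable setting, the knot values would be the learnable parameters of the activation; the names and the interpolation scheme above are assumptions for illustration only.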


Related concepts (32)

Recurrent neural network

A recurrent neural network (RNN) is a class of artificial neural network characterized by the direction in which information flows between its layers. In contrast to a feedforward neural network, where information flows in one direction only, an RNN allows the output of some nodes to affect subsequent inputs to those same nodes. This internal state (memory) lets RNNs process arbitrary sequences of inputs, making them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.
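The defining recurrence can be sketched in a few lines. This is a minimal scalar illustration (weights and the tanh nonlinearity are illustrative choices; real layers use weight matrices):

```python
import math

# Minimal sketch of the recurrence that defines a simple (Elman-style) RNN:
# the hidden state at step t depends on the current input AND the previous
# hidden state, which is what gives the network its internal memory.
# Scalar weights are used for readability.

def rnn_forward(inputs, w_in=0.5, w_rec=0.8, bias=0.0):
    """Return the sequence of hidden states for a scalar RNN."""
    h = 0.0
    states = []
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h + bias)  # recurrent update
        states.append(h)
    return states

# The input at t = 0 keeps influencing later states through the recurrence:
states = rnn_forward([1.0, 0.0, 0.0])
print(states)
```

Note that the later states are nonzero even though the later inputs are zero; that persistence of information is exactly the "internal state" mentioned above.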

Artificial neural network

Artificial neural networks (ANNs, also shortened to neural networks (NNs) or neural nets) are machine-learning models inspired by the organization of the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like a synapse in a biological brain, can transmit a signal to other neurons.

Compressed sensing

Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal-processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than the Nyquist–Shannon sampling theorem requires. Recovery is possible under two conditions: sparsity, which requires the signal to be sparse in some domain, and incoherence between the sensing and representation domains.
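A standard way to solve such underdetermined systems is to penalize the $\ell_1$ norm of the solution. The sketch below uses the iterative shrinkage-thresholding algorithm (ISTA) on a tiny illustrative system (the matrix, step size, and penalty are assumptions chosen for the example, not taken from any particular publication):

```python
# Sketch of sparse recovery from an underdetermined linear system via the
# iterative shrinkage-thresholding algorithm (ISTA), which minimizes
#     (1/2) * ||y - A x||^2 + lam * ||x||_1.
# Pure-Python matrix operations keep the example self-contained.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def matTvec(A, v):
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft(x, t):
    # soft-thresholding: the proximal operator of the l1 norm
    return [max(abs(xi) - t, 0.0) * (1.0 if xi >= 0 else -1.0) for xi in x]

def ista(A, y, lam=0.05, step=0.2, iters=500):
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]   # residual y - Ax
        grad = [-g for g in matTvec(A, r)]                 # gradient of the data term
        x = soft([xi - step * gi for xi, gi in zip(x, grad)], step * lam)
    return x

# 2 measurements of a 4-dimensional 1-sparse signal x = [0, 0, 3, 0]:
A = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 1.0, 1.0]]
y = matvec(A, [0.0, 0.0, 3.0, 0.0])  # -> [3.0, 3.0]
x_hat = ista(A, y)
print([round(v, 2) for v in x_hat])  # close to [0, 0, 3, 0]
```

Even though the system has fewer equations (2) than unknowns (4), the $\ell_1$ penalty singles out the sparse solution, up to a small shrinkage bias on the nonzero entry.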

Related MOOCs (26)

Neuronal Dynamics - Computational Neuroscience of Single Neurons

The activity of neurons in the brain and the code used by these neurons is described by mathematical neuron models at different levels of detail.

Neuronal Dynamics 2 - Computational Neuroscience: Neuronal Dynamics of Cognition

This course explains the mathematical and computational models that are used in the field of theoretical neuroscience to analyze the collective dynamics of thousands of interacting neurons.

Related publications (83)

Deep neural networks have become ubiquitous in today's technological landscape, finding their way in a vast array of applications. Deep supervised learning, which relies on large labeled datasets, has been particularly successful in areas such as image cla ...

Michaël Unser, Pakshal Narendra Bohra

We present a statistical framework to benchmark the performance of reconstruction algorithms for linear inverse problems, in particular, neural-network-based methods that require large quantities of training data. We generate synthetic signals as realizati ...
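The entry above mentions generating synthetic signals as realizations of stochastic processes. As a hedged illustration only (the rate and amplitude law below are arbitrary choices, not those of the paper), one classical example of a sparse stochastic process is a piecewise-constant signal with randomly located Gaussian jumps:

```python
import random

# Illustrative sketch: a realization of a piecewise-constant signal whose
# jump locations follow a Bernoulli approximation of a Poisson process and
# whose jump amplitudes are Gaussian.  Such sparse stochastic processes can
# serve as synthetic ground truth for benchmarking reconstruction
# algorithms; the rate and amplitude distribution here are illustrative.

def sparse_piecewise_constant(n, jump_prob=0.05, amp_std=1.0, seed=0):
    rng = random.Random(seed)  # seeded for reproducible benchmarks
    signal, level = [], 0.0
    for _ in range(n):
        if rng.random() < jump_prob:        # sparse set of jump locations
            level += rng.gauss(0.0, amp_std)
        signal.append(level)
    return signal

s = sparse_piecewise_constant(200)
jumps = sum(1 for a, b in zip(s, s[1:]) if a != b)
print(len(s), jumps)  # 200 samples; roughly 200 * 0.05 = 10 jumps
```

Because the generator is seeded, an unlimited and reproducible supply of such signals (and their noisy measurements) can be produced, which is what makes this kind of model convenient for benchmarking data-hungry learned methods.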

In the rapidly evolving landscape of machine learning research, neural networks stand out with their ever-expanding number of parameters and reliance on increasingly large datasets. The financial cost and computational resources required for the training p ...