
Variational Methods for Continuous-Domain Inverse Problems: The Quest for the Sparsest Solution

Abstract

The goal of this thesis is to study continuous-domain inverse problems for the reconstruction of sparse signals and to develop efficient algorithms to solve such problems computationally. The task is to recover a signal of interest as a continuous function from a finite number of measurements. Since this problem is severely ill-posed, we choose to favor sparse reconstructions. We achieve this by formulating an optimization problem with a regularization term involving the total-variation (TV) norm for measures. However, such problems often have nonunique solutions, some of which, contrary to expectations, may not be sparse. This requires particular care to ensure that we reach a desired sparse solution.

Our contributions are divided into three parts. In the first part, we propose exact discretization methods for large classes of TV-based problems with generic measurement operators for one-dimensional signals. Our methods are based on representer theorems stating that our problems have spline solutions. Our approach thus consists in performing an exact discretization of the problems in spline bases, and we propose algorithms that ensure that we reach a desired sparse solution. We then extend this approach to signals that are expressed as a sum of components with different characteristics: we consider either signals whose components are sparse in different bases, or signals whose first component is sparse and whose second is smooth.

In the second part, we consider more specific TV-based problems and focus on the identification of cases of uniqueness. In cases of nonuniqueness, we provide a precise description of the solution set, and more specifically of the sparsest solutions. We then leverage this theoretical study to design efficient algorithms that reach such a solution. Along these lines, we consider the problem of interpolating one-dimensional data points with second-order TV regularization. We also study this same problem with an added Lipschitz constraint to favor stable solutions. Finally, we consider the problem of recovering periodic splines from low-frequency Fourier measurements, which we prove to always have a unique solution.

In the third and final part, we apply our sparsity-based frameworks to various real-world problems. Our first application is a method for fitting sparse curves to contour data; the second is an image-reconstruction method for scanning transmission X-ray microscopy.
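As a toy illustration of the sparsity promoted by second-order TV regularization: a canonical sparse solution of the one-dimensional interpolation problem is a piecewise-linear spline, whose second derivative is a sum of a few Dirac impulses located at the knots. The following minimal numerical sketch (the data points are made up for illustration) checks this on a fine grid:

```python
import numpy as np

# Made-up data points to interpolate.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.0, 1.0, 3.0, 0.0])

# The piecewise-linear interpolant is a sparse spline solution of the
# second-order TV interpolation problem: its second derivative is a
# stream of Diracs supported on the interior knots.
t = np.linspace(0.0, 4.0, 401)       # fine grid, step h = 0.01
f = np.interp(t, x, y)               # piecewise-linear interpolant

# Discrete second derivative: nonzero only where the slope changes.
d2 = np.diff(f, 2)
knots = int(np.sum(np.abs(d2) > 1e-8))
print(knots)                         # → 3 (the interior knots at t = 1, 2, 3)
```

Only 3 of the 399 second-difference samples are active, which is exactly the kind of "innovation" sparsity that the TV norm for measures rewards.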


Related concepts (34)

Related MOOCs (32)

Related publications (100)

Compressed sensing

Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It relies on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible: sparsity of the signal in some domain, and incoherence of the sensing modality with that domain.
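To make this principle concrete, here is a minimal sketch (all sizes and values are made up for illustration) in which a length-100 signal with only 3 nonzero entries is recovered from 30 random Gaussian measurements by ℓ1-regularized least squares, minimized with the classical iterative soft-thresholding algorithm (ISTA):

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-100 signal with only 3 nonzero entries (sparsity k = 3).
n, m = 30, 100                      # 30 measurements of a length-100 signal
x_true = np.zeros(m)
x_true[[12, 47, 83]] = [3.0, -2.0, 4.0]

# Random Gaussian sensing matrix and noiseless measurements y = A x.
A = rng.normal(size=(n, m)) / np.sqrt(n)
y = A @ x_true

# ISTA for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(3000):
    z = x - (A.T @ (A @ x - y)) / L                         # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold

# The large entries of x now sit on the true support {12, 47, 83},
# even though the linear system A x = y is badly underdetermined.
```

Solving the same 30-equation system by plain least squares would spread energy over all 100 entries; the ℓ1 penalty is what concentrates it on the few true coefficients.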

Sparse approximation

Sparse approximation (also known as sparse representation) theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in image processing, signal processing, machine learning, medical imaging, and more. Consider a linear system of equations x = Dα, where D is an underdetermined m × p matrix (m < p) and x ∈ ℝ^m. The matrix D (typically assumed to be full-rank) is referred to as the dictionary, and x is a signal of interest.
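One classical greedy technique for this problem is orthogonal matching pursuit (OMP), which selects one atom at a time and re-fits the signal on the atoms chosen so far. Below is a minimal sketch; the dictionary, sizes, and sparse code are synthetic and made up for illustration:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily select k atoms of the
    dictionary D, then least-squares fit the signal x on them."""
    residual, support = x.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit on all selected atoms (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    alpha = np.zeros(D.shape[1])
    alpha[support] = coeffs
    return alpha

rng = np.random.default_rng(1)
m, p = 50, 100                       # underdetermined: m < p
D = rng.normal(size=(m, p))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
alpha_true = np.zeros(p)
alpha_true[[4, 17, 33]] = [2.0, -1.5, 3.0]
x = D @ alpha_true                   # an exactly 3-sparse signal in D
alpha = omp(D, x, k=3)               # recovers the sparse code
```

After three iterations the residual is (numerically) zero and the recovered support matches the atoms that generated the signal.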

Sparse dictionary learning

Sparse dictionary learning (also known as sparse coding or SDL) is a representation learning method that aims at finding a sparse representation of the input data as a linear combination of basic elements, as well as those basic elements themselves. These elements are called atoms, and together they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may form an over-complete spanning set. This setup also allows the dimensionality of the signals being represented to be higher than that of the signals being observed.
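Most dictionary-learning algorithms alternate between a sparse-coding step (codes given the dictionary) and a dictionary-update step (dictionary given the codes). The sketch below illustrates only the second half on made-up synthetic data: with the codes held fixed, the update is a plain least-squares problem, solved in closed form as in the method of optimal directions (MOD):

```python
import numpy as np

rng = np.random.default_rng(2)
m, p, k, N = 16, 32, 3, 400

# Synthetic training set: each sample combines k atoms of a hidden
# dictionary, so a sparse representation exists by construction.
D_hidden = rng.normal(size=(m, p))
D_hidden /= np.linalg.norm(D_hidden, axis=0)
A = np.zeros((p, N))                       # sparse codes, k nonzeros each
for i in range(N):
    A[rng.choice(p, k, replace=False), i] = rng.normal(size=k)
X = D_hidden @ A                           # training data, one sample per column

# Dictionary-update step with the codes A held fixed: minimize
# ||X - D A||_F over D, solved in closed form (MOD update).
D0 = rng.normal(size=(m, p))               # random initial dictionary
D1 = X @ A.T @ np.linalg.pinv(A @ A.T)     # method-of-optimal-directions update

err0 = np.linalg.norm(X - D0 @ A)
err1 = np.linalg.norm(X - D1 @ A)
# err1 <= err0 is guaranteed, since D1 is the least-squares minimizer;
# here the fit is essentially exact because the codes that actually
# generated X were used.
```

A full learner would re-estimate the sparse codes (e.g. with OMP) after each such update and iterate; this fragment isolates the closed-form half of that loop.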

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis, and filters. This module can be used as a starting point or as a basic refresher in elementary DSP.

Digital Signal Processing II

Adaptive signal processing, A/D and D/A conversion. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization.

Digital Signal Processing III

Advanced topics: this module covers real-time audio processing (with examples on a hardware board), image processing, and communication system design.

Non-convex constrained optimization problems have become a powerful framework for modeling a wide range of machine learning problems, with applications in k-means clustering, large-scale semidefinite programs (SDPs), and various other tasks. As the perfor ...

In this thesis, we reveal that supervised learning and inverse problems share similar mathematical foundations. Consequently, we are able to present a unified variational view of these tasks that we formulate as optimization problems posed over infinite-di ...

One of the main goals of Artificial Intelligence is to develop models capable of providing valuable predictions in real-world environments. In particular, Machine Learning (ML) seeks to design such models by learning from examples coming from this same envi ...