
Comparison of non-parametric T2 relaxometry methods for myelin water quantification

Abstract

Multi-component T2 relaxometry probes tissue microstructure by assessing compartment-specific T2 relaxation times and water fractions, including the myelin water fraction. Non-negative least squares (NNLS) with zero-order Tikhonov regularization is the conventional method for estimating smooth T2 distributions. Although this method improves on non-regularized NNLS, the solution remains sensitive to the underlying noise and to the regularization weight. This is especially relevant at clinically achievable signal-to-noise ratios. The inverse-problems literature offers several well-established approaches for promoting smooth solutions, including first-order and second-order Tikhonov regularization, as well as different criteria for estimating the regularization weight, such as the L-curve, Generalized Cross-Validation, and Chi-square residual fitting. However, quantitative comparisons between the available reconstruction methods for computing the T2 distribution, and between the different approaches for selecting the optimal regularization weight, are lacking. In this study, we implemented and evaluated ten reconstruction algorithms, resulting from combining each of three penalty terms with each of three criteria for estimating the regularization weight, plus non-regularized NNLS. Their performance was evaluated on both simulated data and real brain MRI data acquired from healthy volunteers, through a scan-rescan repeatability analysis. Our findings demonstrate the need for regularization. As a result of this work, we provide a list of recommendations for selecting the optimal reconstruction algorithm based on the acquired data. Moreover, the implemented methods were packaged in a freely distributed toolbox to promote reproducible research, and to facilitate further study and the clinical use of this promising quantitative technique.
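
To make the conventional estimator concrete, here is a minimal sketch of zero-order Tikhonov-regularized NNLS for a T2 spectrum, assuming a standard multi-echo acquisition. The echo times, T2 grid, regularization weight, and 40 ms myelin cutoff are illustrative choices, not the paper's settings.

```python
# Minimal sketch of zero-order Tikhonov-regularized NNLS for a T2 spectrum.
# All names and parameter values are illustrative, not the paper's toolbox.
import numpy as np
from scipy.optimize import nnls

# Acquisition: echo times (s) and a logarithmic grid of candidate T2 values (s).
te = np.arange(1, 33) * 10e-3             # 32 echoes, 10 ms spacing
t2_grid = np.logspace(np.log10(0.01), np.log10(2.0), 60)

# Dictionary of mono-exponential decays: A[i, j] = exp(-te_i / T2_j).
A = np.exp(-te[:, None] / t2_grid[None, :])

# Synthetic two-pool signal: 15% myelin water (T2 ~ 20 ms),
# 85% intra/extra-cellular water (T2 ~ 80 ms), plus measurement noise.
signal = 0.15 * np.exp(-te / 0.02) + 0.85 * np.exp(-te / 0.08)
signal += np.random.default_rng(0).normal(0, 1e-3, te.size)

# Zero-order Tikhonov: min ||A x - s||^2 + mu^2 ||x||^2 with x >= 0,
# implemented by augmenting the system with mu * I and zeros.
mu = 0.02                                 # regularization weight, fixed by hand here
A_aug = np.vstack([A, mu * np.eye(t2_grid.size)])
s_aug = np.concatenate([signal, np.zeros(t2_grid.size)])
x, _ = nnls(A_aug, s_aug)

# Myelin water fraction: spectrum mass below a 40 ms cutoff.
mwf = x[t2_grid < 0.04].sum() / x.sum()
print(f"estimated MWF: {mwf:.3f}")
```

First-order or second-order Tikhonov regularization amounts to replacing the identity block in the augmented system with a first- or second-difference matrix, and the criteria compared in the paper (L-curve, Generalized Cross-Validation, Chi-square residual fitting) would select `mu` from the data rather than fixing it by hand.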


Related concepts (10)

Data

In common usage and statistics, data is a collection of discrete or continuous values that convey information, describing the quantity, quality, fact, statistics, or other basic units of meaning.

Inverse problem

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, or source reconstruction in acoustics.

Estimation

Estimation (or estimating) is the process of finding an estimate or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available.

Related publications (2)

A sensor is a device that detects or measures a physical property and records, indicates, or otherwise responds to it. In other words, a sensor allows us to interact with the surrounding environment by measuring, qualitatively or quantitatively, a given phenomenon. Biological evolution provided every living entity with a set of sensors that ease survival in the face of daily challenges. In addition to these biological sensors, humans have developed and designed "artificial" sensors with the aim of improving our capacity to sense the real world. Today, thanks to technological developments, sensors are ubiquitous, and we therefore measure an exponentially growing amount of data. Here is the challenge: how do we process and use this data? Nowadays, it is common to design real-world sensing architectures that use the measured data to estimate certain parameters of the measured physical field. Such problems are known in mathematics as inverse problems, and finding their solution is challenging: we estimate a set of parameters of a physical field with possibly infinite degrees of freedom from only a few measurements, which are most likely corrupted by noise. Therefore, we would like to design algorithms that solve a given inverse problem while ensuring the existence of the solution, its uniqueness, and its robustness to measurement noise. In this thesis, we tackle different inverse problems, all inspired by real-world applications.

First, we propose a new regularization technique for linear inverse problems based on optimizing the placement of the sensor network collecting the data. We propose FrameSense, a greedy algorithm inspired by frame theory that finds, in polynomial time, a near-optimal sensor placement with respect to the reconstruction error of the inverse problem's solution. We substantiate our theoretical findings with numerical simulations showing that our method improves on the state of the art. In particular, we show significant improvements in two real-world applications: the thermal monitoring of many-core processors and the adaptive sampling scheduling of environmental sensor networks.

Second, we introduce the dual of the sensor placement problem, namely the source placement problem. In this case, instead of regularizing the inverse problem, we enable precise control of the physical field by means of a forward problem. For this problem, we propose a near-optimal algorithm for the noiseless case, that is, when we know exactly the current state of the physical field.

Third, we consider a family of physical phenomena that can be modeled by means of graphs, where the nodes represent a set of entities and the edges model the transmission delay of information between entities. Examples of such phenomena are the spreading of a virus within the population of a region or the spreading of a rumor on a social network. In this scenario, we identify two new key problems: source placement and vaccination. For the former, we would like to find a set of sources such that information spreads over the network as fast as possible. For the latter, we look for an optimal set of nodes to be "vaccinated" such that the virus spreads as slowly as possible. For both problems, we propose greedy algorithms that directly optimize the average infection time of the network. These algorithms outperform the current state of the art, and we evaluate their performance with a set of experiments on synthetic datasets.
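
As an illustration of the greedy-selection template behind placement methods of this kind, here is a hedged sketch that picks sensor locations one at a time to maximize a log-determinant proxy for the reconstruction error. This is a generic D-optimal-style heuristic, not the thesis's FrameSense algorithm or its frame-potential cost; all names and sizes are illustrative.

```python
# Hedged sketch: greedy sensor placement for a linear inverse problem y = Psi x + n.
import numpy as np

def greedy_placement(Psi: np.ndarray, n_sensors: int) -> list[int]:
    """Pick rows of Psi (candidate sensor locations) one at a time, each time
    choosing the row that most increases log det(Psi_S^T Psi_S + eps*I),
    a standard proxy for the reconstruction MSE of the regularized inverse."""
    n_candidates, n_params = Psi.shape
    chosen: list[int] = []
    M = 1e-6 * np.eye(n_params)             # running information matrix
    for _ in range(n_sensors):
        best_gain, best_i = -np.inf, -1
        for i in range(n_candidates):
            if i in chosen:
                continue
            row = Psi[i][:, None]
            # Rank-one lemma: det(M + r r^T) = det(M) * (1 + r^T M^{-1} r),
            # so the log-det gain of adding row r is log(1 + r^T M^{-1} r).
            gain = np.log1p((row.T @ np.linalg.solve(M, row)).item())
            if gain > best_gain:
                best_gain, best_i = gain, i
        chosen.append(best_i)
        M = M + Psi[best_i][:, None] @ Psi[best_i][None, :]
    return chosen

rng = np.random.default_rng(1)
Psi = rng.normal(size=(50, 5))              # 50 candidate locations, 5 field parameters
print(greedy_placement(Psi, n_sensors=8))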
Then, we discuss three distinct inverse problems for physical fields characterized by diffusive phenomena, such as the temperature of solid bodies or the dispersion of pollution in the atmosphere. We first study the uniform sampling and reconstruction of diffusion fields, and we show that we can exploit the kernel of the field to control and bound the aliasing error. Second, we study the estimation of the sources of a diffusive field from a set of spatio-temporal measurements, under the assumption that the sources can be modeled as a set of Dirac deltas. For this estimation problem, we propose an algorithm that exploits the eigenfunction representation of the diffusion field, and we show that it recovers the sources precisely. Third, we propose an algorithm for estimating the time-varying emissions of smokestacks from data collected in the surrounding environment by a sensor network, under the assumption that the emission rates can be modeled as signals lying on low-dimensional subspaces or having a finite rate of innovation.

Last, we analyze a classic non-linear inverse problem, namely sparse phase retrieval. In this problem, we would like to estimate a signal from just the magnitude of its Fourier transform. Phase retrieval is of interest for many scientific applications, such as X-ray crystallography and astronomy. We assume that the signal of interest is spatially sparse, as happens in many applications, and we model it as a linear combination of Dirac deltas. We derive sufficient conditions for the uniqueness of the solution based on the support of the autocorrelation function of the measured sparse signal. Finally, we propose a reconstruction algorithm for sparse phase retrieval that takes advantage of the sparsity of the signal of interest.
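
The uniqueness analysis for sparse phase retrieval starts from the fact that the Fourier magnitudes determine the signal's autocorrelation (the Wiener-Khinchin relation), whose support then carries the usable information. A small sketch of that identity, with an illustrative sparse signal:

```python
# Sketch: the Fourier magnitude of a signal determines its autocorrelation,
# the starting point for support-based uniqueness analysis in sparse phase
# retrieval. The signal and sizes are illustrative.
import numpy as np

n = 64
x = np.zeros(n)
x[[3, 10, 24]] = [1.0, 0.5, 2.0]           # sparse signal: a few Dirac-like spikes

mag2 = np.abs(np.fft.fft(x)) ** 2          # measured: squared Fourier magnitudes
autocorr = np.fft.ifft(mag2).real          # Wiener-Khinchin: IFFT of |X|^2

# Compare with the directly computed circular autocorrelation.
direct = np.array([np.dot(x, np.roll(x, k)) for k in range(n)])
print(np.allclose(autocorr, direct))       # True: phase is lost, support info remains
```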

As of today, extending human visual capabilities to machines remains both a cornerstone of and an open challenge in the attempt to develop intelligent systems. On the one hand, the development of ever more sophisticated imaging devices, capable of sensing richer information than a plain perspective projection of the real world, is critical to allow machines to understand the complex environment around them. On the other hand, despite the advances in imaging, the complexity of the real world cannot be fully captured by a single imaging device, due either to intrinsic hardware limitations or to the complexity of the environment itself. As a consequence, extending human visual capabilities to machines inevitably requires estimating, from the available captured data, unknown quantities that could not be measured directly. Equivalently, imaging requires the solution of arbitrarily complex inverse problems.
In most scenarios, inverse problems are ill-posed and admit an infinite number of solutions, while only one or a few of them are the desired ones. It therefore becomes crucial to reduce, or equivalently to regularize, the solution space by exploiting all the available prior information about the problem structure and, especially, about the target quantity to estimate. In this thesis we investigate the use of graph-based regularizers to encode our prior knowledge about the target quantity and to inject it directly into the inverse problem. In particular, we cast the inverse problem as an optimization task in which the target quantity is modelled as a graph whose topology captures our prior knowledge. In order to show the effectiveness and flexibility of graph-based regularizers, we study their use in different inverse imaging problems, each characterized by different geometrical constraints.
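
As a concrete instance of this idea, here is a minimal sketch of a smooth graph-based regularizer for a linear inverse problem, using a path graph and a random forward operator as illustrative stand-ins for the imaging setups studied in the thesis:

```python
# Minimal sketch of a smooth (Tikhonov-like) graph regularizer for a linear
# inverse problem: min_x ||A x - y||^2 + lam * x^T L x, with L the Laplacian
# of a graph encoding prior correlations. Graph and operator are illustrative.
import numpy as np

def graph_regularized_solve(A, y, L, lam):
    """Closed-form minimizer of ||A x - y||^2 + lam * x^T L x,
    obtained from the normal equations (A^T A + lam L) x = A^T y."""
    return np.linalg.solve(A.T @ A + lam * L, A.T @ y)

# Toy setup: recover a smooth signal on a path graph from a random projection.
n = 40
W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path-graph adjacency
L = np.diag(W.sum(axis=1)) - W                                 # combinatorial Laplacian

rng = np.random.default_rng(2)
x_true = np.sin(np.linspace(0, np.pi, n))       # smooth ground truth on the graph
A = rng.normal(size=(15, n))                    # under-determined forward operator
y = A @ x_true + 0.01 * rng.normal(size=15)

x_hat = graph_regularized_solve(A, y, L, lam=0.5)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```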
We start by investigating how to augment the resolution of a light field. Although light field cameras can capture the 3D information in a scene within a single exposure, thus providing much richer information than a perspective camera, their compact design dramatically limits their spatial resolution. We present a smooth graph-based regularizer that models the geometry of a light field explicitly, and we use it to augment the light field's spatial resolution while relying only on the complementary information encoded in the low-resolution light field views. In particular, we show that a graph-based regularizer makes it possible to enforce the light field's geometric structure without the need for a precise and costly disparity estimation step.
Then we analyze the further benefits of adopting nonsmooth graph-based regularizers, as these preserve edges and fine details better than their smooth counterparts. In particular, we focus on a specific nonsmooth graph-based regularizer and show its effectiveness in two applications. The first application revolves again around light field super-resolution, which permits a comparison with the smooth regularizer adopted previously. The second application is disparity estimation in omnidirectional stereo systems, where the two captured images and the target disparity map live on a spherical surface; hence a graph-based regularizer can be used to model the nontrivial correlation underlying each signal.
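
To illustrate the smooth-versus-nonsmooth distinction, the following sketch denoises a piecewise-constant signal on a path graph with both a quadratic graph penalty and a graph total-variation penalty, using cvxpy as a generic convex solver. The graph TV penalty here is a stand-in for the specific nonsmooth regularizer studied in the thesis, and the graph and data are illustrative.

```python
# Sketch contrasting a nonsmooth graph regularizer (graph total variation,
# ||D x||_1 with D the graph incidence matrix) against a smooth quadratic one;
# nonsmooth penalties preserve sharp transitions better.
import cvxpy as cp
import numpy as np

n = 40
# Incidence matrix of a path graph: row i of D computes x[i+1] - x[i].
D = np.diff(np.eye(n), axis=0)

rng = np.random.default_rng(3)
x_true = np.concatenate([np.zeros(n // 2), np.ones(n - n // 2)])  # piecewise constant
y = x_true + 0.1 * rng.normal(size=n)                             # noisy observation

x = cp.Variable(n)
for name, penalty in [("smooth (quadratic)", cp.sum_squares(D @ x)),
                      ("nonsmooth (graph TV)", cp.norm1(D @ x))]:
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - y) + 1.0 * penalty))
    prob.solve()
    err = np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)
    print(f"{name}: relative error {err:.3f}")
```

On this kind of piecewise-constant signal, the TV solution keeps the jump sharp while the quadratic penalty smears it, which mirrors the edge-preservation argument made above.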
Finally, we investigate the refinement of a depth map and the estimation of the corresponding normal map. In fact, ...