Publication

Learning approaches to high-fidelity optical diffraction tomography

Joowon Lim
2020
EPFL thesis
Abstract

Optical diffraction tomography (ODT) provides 3D refractive index (RI) distributions of transparent samples. Since RI values differ across materials, they serve as an endogenous contrast; ODT therefore enables imaging without labeling steps that can disturb samples during measurement. It has been used in applications ranging from hematology to the study of morphological parameters and biochemical information.

The fundamental principle of ODT reconstruction is to recover 3D information from multiple 2D measurements. Although this requires 2D measurements that fully scan the sample, some measurements remain inaccessible due to the limited numerical apertures (NAs) of the optical system. This is known as the missing cone problem, since the regions of Fourier space not covered by the NAs form cone shapes. The missing cone problem degrades the final reconstruction by underestimating RI values and, more severely, by elongating images along the optical axis. Another challenge in ODT reconstruction is modeling the nonlinear relationship between a sample and its measurements. To linearize this relationship, only the first order of scattering is commonly considered while higher orders are neglected; however, this degrades the final reconstruction as higher-order scattering becomes more pronounced.
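The missing cone can be illustrated numerically. The sketch below (a minimal 2D model with hypothetical parameters, not taken from the thesis) builds the union of Ewald arcs accessible under a limited NA and shows that frequencies with large axial component but small lateral component are never measured:

```python
import numpy as np

# Minimal 2D sketch of ODT Fourier coverage under a limited NA.
# All parameters are illustrative assumptions.
n_medium = 1.33          # immersion medium RI
wavelength = 0.532       # wavelength in um
NA = 1.2                 # numerical aperture of illumination/detection
k0 = 2 * np.pi * n_medium / wavelength
theta_max = np.arcsin(NA / n_medium)   # maximum half-angle set by the NA

N = 256
k = np.linspace(-2 * k0, 2 * k0, N)
KX, KZ = np.meshgrid(k, k, indexing="ij")

coverage = np.zeros((N, N), dtype=bool)
# Each illumination angle shifts the detectable Ewald arc in Fourier space;
# the union over all accessible angles is the measurable region.
for theta_in in np.linspace(-theta_max, theta_max, 181):
    phi = np.linspace(-theta_max, theta_max, 181)   # detection angles in the NA
    kx_s = k0 * np.sin(phi) - k0 * np.sin(theta_in)
    kz_s = k0 * np.cos(phi) - k0 * np.cos(theta_in)
    for x, z in zip(kx_s, kz_s):
        coverage[np.argmin(np.abs(k - x)), np.argmin(np.abs(k - z))] = True

# Frequencies with small |kx| but large |kz| stay unreachable: the missing cone.
missing = (np.abs(KX) < 0.05 * k0) & (np.abs(KZ) > 0.5 * k0)
print("covered pixels inside the cone region:", coverage[missing].sum())
```

Because the uncovered region sits around the axial frequency axis, the reconstruction loses exactly the information needed to resolve the sample along the optical axis, which is why the artifacts appear as axial elongation.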

In this thesis, we aim to solve these challenges in ODT reconstruction in order to provide more accurate quantitative information, namely RI distributions. The first approach is based on a model-based iterative reconstruction scheme. We choose the beam propagation method (BPM) as the forward model in order to capture higher orders of scattering. Owing to the similarity between the multi-layer structure of the BPM and that of the neural networks used in deep learning, we call this scheme learning tomography (LT). We rigorously compare the performance of LT with the conventional linear model-based reconstruction scheme. Furthermore, by adopting a more advanced BPM as the forward model, we improve LT further and demonstrate dramatically better performance in both simulations and experiments.
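The layer-by-layer structure that motivates the name "learning tomography" can be seen in a basic split-step BPM. The sketch below is a minimal 1D+z version with illustrative parameters (the thesis uses a more advanced BPM); each slice alternates a diffraction step with a refraction step, much like successive layers of a network:

```python
import numpy as np

# Minimal split-step beam propagation method (BPM) sketch.
# Hypothetical parameters; illustrative only, not the thesis implementation.
wavelength = 0.532        # um
n0 = 1.33                 # background RI
k0 = 2 * np.pi / wavelength
Nx, Nz = 128, 64          # lateral samples, axial slices
dx, dz = 0.1, 0.1         # um

kx = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)
# Paraxial diffraction kernel applied between RI layers.
prop = np.exp(-1j * kx**2 * dz / (2 * k0 * n0))

# Synthetic sample: a small block of RI contrast 0.03 in the middle slices.
delta_n = np.zeros((Nz, Nx))
delta_n[Nz // 2 - 5 : Nz // 2 + 5, Nx // 2 - 10 : Nx // 2 + 10] = 0.03

field = np.ones(Nx, dtype=complex)    # plane-wave illumination
for z in range(Nz):
    field = np.fft.ifft(np.fft.fft(field) * prop)      # diffraction step
    field = field * np.exp(1j * k0 * delta_n[z] * dz)  # refraction step

# Each slice applies a linear operator followed by a pointwise nonlinearity in
# the sample parameters, mirroring the layer structure of a neural network.
print("exit-plane phase on axis:", np.angle(field[Nx // 2]))
```

Because the field passes through every slice, the output depends nonlinearly on the whole RI distribution, which is how the BPM captures multiple scattering that a single-scattering (linear) model misses.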

The second approach statistically learns the artifacts present in final reconstructions using a deep neural network (DNN) trained on a large dataset. Unlike the previous approaches, which require iterations, the DNN reconstructs RI distributions instantly. We demonstrate the approach on red blood cells, which are highly distorted by the missing cone problem. To overcome the lack of ground truth in 3D ODT reconstruction, we digitally generate a synthetic dataset. The network produces highly accurate reconstructions on the synthetic test set; most importantly, it also yields high-fidelity reconstructions of experimental data despite being trained only on synthetic data.
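The core idea, learning to undo a known distortion from digitally generated pairs and then applying the learned mapping to new data in one pass, can be sketched with a toy example. Here a fixed blur stands in for missing-cone artifacts and a single linear layer stands in for the DNN; all shapes and operators are illustrative assumptions, not the thesis pipeline:

```python
import numpy as np

# Toy sketch of the synthetic-training idea: learn an artifact-removal mapping
# from digitally generated (clean, distorted) pairs, then restore unseen data
# without any per-sample iterations. Purely illustrative.
rng = np.random.default_rng(0)
n = 32                                   # length of a 1D "reconstruction"

# Distortion operator: a circulant blur (stand-in for missing-cone artifacts).
kernel = np.array([0.2, 0.6, 0.2])
A = np.zeros((n, n))
for i in range(n):
    for j, kv in enumerate(kernel):
        A[i, (i + j - 1) % n] += kv

# Synthetic dataset: random "ground-truth" profiles and their distortions.
X_clean = rng.random((500, n))
X_dist = X_clean @ A.T

# Fit a linear correction W (one-layer stand-in for the DNN) by least squares.
W, *_ = np.linalg.lstsq(X_dist, X_clean, rcond=None)

# A held-out sample distorted the same way is restored in a single pass.
x_true = rng.random(n)
x_rest = (x_true @ A.T) @ W
print("max restoration error:", np.abs(x_rest - x_true).max())
```

The same logic scales to the 3D case: because the missing-cone distortion is determined by the optical system rather than the sample, a network trained purely on synthetic pairs can generalize to experimental reconstructions, provided the synthetic distortion model matches the real one.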

Unlike other imaging modalities, ODT provides 3D quantitative information without labeling. To fully benefit from this capacity for quantitative imaging, it is critical to solve the existing challenges in ODT reconstruction and produce high-fidelity reconstructions. In this thesis, we address the major challenges in ODT reconstruction using various learning approaches, which we believe can further establish ODT as a powerful tool for a wide range of applications.
