Publication

Topologically Better Delineation of Curvilinear Structures

Doruk Oner
2023
EPFL thesis
Abstract

Curvilinear structures appear in a wide variety of domains and are essential for comprehending neural circuits, detecting fractures in materials, and mapping road and irrigation canal networks. Manually extracting these structures is costly and time-consuming, so automated systems are required for faster and more accurate analysis. Recently, deep learning-based approaches have been adopted for the automatic delineation of such structures. However, preserving the topology of curvilinear structures, which is essential for downstream applications, remains a significant challenge. Furthermore, deep learning models require vast quantities of precisely annotated data, which are difficult to acquire, especially in 3D. Commonly used pixel-wise loss functions cannot capture the topology of curvilinear structures, and deep networks trained with such losses are sensitive to imprecisions in the annotations. In this thesis, we propose topology-aware loss functions to tackle these problems and improve the topology of the reconstructions.

We begin by introducing a connectivity-oriented loss function for extracting network-like structures from 2D images. We express the connectivity of curvilinear structures in terms of the disconnections they create between background regions of the image. Our loss function is designed to prevent unwanted connections between background regions and therefore closes the gaps in the predicted structures. We then focus on using Persistent Homology (PH) to improve the topological quality of the reconstructions. We propose a new filtration technique that fuses two existing approaches: filtration by thresholding and by the height function. The proposed technique incorporates the location of topological errors and increases the descriptive power of PH. To tackle imprecise annotations, we propose a loss function based on active contour models.
We treat annotations as contour models and allow them to deform over the correct centerlines during training while preserving the topology of the structure. Our final contribution extends the aforementioned connectivity-oriented loss function to 3D data, which we achieve by computing the loss on 2D projections. This method also reduces the annotation effort required to produce training data. In experiments on 2D (satellite images of roads and irrigation canals) and 3D (Magnetic Resonance Angiography, Two-Photon Microscopy) datasets, we demonstrate that our loss functions significantly improve the topological quality of the reconstructions.
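The connectivity-oriented loss rests on a simple observation: a road or vessel separates background regions, so a gap in the prediction merges background regions that should stay apart. As a toy illustration of that observation (not the thesis implementation; all names here are hypothetical), counting 4-connected background components reveals the gap:

```python
from collections import deque

def background_components(grid):
    """Count 4-connected components of background (0) cells in a binary grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and not seen[r][c]:
                count += 1  # found a new background region; flood-fill it
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] == 0 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# A vertical road (1s) splits the background into two regions.
road = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]

# A one-pixel gap in the predicted road merges them into a single region.
broken = [[0, 1, 0],
          [0, 0, 0],
          [0, 1, 0]]

print(background_components(road))    # 2 background regions
print(background_components(broken))  # 1 background region
```

A loss that penalizes such unwanted paths between background regions pushes the network to close gaps in the predicted structure, which is the intuition behind the connectivity-oriented formulation.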
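The 3D extension evaluates the loss on 2D projections of the volume rather than on the volume itself. A minimal sketch of that idea, assuming maximum-intensity projections along the three axes and an arbitrary 2D loss (the function names and the mean-squared stand-in loss are illustrative, not the thesis code):

```python
def max_project(vol, axis):
    """Maximum-intensity projection of a nested-list 3D volume along one axis."""
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    if axis == 0:  # project out z
        return [[max(vol[z][y][x] for z in range(Z)) for x in range(X)] for y in range(Y)]
    if axis == 1:  # project out y
        return [[max(vol[z][y][x] for y in range(Y)) for x in range(X)] for z in range(Z)]
    return [[max(vol[z][y][x] for x in range(X)) for y in range(Y)] for z in range(Z)]

def mse_2d(a, b):
    """Stand-in for a topology-aware 2D loss: mean squared error over a 2D grid."""
    n = len(a) * len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(len(a)) for j in range(len(a[0]))) / n

def projected_loss(pred, target, loss_2d):
    """Average a 2D loss over the three axis-aligned projections of a volume."""
    return sum(loss_2d(max_project(pred, a), max_project(target, a))
               for a in range(3)) / 3.0
```

Because each projection is an ordinary 2D image, any 2D loss (including the connectivity-oriented one) can be reused unchanged, and annotating projections is cheaper than annotating full volumes.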

Related concepts (32)
Deep learning
Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" refers to the use of multiple layers in the network. The methods used can be supervised, semi-supervised, or unsupervised.
Feedforward neural network
A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by direction of the flow of information between its layers. Its flow is uni-directional, meaning that the information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes, without any cycles or loops, in contrast to recurrent neural networks, which have a bi-directional flow.
Types of artificial neural networks
There are many types of artificial neural networks (ANN). Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. Particularly, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.
Related publications (93)

Safe Deep Neural Networks

Kyle Michael Matoba

				The capabilities of deep learning systems have advanced much faster than our ability to understand them. Whilst the gains from deep neural networks (DNNs) are significant, they are accompanied by a growing risk and gravity of a bad outcome. This is tr ...
EPFL, 2024

Deep learning approach for identification of H II regions during reionization in 21-cm observations - II. Foreground contamination

Jean-Paul Richard Kneib, Emma Elizabeth Tolley, Tianyue Chen, Michele Bianco

The upcoming Square Kilometre Array Observatory will produce images of neutral hydrogen distribution during the epoch of reionization by observing the corresponding 21-cm signal. However, the 21-cm signal will be subject to instrumental limitations such as ...
Oxford Univ Press, 2024

Deep Learning Generalization with Limited and Noisy Labels

Mahsa Forouzesh

Deep neural networks have become ubiquitous in today's technological landscape, finding their way in a vast array of applications. Deep supervised learning, which relies on large labeled datasets, has been particularly successful in areas such as image cla ...
EPFL, 2023
Related MOOCs (23)
Neuronal Dynamics 2- Computational Neuroscience: Neuronal Dynamics of Cognition
This course explains the mathematical and computational models that are used in the field of theoretical neuroscience to analyze the collective dynamics of thousands of interacting neurons.
Neuronal Dynamics - Computational Neuroscience of Single Neurons
The activity of neurons in the brain and the code used by these neurons is described by mathematical neuron models at different levels of detail.