Understanding behavior from neural activity is a fundamental goal in neuroscience, with practical applications in building robust brain-machine interfaces, human-computer interaction, and assisting patients with neurological disabilities. Despite the ever-growing demand driven by these applications, neural action decoding methods still require laborious manual annotation. Furthermore, a different model must be trained for each new individual, requiring even more annotation and overwhelming the resources of most applications. This thesis proposes a self-supervised approach to unsupervised domain adaptation for neural action decoding. Our method does not require manual annotations and generalizes across individuals. To achieve this, we use animal motion capture to guide the encoding of neural action representations, together with a set of neural and behavioral augmentations. We therefore divide the thesis into two parts: in the first, we propose several novel methods for learning robust markerless motion capture systems for animals; in the second, we use motion capture to learn an unsupervised neural action decoding algorithm.

To start, we present a novel unsupervised markerless 2D pose estimation system. Extracting pose information from laboratory animals requires significant annotation labor. Our model requires only unrealistic renderings of the target animal, without any manual pose labeling. Because deep learning models cannot be trained directly on unrealistic renderings of laboratory animals due to the domain gap, we propose a deformation-based generative network that transforms the renderings into realistic images while keeping the pose information intact. This enables us to train a pose estimation network on generated yet realistic images without manual annotations.

Second, we propose a markerless, self-calibrating motion capture system for tethered animals. Although a 2D pose suffices for some applications, it is incomplete because of the information lost in camera projection. On the other hand, triangulating 3D poses from multiple images requires camera calibration, a tedious process that is impractical for many laboratory setups, for example with the small model organism Drosophila. We design a self-calibrating system that uses the animal itself as the calibration pattern. To make the system even more reliable, we use the shape and skeleton of the animal as a 3D prior, thereby correcting the inference mistakes of deep learning-based methods.

Third, we propose a monocular 3D motion capture algorithm for laboratory animals. Although 3D pose provides all the necessary information about movement, multi-camera systems require expensive hardware and multi-view synchronization. We propose a monocular system that recovers 3D pose without requiring multiple views. We test our model on numerous laboratory animals, including flies, mice, rats, and macaques, and show that our monocular motion capture system performs reliably and on par with multi-camera motion capture systems.

Lastly, we propose a novel unsupervised domain adaptation method for neural decoding of actions, which uses the markerless animal motion capture systems described above for domain adaptation. We show that our neural action decoding algorithm successfully generalizes across individuals. Furthermore, our method can provide better neural acti
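To illustrate the deformation-based generation idea described in the first part, the sketch below is a minimal PyTorch toy, not the thesis's actual architecture: a small network predicts a dense displacement field and warps the synthetic rendering with it. Because the warp is an explicit geometric transform, the same displacements can be applied to the rendering's keypoint labels, which is one common way to keep pose information intact while changing image appearance. All layer sizes and names are illustrative assumptions; the adversarial training against real images that such a generator would normally require is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformationGenerator(nn.Module):
    """Toy deformation-based generator: predicts a per-pixel displacement
    field and warps the input rendering with it. The same warp can be
    applied to keypoint labels, so pose supervision stays consistent."""

    def __init__(self):
        super().__init__()
        self.flow_net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),  # (dx, dy) per pixel
        )

    def forward(self, rendering):
        b, _, h, w = rendering.shape
        # Keep displacements small (at most 0.1 of the normalized image
        # extent) so the pose is only refined, not destroyed.
        flow = 0.1 * torch.tanh(self.flow_net(rendering))
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=rendering.device),
            torch.linspace(-1.0, 1.0, w, device=rendering.device),
            indexing="ij",
        )
        base_grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
        sample_grid = base_grid + flow.permute(0, 2, 3, 1)
        return F.grid_sample(rendering, sample_grid, align_corners=True)

# Hypothetical usage with a batch of synthetic renderings.
fake_renderings = torch.rand(4, 3, 128, 128)
realistic_like = DeformationGenerator()(fake_renderings)
```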
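For the self-calibration idea of the second part, using the animal itself as the calibration pattern, a minimal sketch under simplifying assumptions is shown below: a toy pinhole camera with only rotation, translation, and focal length, and complete 2D keypoint detections in every view. It jointly refines camera parameters and 3D keypoints by minimizing reprojection error with SciPy's least_squares. All function names are placeholders; the thesis's actual formulation, outlier handling, and the shape and skeleton prior are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points3d, cam):
    """Project 3D points with one camera's (rvec, tvec, focal) parameters."""
    rvec, tvec, f = cam[:3], cam[3:6], cam[6]
    pts = (rodrigues(rvec) @ points3d.T).T + tvec
    return f * pts[:, :2] / pts[:, 2:3]

def residuals(params, n_cams, n_pts, detections_2d):
    """Reprojection error of all keypoints in all views."""
    cams = params[:n_cams * 7].reshape(n_cams, 7)
    pts3d = params[n_cams * 7:].reshape(n_pts, 3)
    errs = [project(pts3d, cams[c]) - detections_2d[c] for c in range(n_cams)]
    return np.concatenate(errs).ravel()

def self_calibrate(detections_2d, init_cams, init_pts3d):
    """Jointly refine cameras and 3D keypoints detected on the animal,
    i.e. the animal's own body serves as the calibration pattern."""
    n_cams, n_pts = init_cams.shape[0], init_pts3d.shape[0]
    x0 = np.concatenate([init_cams.ravel(), init_pts3d.ravel()])
    sol = least_squares(residuals, x0, args=(n_cams, n_pts, detections_2d))
    return (sol.x[:n_cams * 7].reshape(n_cams, 7),
            sol.x[n_cams * 7:].reshape(n_pts, 3))
```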
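For the final part, one common way to set up self-supervised learning of neural action representations from paired behavior is a cross-modal contrastive (InfoNCE-style) objective: encode a window of neural activity and the motion-capture-derived behavior from the same window, then pull the matching pair together while pushing apart mismatched pairs within the batch. The sketch below assumes this setup; the encoders, dimensions, and the omission of the thesis's specific neural and behavioral augmentations are all simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WindowEncoder(nn.Module):
    """Placeholder MLP encoder for a flattened time window of one modality."""
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x):
        # L2-normalized embeddings so dot products are cosine similarities.
        return F.normalize(self.net(x), dim=-1)

def cross_modal_info_nce(z_neural, z_behavior, temperature=0.1):
    """InfoNCE loss: the matching neural/behavior pair in the batch is the
    positive; every other pairing acts as a negative."""
    logits = z_neural @ z_behavior.t() / temperature
    targets = torch.arange(z_neural.size(0), device=z_neural.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical shapes: 64 neural channels and 30 3D joints over 50 frames.
neural_encoder = WindowEncoder(in_dim=64 * 50)
behavior_encoder = WindowEncoder(in_dim=30 * 3 * 50)
neural_batch = torch.randn(32, 64 * 50)        # augmented neural windows
behavior_batch = torch.randn(32, 30 * 3 * 50)  # augmented 3D pose windows
loss = cross_modal_info_nce(neural_encoder(neural_batch),
                            behavior_encoder(behavior_batch))
```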