How does reliable computation emerge from networks of noisy neurons? While individual neurons are intrinsically noisy, the collective dynamics of populations of neurons taken as a whole can be almost deterministic, supporting the hypothesis that, in the brain, computation takes place at the level of neuronal populations. Mathematical models of networks of noisy spiking neurons allow us to study the effects of neuronal noise on the dynamics of large networks. Classical mean-field models, i.e., models where all neurons are identical and where each neuron receives the average spike activity of the other neurons, offer toy examples in which neuronal noise is absorbed in large networks; that is, large networks behave like deterministic systems. In particular, the dynamics of these large networks can be described by deterministic neuronal population equations.

In this thesis, I first generalize classical mean-field limit proofs to a broad class of spiking neuron models that can exhibit spike-frequency adaptation and short-term synaptic plasticity, in addition to refractoriness. The mean-field limit can be exactly described by a multidimensional partial differential equation, whose long-time behavior can be rigorously studied using deterministic methods.

Second, I show that there is a conceptual link between mean-field models for networks of spiking neurons and latent variable models used for the analysis of multi-neuronal recordings. More specifically, I use a recently proposed finite-size neuronal population equation, which I first mathematically clarify, to design a tractable Expectation-Maximization-type algorithm capable of inferring the latent population activities of multi-population spiking neural networks from the spike activity of only a few visible neurons. This illustrates the idea that latent variable models can be seen as partially observed mean-field models.

Finally, in classical mean-field models, neurons in large networks behave like independent, identically distributed processes driven by the average population activity, a quantity that is deterministic by the law of large numbers. The fact that the neurons are identically distributed processes implies a form of redundancy that has not been observed in the cortex and that seems biologically implausible. To show, numerically, that the redundancy present in classical mean-field models is unnecessary for the absorption of neuronal noise in large networks, I construct a disordered network model in which networks of spiking neurons behave like deterministic rate networks, despite the absence of redundancy. This last result suggests that the concentration of measure phenomenon, which generalizes the "law of large numbers" of classical mean-field models, may be an instrumental principle for understanding the emergence of noise-robust population dynamics in large networks of noisy neurons.
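The abstract does not reproduce the population equation itself. As a point of reference only, the classical one-dimensional special case of such a neuronal population equation (the age-structured, or refractory-density, equation for spiking neurons with escape noise, standard in the literature) reads

\[
\partial_t q(t,\tau) + \partial_\tau q(t,\tau) = -\lambda\big(\tau, h(t)\big)\, q(t,\tau), \qquad \tau > 0,
\]
\[
q(t,0) = A(t) = \int_0^\infty \lambda\big(\tau, h(t)\big)\, q(t,\tau)\, \mathrm{d}\tau,
\]

where q(t, τ) is the density of neurons whose last spike occurred τ time units ago, λ is the escape rate (hazard), h(t) is the input potential driven by the population activity A(t), and the boundary condition expresses that neurons re-enter at age zero when they spike. This is a sketch of the classical case, not the thesis's equation; the multidimensional equation studied in the thesis extends the state τ with adaptation and synaptic variables.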
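To make the law-of-large-numbers mechanism concrete, here is a minimal numerical sketch (not code from the thesis; the sigmoidal rate function, all parameter values, and the name simulate_population_activity are illustrative assumptions) showing that the empirical activity of a toy mean-field population becomes almost deterministic as the number of neurons grows:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_population_activity(n_neurons, n_steps=1000, dt=1e-3):
        # Toy mean-field network: in each time bin, every neuron spikes
        # independently with a probability set by a sigmoidal function of the
        # current empirical population activity (illustrative parameters only).
        a = 10.0                       # initial population activity (Hz)
        activity = np.empty(n_steps)
        for t in range(n_steps):
            rate = 50.0 / (1.0 + np.exp(-(a - 10.0)))    # firing rate (Hz)
            spikes = rng.random(n_neurons) < rate * dt   # Bernoulli spiking
            a = spikes.mean() / dt                       # empirical activity (Hz)
            activity[t] = a
        return activity

    for n in (100, 10_000, 1_000_000):
        act = simulate_population_activity(n)
        print(f"N={n:>9}: std of stationary activity = {act[200:].std():.2f} Hz")

The fluctuations of the empirical activity shrink roughly like 1/sqrt(N), so for large N the population activity traces an (almost) deterministic trajectory even though every single neuron remains noisy, which is exactly the noise-absorption effect described above.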