
Publication: On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs

Abstract

Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability that rates remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability.
Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity.
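The divergence phenomenon described in the abstract is easy to reproduce in simulation. The sketch below is a toy discretized nonlinear Hawkes PP-GLM with an exponential link; the filters, baseline rate, and `rate_cap` threshold are illustrative assumptions, not the paper's estimated models. A refractory-like (negative) spike-history filter keeps the rate bounded, while a strong self-exciting filter drives the conditional intensity past an unphysiological cap:

```python
import numpy as np

def simulate_pp_glm(b, kernel, dt=0.001, T=5.0, rate_cap=1e4, seed=0):
    """Simulate a discretized nonlinear Hawkes PP-GLM with exponential link:
    lambda(t) = exp(b + sum over past spikes of kernel[t - t_spike]).

    Returns (spike_times, diverged); `diverged` flags that the intensity
    crossed `rate_cap`, an illustrative 'unphysiological' threshold.
    """
    rng = np.random.default_rng(seed)
    n_bins, L = int(T / dt), len(kernel)
    recent, spike_times, diverged = [], [], False
    for t in range(n_bins):
        recent = [s for s in recent if t - s < L]  # spikes still in the filter
        lam = np.exp(b + sum(kernel[t - s] for s in recent))
        if lam > rate_cap:
            diverged = True
            break
        if rng.random() < min(lam * dt, 1.0):      # Bernoulli spike in this bin
            recent.append(t)
            spike_times.append(t * dt)
    return spike_times, diverged

# refractory-like (inhibitory) filter: the rate stays near its 10 Hz baseline
_, diverged_stable = simulate_pp_glm(b=np.log(10.0), kernel=-3.0 * np.ones(100))
# strong self-excitation: the intensity typically runs away within seconds
_, diverged_fragile = simulate_pp_glm(b=np.log(10.0), kernel=2.0 * np.ones(50))
```

With the inhibitory filter the log-intensity can only be pushed below its baseline, so the rate is bounded by construction; with the excitatory filter a few coincident spikes within the filter window compound into a runaway cascade.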

Wulfram Gerstner, Skander Mensi, Richard Naud

The ability of simple mathematical models to predict the activity of single neurons is important for computational neuroscience. For neurons stimulated by a time-dependent current or conductance, we want to precisely predict the timing of spikes and the sub-threshold voltage. In recent years several models have been tested on this type of data, but never compared under the same protocol. One of the major outcomes is that, beyond a certain degree of complexity, all are very efficient and give statistically indistinguishable results. We studied a class of integrate-and-fire (IF) models, with each member of the class implementing a selection of possible improvements: exponential voltage non-linearity, spike-triggered adaptation current, spike-triggered change in conductance, moving threshold, and sub-threshold voltage-dependent currents. Each refinement adds a new term to the equations of the IF model. This IF family is extendable, adaptable to different neuron types, and able to deal with complex neural activities (e.g., adaptation, facilitation, bursting, relative refractoriness). To systematically explore the effect of a given term of the model, a new fitting procedure based on linear regression of the voltage change is used. This method is fast and robust, and allows the extraction of all the model parameters, without any prior knowledge, from a few seconds of recordings of fluctuating injected current and membrane potential containing hundreds of spikes. To investigate the effect of our modifications, we used as training data three different Hodgkin-Huxley-like models and two experimental recordings of fast-spiking and regular-spiking cells, and then evaluated each IF model on a given test set. We observe that it is possible to fit a model that reproduces the activity of neurons with high reliability (i.e., almost 100% of the spike times and less than 1 mV of sub-threshold voltage difference). Using this framework one can classify IF models in terms of complexity and performance and evaluate the importance of each term for different stimulation paradigms.
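The regression idea can be sketched for the simplest member of the family, a leaky integrate-and-fire neuron: the subthreshold dynamics are linear in voltage, current, and a constant, so a single least-squares fit of the measured voltage change recovers all parameters at once. The leaky-IF form, parameter values, and synthetic "recording" below are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

# Generate a synthetic subthreshold recording from a leaky IF neuron:
#   tau * dV/dt = -(V - E_L) + R * I
rng = np.random.default_rng(1)
dt, tau, E_L, R = 0.1, 10.0, -65.0, 50.0     # ms, ms, mV, MOhm (assumed)
I = rng.normal(0.0, 0.1, 5_000)              # fluctuating injected current (nA)
V = np.empty(len(I)); V[0] = E_L
for t in range(len(I) - 1):                  # forward-Euler 'recording'
    V[t + 1] = V[t] + dt * (-(V[t] - E_L) + R * I[t]) / tau

# Regress the measured voltage change on voltage, current, and a constant:
#   dV/dt = a*V + b*I + c
dVdt = np.diff(V) / dt
X = np.column_stack([V[:-1], I[:-1], np.ones(len(I) - 1)])
a, b, c = np.linalg.lstsq(X, dVdt, rcond=None)[0]
tau_hat, R_hat, E_L_hat = -1.0 / a, -b / a, -c / a   # recovered parameters
```

Because the model is linear in its parameters, no iterative optimization or prior knowledge is needed; on this noiseless synthetic trace the fit recovers `tau`, `R`, and `E_L` essentially exactly.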

2010

Computational neuroscience is a branch of the neurosciences that attempts to elucidate the principles underlying the operation of neurons with the help of mathematical modeling. In contrast with a number of fields pursuing a closely related goal, such as machine learning or statistical learning, computational neuroscience emphasizes the realistic description of neuronal function and organization, and therefore requires collaboration with experimental research in order to design new models and test their predictions. With the growing computational power available in modern computers, the simulation of neuronal networks with realistic numbers of neurons and functional organization is becoming routinely accessible. For the accurate design of such network models, the experimental characterization of neuronal response properties is required to probe the structural layout of biological networks at the single-cell level. We develop here a novel method, the dynamic I-V method, that allows for the extraction of cellular response properties and requires only a small amount of experimental data. Using this technique, a simplified model of the neuronal response is obtained that is shown to accurately fit the experimental data. We demonstrate the use of the dynamic I-V method on both cortical pyramidal cells and inhibitory interneurons, and give the distributions of cellular parameters for the studied sample of cells. We also give theoretical results on the spike-triggered average, a quantity that is easily measured experimentally, that relate the neuronal response properties to identifiable features of the spike-triggered average and can potentially be helpful for extending the dynamic I-V method to derive more accurate models. Potential applications and extensions of our results are discussed.
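The core of the dynamic I-V idea can be sketched as follows: estimate the intrinsic membrane current term F(V) = C·dV/dt − I_inj from a fluctuating-current recording, then average that estimate in voltage bins to recover the cell's I-V curve. The synthetic exponential integrate-and-fire "cell", its parameter values, and the crude spike handling below are illustrative assumptions, not the published protocol:

```python
import numpy as np

rng = np.random.default_rng(2)
C, gL, E_L, DT, VT, dt = 1.0, 0.05, -65.0, 2.0, -50.0, 0.1  # nF, uS, mV, mV, mV, ms

def F(V):  # intrinsic EIF membrane current (the curve we try to recover)
    return -gL * (V - E_L) + gL * DT * np.exp((V - VT) / DT)

# Synthetic recording: fluctuating current drives the cell; C dV/dt = F(V) + I
I = rng.normal(0.0, 5.0, 100_000)
V = np.empty(len(I)); V[0] = E_L
resets = []
for t in range(len(I) - 1):
    Vn = V[t] + dt * (F(V[t]) + I[t]) / C
    if Vn > -40.0:          # crude spike handling: reset the trace
        resets.append(t)
        Vn = E_L
    V[t + 1] = Vn

# Dynamic I-V estimate: F_hat = C * dV/dt - I_inj, excluding reset steps,
# then averaged in 1 mV voltage bins.
valid = np.ones(len(I) - 1, bool)
valid[resets] = False
F_hat = C * np.diff(V) / dt - I[:-1]
bins = np.arange(-75.0, -45.0, 1.0)
idx = np.digitize(V[:-1], bins)
centers, F_avg = [], []
for k in range(1, len(bins)):
    m = valid & (idx == k)
    if m.sum() > 50:        # keep only well-sampled voltage bins
        centers.append(0.5 * (bins[k - 1] + bins[k]))
        F_avg.append(F_hat[m].mean())
```

The binned averages `F_avg` trace out the cell's intrinsic I-V curve over the visited voltage range, which is the small amount of data the method needs to specify a simplified neuron model.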

Predicting the activity of single neurons is an important part of computational neuroscience and a great challenge. Several mathematical models exist, from the simple (one compartment and few parameters, like the SRM or IF-type models) to the more complex (biophysical models like the Hodgkin-Huxley model). All these models have their own advantages and limitations, but none is able to reproduce the exact behavior of real neurons. Multiple projects try to simulate complex neural networks, or even the whole brain (e.g., the Blue Brain Project and other large network simulations). To achieve this goal it is very important that simple neuron models simulate, at a low computational cost, the precise activities of all neuron classes. It is well known that neurons exhibit many different activity patterns (from an electrophysiological point of view). Here we have focused on pyramidal neurons from layer 5 of the neocortex. They are classified as regular-spiking cells. This cell type shows adaptation and, like other neurons, refractoriness. Adaptation and refractoriness are very common forms of neuronal activity in the brain, so it is important to have a simple model that can reproduce them. This project deals with two classical simple neuron models: the adaptive exponential integrate-and-fire model (AdEx) and the spike response model (SRM). We determined the parameters of these two models using data generated with a detailed model, Destexhe's model, which is a HH-like model for cortical pyramidal cells, stimulated with different current-injection scenarios. In a second step, the model parameters were set using data from in-vitro recordings of four layer-5 pyramidal rat neurons, stimulated with a sinusoidal in vivo-like protocol injected somatically in current-clamp configuration. We then show that this type of model can capture adaptation and can reproduce the activity of neurons with high reliability.

2008
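A minimal AdEx simulation in its generic textbook form (parameter values are assumed for illustration, not the values fitted in this project) makes the adaptation mechanism concrete: each spike increments an adaptation current `w`, which lengthens successive interspike intervals under constant stimulation:

```python
import numpy as np

# AdEx: C dV/dt = -gL(V - E_L) + gL*DT*exp((V - VT)/DT) - w + I
#       tau_w dw/dt = a(V - E_L) - w;  on spike: V -> V_reset, w -> w + b_w
C, gL, E_L, DT, VT = 200.0, 10.0, -70.0, 2.0, -50.0    # pF, nS, mV, mV, mV
tau_w, a, b_w = 120.0, 2.0, 60.0                       # ms, nS, pA
V_reset, V_peak, dt, I = -58.0, 0.0, 0.05, 500.0       # mV, mV, ms, pA (step)

V, w, spike_times = E_L, 0.0, []
for step in range(int(500.0 / dt)):                    # simulate 500 ms
    dV = (-gL * (V - E_L) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_peak:     # spike: reset voltage, increment adaptation current
        V = V_reset
        w += b_w
        spike_times.append(step * dt)

isis = np.diff(spike_times)   # interspike intervals lengthen over the train
```

With these parameters the accumulating adaptation current slows the spike train from its initial rate toward a lower steady-state rate, the regular-spiking behavior discussed above.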