
Publication

Connection-type-specific biases make uniform random network models consistent with cortical recordings

Michael Avermann, Wulfram Gerstner, Carl Petersen, Christian Tomm, Tim Vogels

*American Physiological Society*, 2014

Journal paper

Abstract

Uniform random sparse network architectures are ubiquitous in computational neuroscience, but the implicit hypothesis that they are a good representation of real neuronal networks has been met with skepticism. Here we used two experimental data sets, a study of triplet connectivity statistics and a data set measuring neuronal responses to channelrhodopsin stimuli, to evaluate the fidelity of thousands of model networks. Network architectures comprised three neuron types (excitatory, fast spiking, and nonfast spiking inhibitory) and were created from a set of rules that govern the statistics of the resulting connection types. In a high-dimensional parameter scan, we varied the degree distributions (i.e., how many cells each neuron connects with) and the synaptic weight correlations of synapses from or onto the same neuron. These variations converted initially uniform random and homogeneously connected networks, in which every neuron sent and received equal numbers of synapses with equal synaptic strength distributions, to highly heterogeneous networks in which the number of synapses per neuron, as well as average synaptic strength of synapses from or to a neuron were variable. By evaluating the impact of each variable on the network structure and dynamics, and their similarity to the experimental data, we could falsify the uniform random sparse connectivity hypothesis for 7 of 36 connectivity parameters, but we also confirmed the hypothesis in 8 cases. Twenty-one parameters had no substantial impact on the results of the test protocols we used.
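As a concrete illustration of the uniform random sparse baseline the abstract describes, the sketch below builds such a network in NumPy. All population sizes, the connection probability, and the weight distribution are hypothetical placeholders chosen for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and parameters (illustrative, not the paper's values)
n_exc, n_fs, n_nfs = 80, 10, 10   # excitatory, fast-spiking, non-fast-spiking
n = n_exc + n_fs + n_nfs
p = 0.1                           # uniform connection probability
w_mean = 0.5                      # mean synaptic weight (arbitrary units)

# Uniform random sparse network: every possible connection exists with the
# same probability p, and weights are drawn i.i.d. for every existing synapse.
adj = rng.random((n, n)) < p
np.fill_diagonal(adj, False)      # no self-connections
weights = np.where(adj, rng.exponential(w_mean, size=(n, n)), 0.0)

# In such a network the out-degree is binomial(n - 1, p), so the number of
# synapses per neuron is narrowly concentrated around p * (n - 1) -- the
# homogeneity that the paper's parameter scan systematically relaxes.
out_degree = adj.sum(axis=1)
print(out_degree.mean(), out_degree.std())
```

Varying the degree distributions and weight correlations, as the study does, amounts to replacing the single probability `p` and the i.i.d. weight draw above with per-neuron rules.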



Related concepts (16)

Neuron

Within a nervous system, a neuron, neurone, or nerve cell is an electrically excitable cell that fires electric signals called action potentials across a neural network. Neurons communicate with other cells via synapses.

Synaptic plasticity

In neuroscience, synaptic plasticity is the ability of synapses to strengthen or weaken over time, in response to increases or decreases in their activity.

Computational neuroscience

Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematical models, computer simulations, and theoretical analysis to understand the principles that govern the nervous system.

Related publications (43)

Moritz Deger, Wulfram Gerstner, Hesam Setareh

Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both the wiring and the synaptic weight structure between neurons differ statistically from random networks. Here we study a network containing a subset of neurons, which we call weight-hub neurons, characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contains assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons, with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as is likely present in some but not all cortical layers, significantly changes the dynamics of a layer of cortical microcircuitry.
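The weight-hub idea in this abstract can be sketched by making the connection probability depend on the postsynaptic target. The parameters below are hypothetical placeholders, and the sketch only illustrates the elevated in-degree of hub neurons; it does not enforce the study's additional constraint that the overall pairwise statistics match experimental data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, illustrative parameters (not the study's fitted values)
n = 200           # excitatory neurons
n_hub = 20        # "weight-hub" neurons with strong inward synapses
p_base = 0.08     # connection probability onto ordinary neurons
p_hub = 0.4       # elevated probability onto/within the hub assembly

hub = np.zeros(n, dtype=bool)
hub[:n_hub] = True

# Connection probability depends on the postsynaptic target: wiring into
# the assembly is dense, wiring onto the rest of the network is sparse.
p_matrix = np.where(hub[np.newaxis, :], p_hub, p_base)
adj = rng.random((n, n)) < p_matrix
np.fill_diagonal(adj, False)

# Hub neurons accumulate many more inward synapses than ordinary neurons.
in_degree = adj.sum(axis=0)
print(in_degree[hub].mean(), in_degree[~hub].mean())
```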

The brain is a complex biological system composed of a multitude of microscopic processes, which together give rise to computational abilities observed in everyday behavior. Neuronal modeling, consisting of models of single neurons and neuronal networks at varying levels of biological detail, can bridge gaps that are currently hard to constrain in experiments and provide mechanistic explanations of how these computations might arise. In this thesis, I present two parallel lines of research on neuronal modeling, situated at varying levels of biological detail. First, I assess the provenance of voltage-gated ion channel models in an integrative meta-analysis that investigates a backlog of nearly 50 years of published research. To cope with the ever-increasing volume of research produced in the field of neuroscience, we need to develop methods for the systematic assessment and comparison of published work. As we demonstrate, neuronal models offer the intriguing possibility of performing automated quantitative analyses across studies, by standardized simulated experiments. We developed protocols for the quantitative comparison of voltage-gated ion channels, and applied them to a large body of published models, allowing us to assess the variety and temporal development of different models for the same ion channels over the time scale of years of research. Beyond a systematic classification of the existing body of research made available in an online platform, we show that our approach extends to large-scale comparisons of ion channel models to experimental data, thereby facilitating field-wide standardization of experimentally-constrained modeling. Second, I investigate neuronal models of working memory (WM). How can cortical networks bridge the short time scales of their microscopic components, which operate on the order of milliseconds, to the behaviorally relevant time scales of seconds observed in WM experiments?
I consider here a candidate model: continuous attractor networks. These can implement WM for a continuum of possible spatial locations over several seconds and have been proposed for the organization of prefrontal cortical networks. I first present a novel method for the efficient prediction of the network-wide steady states from the underlying microscopic network properties. The method can be applied to predict and tune the "bump" shapes of continuous attractors implemented in networks of spiking neuron models connected by nonlinear synapses, which we demonstrate for saturating synapses involving NMDA receptors. In a second part, I investigate the computational role of short-term synaptic plasticity as a synaptic nonlinearity. Continuous attractor models are sensitive to the inevitable variability of biological neurons: variable neuronal firing and heterogeneous networks decrease the time that memories are accurately retained, eventually leading to a loss of memory functionality on behaviorally relevant time scales. In theory and simulations, I show that short-term plasticity can control the time scale of memory retention, with facilitation and depression playing antagonistic roles in controlling the drift and diffusion of locations in memory. Finally, we place quantitative constraints on the combination of synaptic and network parameters under which continuous attractor networks can implement reliable WM in cortical settings.

Sean Lewis Hill, Michael Lee Hines, Eilif Benjamin Muller

The growing number of large-scale neuronal network models has created a need for standards and guidelines to ease model sharing and facilitate the replication of results across different simulators. To foster community efforts towards such standards, the International Neuroinformatics Coordinating Facility (INCF) has formed its Multiscale Modeling program, and has assembled a task force of simulator developers to propose a declarative computer language for descriptions of large-scale neuronal networks. The name of the proposed language is "Network Interchange for Neuroscience Modeling Language" (NineML) and its initial focus is restricted to point neuron models. The INCF Multiscale Modeling task force has identified the key concepts of network modeling to be 1) spiking neurons, 2) synapses, 3) populations of neurons, and 4) connectivity patterns across populations of neurons. Accordingly, the definition of NineML includes a set of mathematical abstractions to represent these concepts. NineML aims to provide tool support for explicit declarative definition of spiking neuronal network models both conceptually and mathematically in a simulator-independent manner. In addition, NineML is designed to be self-consistent and highly flexible, allowing addition of new models and mathematical descriptions without modification of the previous structure and organization of the language. To achieve these goals, the language is being iteratively designed using several representative models with various levels of complexity as test cases. The design of NineML is divided into two semantic layers: the Abstraction Layer, which consists of core mathematical concepts necessary to express neuronal and synaptic dynamics and network connectivity patterns, and the User Layer, which provides constructs to specify the instantiation of a network model in terms that are familiar to computational neuroscience modelers.
As part of the Abstraction Layer, NineML includes a flexible block diagram notation for describing spiking dynamics. The notation represents continuous and discrete variables, their evolution according to a set of rules such as a system of ordinary differential equations, and the conditions that induce a regime change, such as the transition from subthreshold mode to spiking and refractory modes. The User Layer provides syntax for specifying the structure of the elements of a spiking neuronal network. This includes parameters for each of the individual elements (cells, synapses, inputs) and the grouping of these entities into networks. In addition, the user layer defines the syntax for supplying parameter values to abstract connectivity patterns. The NineML specification is defined as an implementation-neutral object model representing all the concepts in the User and Abstraction Layers. Libraries for creating, manipulating, querying and serializing the NineML object model to a standard XML representation will be delivered for a variety of languages. The first priority of the task force is to deliver a publicly available Python implementation to support the wide range of simulators which provide a Python user interface (NEURON, NEST, Brian, MOOSE, GENESIS-3, PCSIM, PyNN, etc.). These libraries will allow simulator developers to quickly add support for NineML, and will thus catalyze the emergence of a broad software ecosystem supporting model definition interoperability around NineML.

2011
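The User Layer concepts described above (cell and synapse parameters, populations, and abstract connectivity patterns) can be sketched schematically in plain Python data structures. This is not actual NineML syntax; all names and parameters below are hypothetical, and the sketch only mirrors the kind of declarative, simulator-independent description the abstract outlines.

```python
# Schematic of User-Layer-style concepts (hypothetical, not NineML syntax):
# populations of parameterized cells, plus projections that pair an abstract
# connectivity pattern with synapse parameters.
network = {
    "populations": [
        {"name": "exc", "size": 80, "cell": {"model": "LIF", "tau_m": 20.0}},
        {"name": "inh", "size": 20, "cell": {"model": "LIF", "tau_m": 10.0}},
    ],
    "projections": [
        {"source": "exc", "target": "inh",
         "connectivity": {"pattern": "fixed_probability", "p": 0.1},
         "synapse": {"weight": 0.5, "delay": 1.5}},
    ],
}

def population_sizes(net):
    """Query helper: map each population name to its size."""
    return {p["name"]: p["size"] for p in net["populations"]}

print(population_sizes(network))  # {'exc': 80, 'inh': 20}
```

A declarative description like this is what allows the same model to be handed to different simulator backends, which is the interoperability goal the task force describes.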