
# Asynchronous Simulation of Neuronal Activity

Abstract

Simulations of the electrical activity of networks of morphologically-detailed neuron models allow for a better understanding of the brain. A short time to solution is critical for studying long biological processes such as synaptic plasticity and learning.

State-of-the-art approaches follow the Bulk Synchronous Parallel execution model based on the Message Passing Interface runtime system. The efficient simulation of networks of simple neuron models is a well-researched problem. However, the efficient large-scale simulation of detailed neuron models, including topological information and a wide range of complex biological mechanisms, remains an open problem, mostly due to the high complexity of the models, the high heterogeneity of modern compute architectures, and the limitations of existing runtime systems.

In this thesis, we explore novel methods for the asynchronous multi-scale simulation of detailed neuron models on distributed parallel compute networks.

In the first study, we introduce the concept of distributed asynchronous branch-parallelism of neurons. We present a method that extracts variable dependencies across the numerical resolution of neurons' topological trees and the underlying algebraic solver, and use it to decompose the neuron simulation problem into interconnected resolutions of neuron subtrees. Results demonstrate a significant reduction in runtime on distributed, heterogeneous, multi-core computing environments with Single Instruction Multiple Data (SIMD) capabilities.
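The branch-parallel decomposition rests on the tree structure of the underlying linear system: a Hines-ordered elimination visits children before parents, so disjoint subtrees can be triangulated concurrently. The sketch below shows the serial form of such a solver on a parent-indexed tree; the function name and data layout are ours, not the thesis implementation.

```python
# Minimal sketch of Hines-style elimination on a neuron tree (our notation,
# not the thesis' code). Nodes are numbered so every child has a larger index
# than its parent; iterating backwards therefore visits children first.
def hines_solve(parent, d, a, b):
    """Solve the symmetric tree-structured system for the voltages x.

    parent[i] -- parent index of node i (node 0 is the root)
    d[i]      -- diagonal coefficient of node i
    a[i]      -- off-diagonal coupling between node i and its parent
    b[i]      -- right-hand side
    """
    n = len(d)
    d, b = d[:], b[:]                     # work on copies
    # Triangulation: eliminate from the leaves toward the root. Each node only
    # modifies its own row and its parent's, so disjoint subtrees contend only
    # at junction nodes -- which is what permits solving subtrees concurrently.
    for i in range(n - 1, 0, -1):
        f = a[i] / d[i]
        d[parent[i]] -= f * a[i]
        b[parent[i]] -= f * b[i]
    # Back-substitution: from the root toward the leaves.
    x = [0.0] * n
    x[0] = b[0] / d[0]
    for i in range(1, n):
        x[i] = (b[i] - a[i] * x[parent[i]]) / d[i]
    return x
```

In a branch-parallel setting, the elimination loop over each subtree would run as an independent task, with only the junction rows requiring coordination.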

In the second study, we complement the previous effort with graph-parallelism retrieved from mathematical dependencies across the systems of Ordinary Differential Equations (ODEs) that model the activity of neurons. We describe a method for the automatic extraction of read-after-write variable dependencies, concurrent variable updates, and independent ODEs. The automation of these methods exposes a computation graph of interdependencies among biological mechanisms and an embarrassingly parallel SIMD execution of mechanism instances, accelerating the simulation.
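The read-after-write analysis can be pictured as building a dependency graph over mechanisms and levelling it into stages. A minimal sketch, assuming each mechanism declares the variables it reads and writes (the `schedule` helper and its inputs are illustrative, not the thesis API):

```python
# Hypothetical sketch: derive a parallel execution schedule from
# per-mechanism read/write variable sets.
def schedule(mechanisms):
    """mechanisms: {name: (reads, writes)} -> list of parallel stages.

    A read-after-write dependency arises whenever one mechanism reads a
    variable another mechanism writes; mechanisms in the same stage share no
    such dependency and can be updated concurrently (and SIMD-vectorised
    across their instances).
    """
    deps = {m: set() for m in mechanisms}
    for m1, (_, writes1) in mechanisms.items():
        for m2, (reads2, _) in mechanisms.items():
            if m1 != m2 and writes1 & reads2:
                deps[m2].add(m1)          # m2 must run after m1
    stages, done = [], set()
    while len(done) < len(mechanisms):
        ready = [m for m in mechanisms if m not in done and deps[m] <= done]
        if not ready:
            raise ValueError("cyclic variable dependency")
        stages.append(sorted(ready))
        done.update(ready)
    return stages
```

For instance, two ion channels that both read the membrane voltage but write distinct currents land in the same stage, while a mechanism summing those currents lands in a later one.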

The third part of this thesis pioneers the fully-asynchronous parallel execution model and the "exhaustive yet not speculative" stepping protocol. We apply it to our use case and demonstrate that removing the collective synchronization of neurons in time, utilising a memory-linear neuron representation, and advancing neurons based on their synaptic connectivity yield a substantial runtime speed-up through improved cache efficiency.
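A toy model of non-speculative asynchronous stepping: with strictly positive synaptic delays, the neuron holding the earliest local clock can always advance without risking a missed event, so no global barrier is needed. This is a simplification under our own assumptions (no actual spike exchange is modelled, and all names are ours):

```python
import heapq

# Sketch of "exhaustive yet not speculative" stepping: neurons advance
# independently, but never past the earliest time a presynaptic spike could
# still reach them (presynaptic clock + synaptic delay).
def run_async(neurons, pre, delay, dt, t_end):
    clock = {n: 0.0 for n in neurons}
    heap = [(0.0, n) for n in neurons]    # earliest-neuron-first scheduling
    heapq.heapify(heap)
    trace = []
    while heap:
        t, n = heapq.heappop(heap)
        if t >= t_end:
            continue                      # this neuron is done
        # Safe horizon: with positive delays, the globally earliest neuron's
        # horizon lies strictly ahead of it, so progress is always possible
        # and no neuron ever steps speculatively.
        horizon = min((clock[p] + delay[(p, n)] for p in pre[n]),
                      default=t_end)
        clock[n] = min(t + dt, horizon, t_end)
        trace.append((n, clock[n]))
        heapq.heappush(heap, (clock[n], n))
    return clock, trace
```

Because each neuron advances only within its proven-safe horizon, the protocol is exhaustive (every neuron steps as far as it can) without ever speculating past an event it might still receive.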

The fourth and last part advances the aforementioned fully-asynchronous execution model with variable-order, variable-timestep interpolation methods. We demonstrate that runtime and step count depend only weakly on the volatility of the solution, but strongly on synaptic events. We introduce a variable-step execution model that allows for non-speculative asynchronous stepping of neurons on a distributed compute network, and demonstrate higher numerical accuracy, fewer interpolation steps, and a shorter time to solution compared to state-of-the-art implicit fixed-step resolution.
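The interplay between step size and solution volatility can be illustrated with a toy error-controlled integrator. The thesis uses variable-order, variable-timestep methods; the step-doubling Euler scheme below is only a stand-in to show how steps grow on smooth stretches and shrink when the solution changes rapidly:

```python
# Toy error-controlled integrator (step doubling on explicit Euler) standing
# in for the variable-order, variable-step methods discussed above; names and
# tolerances are ours.
def adaptive_euler(f, y0, t0, t1, tol=1e-5, h0=1e-3):
    t, y, h = t0, y0, h0
    steps = 0
    while t < t1:
        h = min(h, t1 - t)
        y_full = y + h * f(t, y)                   # one full step
        y_mid = y + (h / 2) * f(t, y)              # two half steps
        y_half = y_mid + (h / 2) * f(t + h / 2, y_mid)
        if abs(y_half - y_full) <= tol:            # local error acceptable?
            t, y = t + h, 2 * y_half - y_full      # accept, extrapolate
            steps += 1
            h *= 1.5                               # smooth solution: grow step
        else:
            h *= 0.5                               # volatile solution: shrink
    return y, steps
```

A discontinuity such as an arriving synaptic event forces such an integrator to restart from a small step, which is why step count tracks the event rate more than the smooth dynamics in between.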

This thesis demonstrates that asynchronous runtime systems are a requirement for efficiently simulating complex neuronal activity at large scale. Most of the contributions presented follow from first principles in numerical methods and computer science, thus providing insights applicable to a wide range of scientific domains.



Related publications (119)


The cerebral cortex occupies nearly 80% of the entire volume of the mammalian brain and is thought to subserve higher cognitive functions like memory, attention and sensory perception. The neocortex is the newest part in the evolution of the cerebral cortex and is perhaps the most intricate brain region ever studied. The neocortical microcircuit is the smallest "ecosystem" of the neocortex that consists of a rich assortment of neurons, which are diverse in both their morphological and electrical properties. In the neocortical microcircuit, neurons are horizontally arranged in 6 distinct sheets called layers. The fundamental operating unit of the neocortical microcircuit is believed to be the Neocortical Column (NCC). Functionally, a single NCC is an arrangement of thousands of neurons in a vertical fashion spanning across all the 6 layers. The structure of the entire neocortex arises from a repeated and stereotypical arrangement of several thousands of such columns, where neurons transmit information to each other through specialized points of information transfer called synapses. The dynamics of synaptic transmission can be as diverse as the neurons defining a connection and are crucial to foster the functional properties of the neocortical microcircuit. The Blue Brain Project (BBP) is the first comprehensive endeavour to build a unifying model of the NCC by systematic data integration and biologically detailed simulations. Over the past 5 years, the BBP has built a facility for a data-constrained approach towards modelling and integrating biological information across multiple levels of complexity. Guided by fundamental principles derived from biological experiments, the BBP simulation toolchain has undergone a process of continuous refinement to facilitate the frequent construction of detailed in silico models of the NCC.
The focus of this thesis lies in characterizing the functional properties of in silico synaptic transmission by incorporating principles of synaptic communication derived through biological experiments. In order to study in silico synaptic transmission, it is crucial to gain an understanding of the key players influencing the manner in which synaptic signals are processed in the neocortical microcircuit: ion channel kinetics and distribution profiles, single neuron models and dynamics of synaptic pathways. First, by means of an exhaustive literature survey, I identified ion channel kinetics and their distribution profiles on neocortical neurons to build in silico ion channel models. Thereafter, I developed a prototype framework to analyze the somatic and dendritic features of single neuron models constrained by ion channel kinetics. Finally, within a simulation framework integrating the ion channels, single neuron models and dynamics of synaptic transmission, I replicated in vitro experimental protocols in silico, to characterize the transmission properties of monosynaptic connections. These synaptic connections, arising from the axo-dendritic apposition of neuronal arbours, were sampled across many instances of in silico NCC models constructed a priori through the BBP simulation toolchain. In this thesis, I show that when principles of synaptic transmission derived from in vitro experiments are incorporated to model in silico synaptic connections, the resulting anatomy and physiology of synaptic connections modelled from elementary biological rules closely match in vitro data. This thesis work demonstrates that the average synaptic response properties in silico are robust to perturbations in the anatomical and physiological properties of modelled connections in the local neocortical microcircuit.
A fundamental discovery through this thesis is an insight into the function of the local neocortical microcircuit by examining the effect of morphological diversity on in silico synaptic transmission. I demonstrate here that intrinsic morphological diversity confers an invariance to the average synaptic response properties in silico in the local neocortical microcircuit, termed "microcircuit level robustness and invariance".

Sean Lewis Hill, Michael Lee Hines, Eilif Benjamin Muller

The growing number of large-scale neuronal network models has created a need for standards and guidelines to ease model sharing and facilitate the replication of results across different simulators. To foster community efforts towards such standards, the International Neuroinformatics Coordinating Facility (INCF) has formed its Multiscale Modeling program, and has assembled a task force of simulator developers to propose a declarative computer language for descriptions of large-scale neuronal networks. The name of the proposed language is "Network Interchange for Neuroscience Modeling Language" (NineML) and its initial focus is restricted to point neuron models. The INCF Multiscale Modeling task force has identified the key concepts of network modeling to be: (1) spiking neurons, (2) synapses, (3) populations of neurons, and (4) connectivity patterns across populations of neurons. Accordingly, the definition of NineML includes a set of mathematical abstractions to represent these concepts. NineML aims to provide tool support for explicit declarative definition of spiking neuronal network models both conceptually and mathematically in a simulator-independent manner. In addition, NineML is designed to be self-consistent and highly flexible, allowing addition of new models and mathematical descriptions without modification of the previous structure and organization of the language. To achieve these goals, the language is being iteratively designed using several representative models with various levels of complexity as test cases. The design of NineML is divided into two semantic layers: the Abstraction Layer, which consists of core mathematical concepts necessary to express neuronal and synaptic dynamics and network connectivity patterns, and the User Layer, which provides constructs to specify the instantiation of a network model in terms that are familiar to computational neuroscience modelers.
As part of the Abstraction Layer, NineML includes a flexible block diagram notation for describing spiking dynamics. The notation represents continuous and discrete variables, their evolution according to a set of rules such as a system of ordinary differential equations, and the conditions that induce a regime change, such as the transition from subthreshold mode to spiking and refractory modes. The User Layer provides syntax for specifying the structure of the elements of a spiking neuronal network. This includes parameters for each of the individual elements (cells, synapses, inputs) and the grouping of these entities into networks. In addition, the user layer defines the syntax for supplying parameter values to abstract connectivity patterns. The NineML specification is defined as an implementation-neutral object model representing all the concepts in the User and Abstraction Layers. Libraries for creating, manipulating, querying and serializing the NineML object model to a standard XML representation will be delivered for a variety of languages. The first priority of the task force is to deliver a publicly available Python implementation to support the wide range of simulators which provide a Python user interface (NEURON, NEST, Brian, MOOSE, GENESIS-3, PCSIM, PyNN, etc.). These libraries will allow simulator developers to quickly add support for NineML, and will thus catalyze the emergence of a broad software ecosystem supporting model definition interoperability around NineML.
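The regimes-and-transitions idea can be made concrete with a toy leaky integrate-and-fire neuron that alternates between a subthreshold regime (governed by an ODE rule) and a refractory regime (entered when a threshold condition fires). This is plain Python in the spirit of the block diagram notation described above, not actual NineML syntax; all names and constants are ours:

```python
# Toy two-regime neuron: an ODE rule active in the subthreshold regime, a
# condition triggering a transition to a refractory regime, and a timed
# transition back. NOT NineML syntax -- an illustrative sketch only.
def lif_step(state, regime, t, dt, i_ext=0.0,
             v_th=-50.0, v_reset=-65.0, t_ref=2.0, tau=10.0, v_rest=-65.0):
    """Advance a leaky integrate-and-fire neuron by one step of size dt."""
    v, t_spike = state
    if regime == "subthreshold":
        v += dt * ((v_rest - v) / tau + i_ext)   # ODE rule for this regime
        if v >= v_th:                            # spike condition: transition
            return (v_reset, t), "refractory"
    elif regime == "refractory":                 # hold until t_ref has elapsed
        if t - t_spike >= t_ref:
            return (v, t_spike), "subthreshold"
    return (v, t_spike), regime
```

Driving this with a constant input current produces the expected cycle of subthreshold integration, spike, and refractory hold; a declarative language like NineML expresses the same regimes and transition conditions as data rather than code.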

2011

Simulations of electrical activity of networks of morphologically detailed neuron models allow for a better understanding of the brain. State-of-the-art simulations describe the dynamics of ionic currents and biochemical processes within branching topological representations of the neurons. Acceleration of such simulations is possible in the weak scaling limit by modeling neurons as indivisible computation units and increasing the computing power. Strong scaling and simulations close to biological time are difficult, yet required, for the study of synaptic plasticity and other use cases requiring simulation of neurons for long periods of time. Current methods rely on parallel Gaussian Elimination, computing triangulation and substitution of many branches simultaneously. Existing limitations are: (a) high heterogeneity of compute time per neuron leads to high computational load imbalance; and (b) difficulty in providing a computation model that fully utilizes the computing resources on distributed multi-core architectures with Single Instruction Multiple Data (SIMD) capabilities. To address these issues, we present a strategy that extracts flow-dependencies between parameters of the ODEs and the algebraic solver of individual neurons. Based on the resulting map of dependencies, we provide three techniques for memory, communication, and computation reorganization that yield a load-balanced distributed asynchronous execution. The new computation model distributes datasets and balances computational workload across a distributed memory space, exposing a tree-based parallelism of neuron topological structure, an embarrassingly parallel execution model of neuron subtrees, and a SIMD acceleration of subtree state updates. The capabilities of our methods are demonstrated on a prototype implementation developed on the core compute kernel of the NEURON scientific application, built on the HPX runtime system for the ParalleX execution model.
Our implementation yields an asynchronous, distributed, and parallel simulation that accelerates simulations ranging from a single neuron to medium-sized neural networks. Benchmark results display better strong scaling properties, finer-grained parallelism, and lower time to solution compared to the state of the art, on a wide range of distributed multi-core compute architectures.