
# Synaptic plasticity

Summary

In neuroscience, synaptic plasticity denotes the capacity of synapses to modulate, following a particular event (a punctual, significant increase or decrease in their activity), the efficacy of electrical signal transmission from one neuron to the next, and to retain, over a shorter or longer term, a "trace" of that modulation.
Schematically, the efficacy of synaptic transmission, and even the synapse itself, is maintained and modulated by the use made of it.
Synaptic plasticity is thought to be one of the neuronal mechanisms underlying memory and learning in organisms endowed with a nervous system.
It encompasses various molecular and cellular mechanisms associated with changes in neuronal physiology (long-term potentiation and depression) and/or in the organism's behaviour (habituation, sensitization, etc.).
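This use-dependent strengthening and decay can be illustrated with a toy Hebbian weight update (a deliberately minimal sketch, not a physiological model; the learning and decay rates are arbitrary placeholders):

```python
def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """One Hebbian step: coincident pre- and postsynaptic activity
    strengthens the synapse, while a slow decay term lets unused
    weights fade back down."""
    return w + lr * pre * post - decay * w

w = 0.5
for _ in range(100):                 # repeated co-activation potentiates...
    w = hebbian_update(w, pre=1.0, post=1.0)
w_potentiated = w
for _ in range(1000):                # ...and prolonged inactivity lets the
    w = hebbian_update(w, pre=0.0, post=0.0)   # "trace" slowly decay
w_decayed = w
```

The decay rate is much smaller than the learning rate, so the potentiated "trace" persists well after activity stops, mirroring the short- versus long-term character of the modulation.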

Official source

This page is automatically generated and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Be sure to check the information against official EPFL sources.

Related concepts (49)

Long-term potentiation

Long-term potentiation (LTP) is a persistent increase in synaptic strength following high-frequency stimulation of a chemical synapse.

NMDA receptor

Schematic descriptions show an activated NMDA receptor with glutamate and glycine occupying their binding sites; if occupied, the allosteric site would cause inactivation of the receptor.

Hippocampus (brain)

The hippocampus is a structure of the mammalian telencephalon, located deep within the brain; it is notably part of the limbic system.


Related courses (23)

BIOENG-450: In silico neuroscience

"In silico Neuroscience" introduces students to a synthesis of modern neuroscience and state-of-the-art data management, modelling and computing technologies.

BIO-480: Neuroscience: from molecular mechanisms to disease

The goal of the course is to guide students through the essential aspects of molecular neuroscience and neurodegenerative diseases. The student will gain the ability to dissect the molecular basis of disease in the nervous system in order to begin to understand and identify therapeutic strategies.

BIO-482: Neuroscience: cellular and circuit mechanisms

This course focuses on the cellular mechanisms of mammalian brain function. We will describe how neurons communicate through synaptic transmission in order to process sensory information ultimately leading to motor behavior.

Related lectures (44)

Our brain continuously self-organizes to construct and maintain an internal representation of the world based on the information arriving through sensory stimuli. Remarkably, cortical areas related to different sensory modalities appear to share the same functional unit, the neuron, and to develop through the same learning mechanism, synaptic plasticity. This motivates the conjecture of a unifying theory to explain cortical representational learning across sensory modalities. In this thesis we present theories and computational models of learning and optimization in neural networks, postulating functional properties of synaptic plasticity that support the apparent universal learning capacity of cortical networks.

In the past decades, a variety of theories and models have been proposed to describe receptive field formation in sensory areas, including normative models such as sparse coding and bottom-up models such as spike-timing-dependent plasticity. We bring these candidate explanations together by demonstrating that a single principle is sufficient to explain receptive field development. First, we show that many representative models of sensory development in fact implement variations of a common principle: nonlinear Hebbian learning. Second, we show that nonlinear Hebbian learning is sufficient for receptive field formation from sensory inputs. Surprisingly, our findings are independent of specific model details and allow for robust predictions of the learned receptive fields. Nonlinear Hebbian learning and natural stimulus statistics can thus account for many aspects of receptive field formation across models and sensory modalities.

The Hebbian learning theory substantiates that synaptic plasticity can be interpreted as an optimization procedure implementing stochastic gradient descent, in which inputs arrive sequentially, as in sensory streams. However, individual data samples carry very little information about the correct learning signal, and it becomes a fundamental problem to know how many samples are required for reliable synaptic changes. Through estimation theory, we develop a novel adaptive learning rate model that adapts the magnitude of synaptic changes based on the statistics of the learning signal, enabling an optimal use of data samples. Our model has a simple implementation and demonstrates improved learning speed, making it a promising candidate for large artificial neural network applications. The model also makes predictions on how cortical plasticity may modulate synaptic plasticity for optimal learning.

The optimal sampling size for reliable learning allows us to estimate optimal learning times for a given model. We apply this theory to derive analytical bounds on the time required to optimize synaptic connections. First, we show that this optimization problem has exponentially many saddle points, which lead to small gradients and slow learning. Second, we show that the number of input synapses to a neuron modulates the magnitude of the initial gradient, determining the duration of learning. Our final result reveals that the learning duration increases supra-linearly with the number of synapses, suggesting an effective limit on synaptic connections and receptive field sizes in developing neural networks.
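The idea of scaling synaptic changes by the reliability of the learning signal can be sketched as follows (a hypothetical illustration of the general principle, not the model developed in the thesis: the step size is modulated by a running signal-to-noise estimate of the gradient):

```python
import random

def adaptive_sgd(grad_fn, w, n_steps=500, base_lr=0.1, beta=0.9):
    """Scale each step by a running signal-to-noise estimate of the
    gradient, so that noisy learning signals yield smaller changes."""
    m, v = 0.0, 1.0                    # running mean / second moment of g
    for _ in range(n_steps):
        g = grad_fn(w)
        m = beta * m + (1 - beta) * g
        v = beta * v + (1 - beta) * g * g
        snr = m * m / (v + 1e-12)      # in [0, 1]: reliability of the signal
        w -= base_lr * snr * g
    return w

random.seed(0)
# Noisy quadratic objective: the true gradient is (w - 3), observed with noise.
noisy_grad = lambda w: (w - 3.0) + random.gauss(0.0, 1.0)
w_final = adaptive_sgd(noisy_grad, w=0.0)
```

Far from the optimum the gradient mean dominates its variance, so steps are large; near the optimum the samples are mostly noise, the signal-to-noise ratio collapses, and the weight changes shrink automatically.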

Bruno Ricardo Da Cunha Magalhães

Simulations of the electrical activity of networks of morphologically-detailed neuron models allow for a better understanding of the brain.
Short time to solution is critical in order to study long biological processes such as synaptic plasticity and learning.
State-of-the-art approaches follow the Bulk Synchronous Parallel execution model based on the Message Passing Interface runtime system.
The efficient simulation of networks of simple neuron models is a well researched problem. However, the efficient large-scale simulation of detailed neuron models --- including topological information and a wide range of complex biological mechanisms --- remains an open problem, mostly due to the high complexity of the model, the high heterogeneity of specifications in modern compute architectures, and the limitations of existing runtime systems.
In this thesis, we explore novel methods for the asynchronous multi-scale simulation of detailed neuron models on distributed parallel compute networks.
In the first study, we introduce the concept of distributed asynchronous branch-parallelism of neurons. We present a method that extracts variable dependencies across the numerical resolution of neurons' topological trees and the underlying algebraic solver. We demonstrate a method for the decomposition of the neuron simulation problem into interconnected resolutions of neuron subtrees. Results demonstrate a significant reduction in runtime on heterogeneous distributed, multi-core, Single Instruction Multiple Data (SIMD) computing environments.
In the second study, we complement the previous effort with graph-parallelism retrieved from mathematical dependencies across the systems of Ordinary Differential Equations (ODEs) that model the activity of neurons. We describe a method for the automatic extraction of read-after-write variable dependencies, concurrent variable updates, and independent ODEs. The automation of these methods exposes a computation graph of interdependencies among biological mechanisms and an embarrassingly parallel SIMD execution of mechanism instances, leading to an acceleration of the simulation.
The third part of this thesis pioneers the fully-asynchronous parallel execution model, and the "exhaustive yet not speculative" stepping protocol. We apply it to our use case, and demonstrate that by removing collective synchronization of neurons in time, by utilising a memory-linear neuron representation, and by advancing neurons based on their synaptic connectivity, a substantial runtime speed-up is achievable through cache efficiency.
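The connectivity-based stepping constraint can be sketched roughly as follows (a hypothetical illustration of the general idea, not the protocol's actual implementation: without speculation, a neuron may advance only up to the earliest time at which a spike from a presynaptic neuron could still reach it):

```python
def step_limit(progress, delays, n):
    """Latest time neuron n may safely reach without speculation: the
    earliest time an event from any presynaptic neuron could still
    arrive, i.e. min over presynaptic j of (j's time + axonal delay)."""
    deps = delays.get(n, [])
    return min(progress[j] + d for j, d in deps) if deps else float("inf")

# Toy example: neuron 2 receives input from neuron 0 (1.0 ms delay)
# and neuron 1 (2.5 ms delay).
progress = {0: 5.0, 1: 4.0}           # simulated time reached by each neuron
delays = {2: [(0, 1.0), (1, 2.5)]}    # postsynaptic id -> (presyn id, delay)
limit = step_limit(progress, delays, 2)   # min(5.0 + 1.0, 4.0 + 2.5)
```

Because axonal delays are strictly positive, every neuron can always advance at least a little beyond its presynaptic neurons' current times, so no global barrier is needed.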
The fourth and last part advances the aforementioned fully-asynchronous execution model with variable-order variable-timestep interpolation methods. We demonstrate a low dependency of runtime and step count on the volatility of the solution, and a high dependency on synaptic events. We introduce a variable-step execution model that allows for non-speculative asynchronous stepping of neurons on a distributed compute network. We demonstrate higher numerical accuracy, fewer interpolation steps, and shorter time to solution, compared to state-of-the-art implicit fixed-step resolution.
This thesis demonstrates that utilising asynchronous runtime systems is a requirement to succeed in efficiently simulating complex neuronal activity at large scale. Most contributions presented follow from first principles in numerical methods and computer science, thus providing insights for applications on a wide range of scientific domains.

Understanding how learning and memory formation work in the brain is a major challenge in neuroscience, with important implications for many other fields, including medicine and industry. It is nowadays widely accepted that synaptic plasticity is the biological foundation of these higher order brain functions. So far, many different plastic behaviors have been intensively studied and characterized, leading to the definition of several forms of plasticity: structural, functional, homeostatic, inhibitory, and many others. Unfortunately, despite all the interest and efforts of the scientific community, a complete and consistent understanding of synaptic plasticity is still lacking.
The main goal of this study is to unify data and theories on synaptic plasticity in a comprehensive model, suitable for studying learning and memory down to the synapse level. To reach our objective, we identified a minimal set of biological mechanisms responsible for plastic dynamics and integrated them into a single synapse model, relying whenever possible on well accepted sub-models from literature. We designed a data-driven fitting and generalization strategy to parameterize all excitatory-to-excitatory synapses in a large scale reconstruction of neocortical tissue [Markram et al., 2015]. Finally, we tested the effects of functional synaptic plasticity on neural circuits in simulations.
Our model was able to capture not only the outcome of the Spike Timing Dependent Plasticity (STDP) protocols used in the training phase, but also those of all other protocols available in the same experimental dataset [Markram et al., 1997b]. Moreover, it correctly reproduced results on distance-dependent synaptic plasticity [Sjöström and Häusser, 2006], even though this dataset was never used during fitting. In network simulations, we observed the emergence of a self-regulatory homeostatic mechanism, preventing runaway excitation. Furthermore, we noticed the strengthening of connections between neurons that are similarly innervated, as previously shown in vitro [Perin et al., 2011].
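The timing dependence that such STDP protocols probe can be sketched with the classic pair-based rule (an illustrative textbook rule with placeholder amplitudes and time constant, not the synapse model developed in this study):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for dt_ms = t_post - t_pre.
    Pre-before-post pairings (dt > 0) potentiate; post-before-pre
    pairings (dt < 0) depress, both decaying exponentially with |dt|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

ltp = stdp_dw(+10.0)   # causal pairing -> potentiation (positive change)
ltd = stdp_dw(-10.0)   # anti-causal pairing -> depression (negative change)
```

Tightly timed pairings produce larger changes than loosely timed ones, which is why fitting such a rule requires protocols that systematically vary the pre/post spike timing.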