In the present work, we address two key aspects of memory formation: associative memory and synaptic consolidation. The storage of associative memories is commonly attributed to the medial temporal lobe in humans. Experimental evidence shows that memories of objects, people or places are represented in this brain area by cell assemblies that respond selectively to single concepts; neurons forming such assemblies are called concept cells. Associations between different concepts are linked to concept cells shared between assemblies: we refer to the number of shared neurons as the overlap between memory engrams. The assemblies of two associated concepts (e.g. Hillary and Bill Clinton) share more neurons than the assemblies of two unrelated concepts (e.g. Hillary Clinton and the Eiffel Tower). Three characteristics of assemblies of concept cells are particularly important for this work: (a) they exhibit a very low mean activity (about 0.2% of neurons respond to each concept), (b) overlapping assemblies share about 4% of their cells, and (c) non-overlapping assemblies share less than 1% of their cells. This implies that the association between two concepts induces a higher level of overlap between the corresponding memory engrams. In parallel, theoretical studies have shown that overlaps between memory engrams are fundamental in the process of free recall of sequences of words. These models, however, assume that memory engrams have such a high mean activity that all assemblies overlap to some extent.

Associative memory is traditionally modeled with attractor neural networks, in which memory engrams are represented by binary patterns of active and silent neurons. While there is extensive literature on independent low-activity patterns, only a few studies address correlated patterns. Extending the existing theory to include correlations is a key missing step towards answering questions such as: How do shared neurons encode associations? Why are 4% of neurons shared and not more? Using a mean-field approximation, we derive analytic equations for the network dynamics in the case of correlated patterns. Our results provide a theoretical framework that can explain the experimentally observed fraction of shared neurons. We find that, for concepts represented by realistically sparse neural assemblies, there is a minimal and a maximal fraction of shared neurons for which associations can be reliably encoded. In the presence of a periodically modulated signal, such as hippocampal oscillations, chains of associations can be recalled, in analogy to theories of free recall of lists of memorized words. Finally, we compare the predicted number of concepts a neuron responds to with experimental data. We test different ways of constructing correlated patterns and confirm the common view that information in the hippocampus is non-hierarchically organised.

In the second part of the thesis, we propose a model of synaptic consolidation based on two coupled dynamical variables: the fast synaptic weights and a slow internal synaptic mechanism. In classical experiments, the consolidation of a synapse depends on the stimulation frequency and on the number of repetitions. We show that it is precisely the time-scale separation between the dynamics of the two variables that determines which combinations of stimulation amplitude and frequency are suitable to elicit synaptic consolidation.
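To make the numbers from the first part concrete, the sketch below builds two sparse binary engrams with the reported statistics (about 0.2% of neurons active per concept, about 4% of an assembly shared between associated concepts). The function name, the network size and the specific way of drawing the shared cells are illustrative assumptions, not the construction analysed in the thesis.

```python
import numpy as np

def make_correlated_patterns(n_neurons=100_000, sparsity=0.002,
                             shared_fraction=0.04, seed=None):
    """Illustrative construction (not the thesis model): two sparse binary
    engrams that share a controlled fraction of their active neurons."""
    rng = np.random.default_rng(seed)
    assembly_size = int(round(sparsity * n_neurons))        # ~0.2% of neurons per concept
    n_shared = int(round(shared_fraction * assembly_size))  # ~4% of each assembly is shared

    neurons = rng.permutation(n_neurons)
    shared = neurons[:n_shared]                              # concept cells coding the association
    only_a = neurons[n_shared:assembly_size]
    only_b = neurons[assembly_size:2 * assembly_size - n_shared]

    pattern_a = np.zeros(n_neurons, dtype=np.uint8)
    pattern_b = np.zeros(n_neurons, dtype=np.uint8)
    pattern_a[np.concatenate([shared, only_a])] = 1
    pattern_b[np.concatenate([shared, only_b])] = 1
    return pattern_a, pattern_b

a, b = make_correlated_patterns(seed=0)
print("active neurons per pattern:", a.sum(), b.sum())
print("shared active neurons:", int((a & b).sum()))
```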
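For the second part, the following minimal sketch shows how a time-scale separation between a fast synaptic weight and a slow internal variable can make consolidation depend on the stimulation frequency even when the number of repetitions is fixed. The equations, the threshold and all parameter values are assumptions chosen for illustration; they are not the model equations of the thesis.

```python
import numpy as np

# Illustrative two-variable dynamics (not the thesis equations): each pulse
# transiently increases a fast weight w, which decays back toward a slow
# consolidation variable z; z integrates only the part of w above a threshold,
# on a much slower time scale.
def simulate(n_pulses, interval, amp=1.0, tau_w=5.0, tau_z=100.0,
             theta=0.5, dt=0.1, t_max=2000.0):
    steps = int(t_max / dt)
    w, z = np.zeros(steps), np.zeros(steps)
    stim = np.zeros(steps)
    for i in range(n_pulses):                      # same number of repetitions,
        stim[int(i * interval / dt)] = amp / dt    # different stimulation frequency
    for k in range(1, steps):
        dw = (-(w[k-1] - z[k-1]) + stim[k-1]) / tau_w
        dz = max(w[k-1] - theta, 0.0) / tau_z      # slow variable moves only while
        w[k] = w[k-1] + dt * dw                    # the fast weight is high enough
        z[k] = z[k-1] + dt * dz
    return w, z

# Twenty closely spaced pulses let the fast weight summate above threshold and
# drive the slow variable; the same twenty pulses spread far apart do not.
_, z_high = simulate(n_pulses=20, interval=2.0)
_, z_low = simulate(n_pulses=20, interval=50.0)
print("slow variable after high- vs low-frequency protocol:",
      round(z_high[-1], 3), round(z_low[-1], 3))
```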