Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, we develop a modelling approach that provides a mechanistic hypothesis of how hippocampal neural assemblies evolve differently depending on the frequency of presentation of the stimuli. To this end, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently of each other, i.e. they create orthogonal representations that do not share neurons and thus avoid interference. Importantly, connections between neurons of assemblies that are no longer stimulated become labile, so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.

Experimental evidence suggests that familiar items are represented by larger hippocampal neuronal assemblies than less familiar ones. In line with this finding, our computational model shows that the size of a memory assembly depends on the frequency of its recall (the higher the frequency, the larger the assembly), which can be explained by the interplay of online learning and background firing activity. Furthermore, we find that assemblies representing uncorrelated memories increase in size while remaining orthogonal, in line with findings from single-cell recordings. To model these empirical findings, we propose to go beyond standard attractor network memory models and instead use a dynamic model to study memory coding.
Wulfram Gerstner, Johanni Michael Brea
Henry Markram, Rodrigo de Campos Perin
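To make the ingredients named above more concrete, the following is a minimal Python/NumPy sketch of a rate network that combines an online Hebbian rule, background firing, neural adaptation and heterosynaptic depression. All equations, parameter values, the stimulation schedule and the assembly read-out are illustrative assumptions, not the model or parameters used in the study, and whether the frequently presented pattern ends up with the larger assembly depends on these choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- network setup (all values are illustrative assumptions) ---
N = 200              # number of rate units
dt = 1.0             # integration step (a.u.)
tau_r = 10.0         # rate time constant
tau_a = 100.0        # adaptation time constant
eta = 5e-4           # Hebbian learning rate
w_max = 1.0 / N      # hard upper bound on individual weights
beta = 0.6           # strength of heterosynaptic depression
r_bg = 0.05          # background firing level (spontaneous activity)

W = np.zeros((N, N))          # recurrent weights (row = postsynaptic unit)
r = np.full(N, r_bg)          # firing rates
a = np.zeros(N)               # adaptation variables

# two initially disjoint binary memory patterns (assemblies of 20 units each)
patterns = [np.zeros(N), np.zeros(N)]
patterns[0][:20] = 1.0
patterns[1][20:40] = 1.0

def step(ext):
    """One Euler step of rates, adaptation and online plasticity."""
    global r, a, W
    # rate dynamics: recurrent drive + external input + background noise - adaptation
    drive = W @ r + ext + r_bg + 0.02 * rng.standard_normal(N) - a
    r += dt / tau_r * (-r + np.clip(drive, 0.0, 1.0))
    # neural adaptation tracks the rate and counteracts sustained firing
    a += dt / tau_a * (-a + 0.5 * r)
    # online Hebbian learning plus heterosynaptic depression:
    # potentiate co-active pairs, depress other synapses onto active postsynaptic units
    hebb = np.outer(r, r)
    hetero = beta * np.outer(r, np.ones(N)) * W
    W += eta * (hebb - hetero)
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, w_max)

# present pattern 0 frequently and pattern 1 rarely, with background-only gaps
schedule = [0, None, 0, None, 0, 1, 0, None, 0, None] * 20
for which in schedule:
    ext = 0.5 * patterns[which] if which is not None else np.zeros(N)
    for _ in range(50):
        step(ext)

# crude read-out of assembly size: units strongly coupled to each pattern's core
for k, p in enumerate(patterns):
    core = p > 0
    coupling = W[:, core].mean(axis=1)
    size = int((coupling > 0.3 * w_max).sum())
    print(f"pattern {k}: ~{size} units coupled to its core assembly")
```

In this toy setting, background firing lets units outside a stimulated core become weakly co-active with it and be recruited by the Hebbian rule, while heterosynaptic depression erodes the weights of assemblies that are no longer driven; the read-out at the end is only a rough proxy for assembly size.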