Changes in synaptic connections between neurons are thought to be the physiological basis of learning. These changes can be gated by neuromodulators that encode the presence of reward. We study a family of reward-modulated synaptic learning rules for spiking neurons on a learning task in continuous space inspired by the Morris water maze. The synaptic update rule modifies the release probability of synaptic transmission and depends on the timing of presynaptic spike arrival and postsynaptic action potentials, as well as on the membrane potential of the postsynaptic neuron. The family of learning rules includes an optimal rule derived from policy gradient methods as well as reward-modulated Hebbian learning. The synaptic update rule is implemented in a population of spiking neurons using a network architecture that combines feedforward input with lateral connections. Actions are represented by a population of hypothetical action cells with strong Mexican-hat connectivity and are read out at theta frequency. We show that in this architecture, a standard policy gradient rule fails to solve the Morris water-maze task, whereas a variant with a Hebbian bias can learn the task within 20 trials, consistent with experiments. This result does not depend on implementation details such as the size of the neuronal populations. Our theoretical approach shows how learning new behaviors can be linked to reward-modulated plasticity at the level of single synapses and makes predictions about the voltage and spike-timing dependence of synaptic plasticity and the influence of neuromodulators such as dopamine. It is an important step towards connecting formal theories of reinforcement learning with neuronal and synaptic properties.
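The abstract describes the reward-modulated rule only qualitatively. As a rough illustration of the reward-gated, eligibility-trace structure such rules typically have, the following Python sketch shows a Hebbian-biased, reward-modulated weight update; all names, constants, and the exact trace dynamics are assumptions introduced here, not taken from the paper, and the membrane-potential dependence and policy-gradient variant are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n_pre place-cell-like inputs driving n_post action cells.
n_pre, n_post = 100, 40
w = rng.uniform(0.1, 0.3, size=(n_post, n_pre))  # stands in for release probabilities
eligibility = np.zeros_like(w)

dt = 1e-3      # simulation time step (s), illustrative value
tau_e = 0.5    # eligibility-trace time constant (s), illustrative value
eta = 0.05     # learning rate, illustrative value


def step(pre_spikes, post_spikes, reward):
    """One time step of a reward-gated, Hebbian-biased weight update.

    pre_spikes  -- (n_pre,)  0/1 vector of presynaptic spikes in this bin
    post_spikes -- (n_post,) 0/1 vector of postsynaptic spikes in this bin
    reward      -- scalar neuromodulatory signal (e.g. nonzero only at the goal)
    """
    global w, eligibility
    # Coincidence of pre- and postsynaptic spikes tags the synapse (Hebbian term) ...
    hebb = np.outer(post_spikes, pre_spikes)
    # ... and the tag is low-pass filtered into an eligibility trace.
    eligibility += dt * (-eligibility / tau_e) + hebb
    # The reward signal gates the conversion of the trace into a weight change.
    w += eta * reward * eligibility
    np.clip(w, 0.0, 1.0, out=w)  # keep the surrogate release probabilities bounded
```

In the water-maze setting described above, reward would arrive only when the simulated animal reaches the platform, so most of the learning signal is carried by the decaying eligibility trace accumulated along the trajectory.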
Wulfram Gerstner, Jean-Pascal Théodor Pfister, Taro Toyoizumi
A correlation-based ("Hebbian") learning rule at the spike level is formulated, mathematically analyzed, and compared with learning in a firing-rate description. As for spike coding, we take advantage of a "learning window" that describes the effect of the timing of pre- and postsynaptic spikes on synaptic weights. A differential equation for the learning dynamics is derived under the assumption that the time scales of learning and spiking dynamics can be separated. Formation of structured synapses is analyzed for a Poissonian neuron model which receives time-dependent stochastic input. It is shown that correlations between input and output spikes tend to stabilize structure formation. With an appropriate choice of parameters, learning leads to an intrinsic normalization of the average weight and of the output firing rates. Noise generates diffusion-like spreading of synaptic weights.
Johanni Michael Brea, Walter Senn
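The "differential equation for the learning dynamics" mentioned in the abstract above is not written out there; under the stated separation-of-timescales assumption, such window-based rules are commonly averaged into a form like the following schematic (the symbols a_0, a_1, W, C_i and the drift terms are notation introduced here for illustration, not taken from the source):

```latex
% Schematic averaged weight dynamics under separation of time scales.
\begin{equation}
  \frac{\mathrm{d}\langle w_i\rangle}{\mathrm{d}t}
  \;=\;
  a_0
  \;+\; a_1^{\mathrm{pre}}\,\nu_i^{\mathrm{in}}
  \;+\; a_1^{\mathrm{post}}\,\nu^{\mathrm{out}}
  \;+\; \int_{-\infty}^{\infty} W(s)\, C_i(s)\,\mathrm{d}s,
  \qquad
  C_i(s) \;=\; \big\langle S_i^{\mathrm{in}}(t+s)\, S^{\mathrm{out}}(t) \big\rangle_t .
\end{equation}
```

Here W(s) is the learning window and C_i(s) the correlation between the input spike train S_i^in and the output spike train S^out; this correlation term is what the abstract refers to when it states that correlations between input and output spikes tend to stabilize structure formation.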