This article analyzes the simple Rescorla-Wagner learning rule from the vantage point of least squares learning theory. In particular, it suggests how measures of risk, such as prediction risk, can be used to adjust the learning constant in reinforcement learning. It argues that prediction risk is most effectively incorporated by scaling the prediction errors. Under this scheme, the learning rate needs adjusting only when the covariance between optimal predictions and past (scaled) prediction errors changes. Evidence is discussed suggesting that the dopaminergic system in the (human and nonhuman) primate brain encodes prediction risk, and that prediction errors are indeed scaled with prediction risk (adaptive encoding). © 2007 New York Academy of Sciences.
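A minimal sketch of the idea described above, assuming a standard Rescorla-Wagner update in which the prediction error is divided by a running estimate of prediction risk (here, an exponentially weighted variance of past prediction errors). The function name, parameters (`alpha`, `eps`), and the particular variance estimator are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rescorla_wagner_risk_scaled(rewards, alpha=0.1, eps=1e-8):
    """Rescorla-Wagner-style learning with risk-scaled prediction errors.

    The value estimate V is updated with the prediction error divided by the
    square root of a running estimate of prediction risk, so the effective
    learning rate adapts to reward variability (adaptive encoding).
    """
    V = 0.0      # current reward prediction
    risk = 1.0   # running estimate of prediction risk (variance of prediction errors)
    history = []
    for r in rewards:
        delta = r - V                                # prediction error
        scaled_delta = delta / np.sqrt(risk + eps)   # risk-scaled (adaptively encoded) error
        V += alpha * scaled_delta                    # Rescorla-Wagner-style value update
        risk += alpha * (delta**2 - risk)            # update prediction-risk estimate
        history.append((V, delta, risk))
    return V, history

# Usage example: reward variability increases halfway through the sequence,
# so the scaled errors keep the effective learning rate roughly stable.
rng = np.random.default_rng(0)
rewards = np.concatenate([rng.normal(1.0, 0.1, 200), rng.normal(1.0, 1.0, 200)])
V, history = rescorla_wagner_risk_scaled(rewards)
print(f"final prediction: {V:.3f}")
```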