Decoding the formation of reward predictions across learning.
Thorsten Kahnt, Jakob Heinzle, Soyoung Q Park, and John-Dylan Haynes
Journal of Neuroscience
2011, vol. 31(41), pp. 14624-14630
The predicted reward of different behavioral options plays an important role in guiding decisions. Previous research has identified reward predictions in prefrontal and striatal brain regions. Moreover, it has been shown that the neural representation of a predicted reward is similar to the neural representation of the actual reward outcome. However, it has remained unknown how these representations emerge over the course of learning and how they relate to decision making. Here, we sought to investigate the learning of predicted reward representations using functional magnetic resonance imaging and multivariate pattern classification. Using a Pavlovian conditioning procedure, human subjects learned multiple novel cue-outcome associations in each scanning run. We demonstrate that, across learning, activity patterns in the orbitofrontal cortex, the dorsolateral prefrontal cortex (DLPFC), and the dorsal striatum coding the value of predicted rewards become similar to the patterns coding the value of actual reward outcomes. Furthermore, we provide evidence that predicted reward representations in the striatum precede those in prefrontal regions and that representations in the DLPFC are linked to subsequent value-based choices. Our results show that different brain regions represent outcome predictions by eliciting the neural representation of the actual outcome. Furthermore, they suggest that reward predictions in the DLPFC are directly related to value-based choices.
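The core analytical idea, training a classifier on brain activity evoked by actual reward outcomes and testing it on activity evoked by reward-predicting cues, can be illustrated with a minimal cross-decoding sketch. The code below is not the authors' pipeline; it uses synthetic "voxel" data and a standard linear classifier (scikit-learn's LogisticRegression) purely to show the logic: if cue-evoked patterns come to resemble outcome-evoked patterns, an outcome-trained classifier will decode predicted value from cue trials above chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 50

# Hypothetical value-specific voxel patterns for high vs. low reward
pattern_high = rng.normal(size=n_voxels)
pattern_low = rng.normal(size=n_voxels)

def simulate_trials(pattern, n_trials=40, noise=1.0):
    """Noisy single-trial activity patterns around a template pattern."""
    return pattern + rng.normal(scale=noise, size=(n_trials, n_voxels))

# "Outcome" trials: activity evoked by actually received rewards
X_outcome = np.vstack([simulate_trials(pattern_high),
                       simulate_trials(pattern_low)])
y_outcome = np.array([1] * 40 + [0] * 40)

# "Prediction" trials: cue-evoked activity late in learning, assumed here
# to have converged toward the corresponding outcome patterns
X_cue = np.vstack([simulate_trials(pattern_high, n_trials=20),
                   simulate_trials(pattern_low, n_trials=20)])
y_cue = np.array([1] * 20 + [0] * 20)

# Train on outcomes, test on cue-evoked predictions (cross-decoding)
clf = LogisticRegression(max_iter=1000).fit(X_outcome, y_outcome)
accuracy = clf.score(X_cue, y_cue)
print(f"cross-decoding accuracy: {accuracy:.2f}")
```

Under this construction, accuracy is well above the 50% chance level because the cue-evoked patterns share the outcome templates; early in learning, before the representations converge, cross-decoding would be expected to sit near chance.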