Prefrontal Neurons Represent Winning and Losing during Competitive Video Shooting Games between Monkeys
Takayuki Hosokawa and Masataka Watanabe
J. Neurosci. 2012;32:7662–7671
Humans and animals must work to support their survival and reproductive needs. Because resources are limited in the natural environment, competition is inevitable, and competing successfully is vitally important. However, the neuronal mechanisms of competitive behavior are poorly studied. We examined whether neurons in the lateral prefrontal cortex (LPFC) showed response sensitivity related to a competitive game. In this study, monkeys played a video shooting game, either competing with another monkey or with the computer, or playing alone without a rival. Monkeys performed more quickly and more accurately in the competitive than in the noncompetitive games, indicating that they were more motivated in the competitive games. LPFC neurons showed differential activity between the competitive and noncompetitive games, as well as winning- and losing-related activity. Furthermore, the activity of prefrontal neurons differed depending on whether the competition was between monkeys or between the monkey and the computer. These results indicate that LPFC neurons may play an important role in monitoring the outcome of competition and enabling animals to adapt their behavior to increase their chances of obtaining a reward in a socially interactive environment.
Jan R. Wessel, Claudia Danielmeier, J. Bruce Morton, and Markus Ullsperger
J. Neurosci. 2012;32:7528–7537
According to recent accounts, the processing of errors and generally infrequent, surprising (novel) events share a common neuroanatomical substrate. Direct empirical evidence for this common processing network in humans is, however, scarce. To test this hypothesis, we administered a hybrid error-monitoring/novelty-oddball task in which the frequency of novel, surprising trials was dynamically matched to the frequency of errors. Using scalp electroencephalographic recordings and event-related functional magnetic resonance imaging (fMRI), we compared neural responses to errors with neural responses to novel events. In Experiment 1, independent component analysis of scalp ERP data revealed a common neural generator implicated in the generation of both the error-related negativity (ERN) and the novelty-related frontocentral N2. In Experiment 2, this pattern was confirmed by a conjunction analysis of event-related fMRI, which showed significantly elevated BOLD activity following both types of trials in the posterior medial frontal cortex, including the anterior midcingulate cortex (aMCC), the neuronal generator of the ERN. Together, these findings provide direct evidence of a common neural system underlying the processing of errors and novel events. This appears to be at odds with prominent theories of the ERN and aMCC. In particular, the reinforcement learning theory of the ERN may need to be modified because it may not suffice as a fully integrative model of aMCC function. Whenever the course and outcome of an action violate expectancies (not necessarily related to reward), the aMCC seems to be engaged in evaluating the necessity of behavioral adaptation.
Adam Waytz, Jamil Zaki, and Jason P. Mitchell
J. Neurosci. 2012;32:7646–7650
Human beings have an unusual proclivity for altruistic behavior, and recent commentators have suggested that these prosocial tendencies arise from our unique capacity to understand the minds of others (i.e., to mentalize). The current studies test this hypothesis by examining the relation between altruistic behavior and the reflexive engagement of a neural system reliably associated with mentalizing. Results indicated that activity in the dorsomedial prefrontal cortex—a region consistently involved in understanding others' mental states—predicts both monetary donations to others and time spent helping others. These findings address long-standing questions about the proximate source of human altruism by suggesting that prosocial behavior results, in part, from our broader tendency for social-cognitive thought.
Distinct contributions of the amygdala and parahippocampal gyrus to suspicion in a repeated bargaining game
Meghana A. Bhatt, Terry Lohrenz, Colin F. Camerer, and P. Read Montague
PNAS May 29, 2012 vol. 109 no. 22 8728-8733
Humans assess the credibility of information gained from others on a daily basis; this ongoing assessment is especially crucial for avoiding exploitation by others. We used a repeated, two-person bargaining game and a cognitive hierarchy model to test how subjects judge the information sent asymmetrically from one player to the other. The weight that they give to this information is the result of two distinct factors: their baseline suspicion given the situation and the suspicion generated by the other person's behavior. We hypothesized that human brains maintain an ongoing estimate of the credibility of the other player and sought to uncover neural correlates of this process. In the game, sellers were forced to infer the value of an object based on signals sent from a prospective buyer. We found that amygdala activity correlated with baseline suspicion, whereas activations in bilateral parahippocampus correlated with trial-by-trial uncertainty induced by the buyer's sequence of suggestions. In addition, the less credible a buyer appeared, the more sensitive parahippocampal activation was to trial-by-trial uncertainty. Although both of these neural structures have previously been implicated in trustworthiness judgments, these results suggest that they have distinct and separable roles that correspond to their theorized roles in learning and memory.
Aldo Genovesio, Satoshi Tsujimoto, Steven P. Wise
Neuron, Volume 74, Issue 4, 656-662, 24 May 2012
Functional neuroimaging studies show that perceptual judgments about time and space activate similar prefrontal and parietal areas, and it is known that perceptions in these two cognitive domains interfere with each other. These findings have led to the theory that temporal and spatial perceptions, among other metrics, draw on a common representation of magnitude. Our results indicate that an alternative principle applies to the prefrontal cortex. Analysis at the single-cell level shows that separate, domain-specific populations of neurons encode relative magnitude in time and space. These neurons are intermixed with each other in the prefrontal cortex, along with a separate intermixed population that encodes the goal chosen on the basis of these perceptual decisions. As a result, domain-specific neural processing at the single-cell level seems to underlie domain generality as observed at the regional level, with a common representation of prospective goals rather than a common representation of magnitude.
Maël Lebreton, Shadia Kawa, Baudouin Forgeot d'Arc, Jean Daunizeau, and Mathias Pessiglione
J. Neurosci. 2012;32:7146–7157
The spread of desires among individuals is widely believed to shape motivational drives in human populations. However, objective evidence for this phenomenon and insights into the underlying brain mechanisms are still lacking. Here we show that participants rated objects as more desirable once perceived as the goals of another agent's action. We then unravel the mechanisms underpinning such goal contagion, using functional neuroimaging. As expected, observing goal-directed actions activated a parietofrontal network known as the mirror neuron system (MNS), whereas subjective desirability ratings were represented in a ventral striatoprefrontal network known as the brain valuation system (BVS). Crucially, the induction of mimetic desires through action observation involved the modulation of BVS activity through MNS activity. Furthermore, MNS–BVS effective connectivity predicted individual susceptibility toward mimetic desires. We therefore suggest that MNS–BVS interaction represents a fundamental mechanism explaining how nonverbal behavior propagates desires without the need for explicit, intentional communication.
Shaun R. Patel, Sameer A. Sheth, Matthew K. Mian, John T. Gale, Benjamin D. Greenberg, Darin D. Dougherty, and Emad N. Eskandar
J. Neurosci. 2012;32:7311–7315
Human electrophysiology. The subjects' decisions could be predicted from neural activity in the nucleus accumbens (ventral striatum); at the same time, these neurons were confirmed to encode a reward prediction error, consistent with fMRI findings.
Linking values to actions and evaluating expectations relative to outcomes are both central to reinforcement learning and are thought to underlie financial decision-making. However, neurophysiology studies of these processes in humans remain limited. Here, we recorded the activity of single human nucleus accumbens neurons while subjects performed a gambling task. We show that the nucleus accumbens encodes two signals related to subject behavior. First, we find that under relatively predictable conditions, single neuronal activity predicts future financial decisions on a trial-by-trial basis. Interestingly, we show that this activity continues to predict decisions even under conditions of uncertainty (e.g., when the probability of winning or losing is 50/50 and no particular financial choice predicts a rewarding outcome). Furthermore, we find that this activity occurs, on average, 2 s before the subjects physically manifest their decision. Second, we find that the nucleus accumbens encodes the difference between expected and realized outcomes, consistent with a prediction error signal. We show this activity occurs immediately after the subject has realized the outcome of the trial and is present on both the individual and population neuron levels. These results provide human single neuronal evidence that the nucleus accumbens is integral in making financial decisions.
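The "difference between expected and realized outcomes" described here is the classic reward prediction error of reinforcement learning. A minimal Rescorla–Wagner-style sketch of that signal (a generic illustration, not the authors' analysis code; the learning rate `alpha` is an assumed parameter):

```python
def update_value(value, reward, alpha=0.1):
    """One learning step: the prediction error (realized minus expected
    outcome) drives the update of the expected value."""
    delta = reward - value          # prediction error
    return value + alpha * delta, delta

# Repeated wins of 1.0 starting from an expectation of 0.0:
value = 0.0
for _ in range(5):
    value, delta = update_value(value, reward=1.0)
# As the outcome becomes expected, the prediction error shrinks toward zero.
```

On this account, accumbens activity after the outcome tracks `delta`, not the raw reward.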
Diana I. Tamir and Jason P. Mitchell
PNAS May 22, 2012 vol. 109 no. 21 8038-8043
Humans devote 30–40% of speech output solely to informing others of their own subjective experiences. What drives this propensity for disclosure? Here, we test recent theories that individuals place high subjective value on opportunities to communicate their thoughts and feelings to others and that doing so engages neural and cognitive mechanisms associated with reward. Five studies provided support for this hypothesis. Self-disclosure was strongly associated with increased activation in brain regions that form the mesolimbic dopamine system, including the nucleus accumbens and ventral tegmental area. Moreover, individuals were willing to forgo money to disclose about the self. Two additional studies demonstrated that these effects stemmed from the independent value that individuals placed on self-referential thought and on simply sharing information with others. Together, these findings suggest that the human tendency to convey information about personal experience may arise from the intrinsic value associated with self-disclosure.
Nicholas D. Wright, Mkael Symmonds, Karen Hodgson, Thomas H. B. Fitzgerald, Bonni Crawford, and Raymond J. Dolan
J. Neurosci. 2012;32:7009–7020
Value-based choices are influenced both by risk in potential outcomes and by whether outcomes reflect potential gains or losses. These variables are held to be related in a specific fashion, manifest in risk aversion for gains and risk seeking for losses. Instead, we hypothesized that there are independent impacts of risk and loss on choice such that, depending on context, subjects can show either risk aversion for gains and risk seeking for losses or the exact opposite. We demonstrate this independence in a gambling task, by selectively reversing a loss-induced effect (causing more gambling for gains than losses and the reverse) while leaving risk aversion unaffected. Consistent with these dissociable behavioral impacts of risk and loss, fMRI data revealed dissociable neural correlates of these variables, with parietal cortex tracking risk and orbitofrontal cortex and striatum tracking loss. Based on our neural data, we hypothesized that risk and loss influence action selection through approach–avoidance mechanisms, a hypothesis supported in an experiment in which we show valence- and risk-dependent reaction time effects in line with this putative mechanism. We suggest that in the choice process risk and loss can independently engage approach–avoidance mechanisms. This can provide a novel explanation for how risk influences action selection and explains both classically described choice behavior and behavioral patterns not predicted by existing theory.
Sleep Deprivation Is Associated with Attenuated Parametric Valuation and Control Signals in the Midbrain during Value-Based Decision Making
Mareike M. Menz, Christian Büchel, and Jan Peters
J. Neurosci. 2012;32:6937–6946
Sleep deprivation (SD) has detrimental effects on cognition, but the affected psychological processes and underlying neural mechanisms are still essentially unclear. Here we combined functional magnetic resonance imaging and computational modeling to examine how SD alters neural representation of specific choice variables (subjective value and decision conflict) during reward-related decision making. Twenty-two human subjects underwent two functional neuroimaging sessions in counterbalanced order, once during rested wakefulness and once after 24 h of SD. Behaviorally, SD attenuated conflict-dependent slowing of response times, which was reflected in an attenuated conflict-induced decrease in drift rates in the drift diffusion model. Furthermore, SD increased overall choice stochasticity during risky choice. Model-based functional neuroimaging revealed attenuated parametric subjective value signals in the midbrain, parietal cortex, and ventromedial prefrontal cortex after SD. Conflict-related midbrain signals showed a similar downregulation. Findings are discussed with respect to changes in dopaminergic signaling associated with the sleep-deprived state.
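In the drift diffusion model invoked here, noisy evidence accumulates toward a decision bound, and a lower drift rate yields slower responses — which is how conflict-dependent slowing is expressed in the model. A minimal single-trial simulation (illustrative parameters, not the authors' fitted model):

```python
import random

def ddm_trial(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=10.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence x drifts toward +threshold (one response) or -threshold (the
    other) under Gaussian noise; returns (choice, response_time).
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= threshold else -1), t

# Averaged over many trials, a lower drift rate (e.g., high-conflict trials)
# produces longer response times than a higher drift rate.
```

Under this reading, SD attenuating the conflict-induced drift-rate decrease means high- and low-conflict trials produce more similar response times after sleep deprivation.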
Andrew M. Clark, Sebastien Bouret, Adrienne M. Young, and Barry J. Richmond
J. Neurosci. 2012;32:6869–6877
In humans and other animals, the vigor with which a reward is pursued depends on its desirability, that is, on the reward's predicted value. Predicted value is generally context-dependent, varying according to the value of rewards obtained in the recent and distant past. Signals related to reward prediction and valuation are believed to be encoded in a circuit centered around midbrain dopamine neurons and their targets in the prefrontal cortex and basal ganglia. Notably absent from this hypothesized reward pathway are dopaminergic targets in the medial temporal lobe. Here we show that a key part of the medial temporal lobe memory system previously reported to be important for sensory mnemonic and perceptual processing, the rhinal cortex (Rh), is required for using memories of previous reward values to predict the value of forthcoming rewards. We tested monkeys with bilateral Rh lesions on a task in which reward size varied across blocks of uncued trials. In this experiment, the only cues for predicting current reward value are the sizes of rewards delivered in previous blocks. Unexpectedly, monkeys with Rh ablations, but not intact controls, were insensitive to differences in predicted reward, responding as if they expected all rewards to be of equal magnitude. Thus, it appears that Rh is critical for using memory of previous rewards to predict the value of forthcoming rewards. These results are in agreement with accumulating evidence that Rh is critical for establishing the relationships between temporally interleaved events, which is a key element of episodic memory.
Scott Gorlin, Ming Meng, Jitendra Sharma, Hiroki Sugihara, Mriganka Sur, and Pawan Sinha
PNAS May 15, 2012 vol. 109 no. 20 7935-7940
In making sense of the visual world, the brain's processing is driven by two factors: the physical information provided by the eyes ("bottom-up" data) and the expectancies driven by past experience ("top-down" influences). We use degraded stimuli to tease apart the effects of bottom-up and top-down processes, because such stimuli are easier to recognize once the viewer has prior knowledge of the undegraded images. Using machine learning algorithms, we quantify the amount of information that brain regions contain about stimuli as the subject learns the coherent images. Our results show that several distinct regions, including high-level visual areas and the retinotopic cortex, contain more information about degraded stimuli with prior knowledge. Critically, these regions are separate from those that exhibit classical priming, indicating that top-down influences are more than feature-based attention. Together, our results show how the neural processing of complex imagery is rapidly influenced by fleeting experiences.
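"The amount of information a region contains about stimuli" is typically operationalized as decoding accuracy: a classifier trained on response patterns is tested on held-out trials. A minimal nearest-centroid stand-in for that idea (a generic sketch, not the authors' pipeline):

```python
import numpy as np

def decoding_accuracy(train_patterns, train_labels, test_patterns, test_labels):
    """Train a nearest-centroid decoder on response patterns
    (trials x voxels) and return classification accuracy on held-out trials."""
    classes = np.unique(train_labels)
    centroids = np.array([train_patterns[train_labels == c].mean(axis=0)
                          for c in classes])
    # Assign each held-out pattern to the class with the nearest centroid.
    dists = np.linalg.norm(test_patterns[:, None, :] - centroids[None, :, :],
                           axis=2)
    predictions = classes[dists.argmin(axis=1)]
    return float(np.mean(predictions == test_labels))
```

Higher accuracy in a region after learning the coherent images would then index the extra stimulus information conferred by prior knowledge.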
Kenneth T. Kishida, Brooks King-Casas, P. Read Montague
Neuron, Volume 67, Issue 4, 543-554, 26 August 2010
The pervasiveness of decision-making in every area of human endeavor highlights the importance of understanding choice mechanisms and their detailed relationship to underlying neurobiological function. This review surveys the recent and productive application of game-theoretic probes (economic games) to mental disorders. Such games typically possess concrete concepts of optimal play, thus providing quantitative ways to track when subjects' choices match or deviate from optimal. This feature equips economic games with natural classes of control signals that should guide learning and choice in the agents that play them. These signals and their underlying physical correlates in the brain are now being used to generate objective biomarkers that may prove useful for exposing and understanding the neurogenetic basis of normal and pathological human cognition. Thus, game-theoretic probes represent some of the first steps toward producing computationally principled, objective measures of cognitive function and dysfunction useful for the diagnosis, treatment, and understanding of mental disorders.
Neural Mechanisms Underlying Paradoxical Performance for Monetary Incentives Are Driven by Loss Aversion
Vikram S. Chib, Benedetto De Martino, Shinsuke Shimojo, John P. O'Doherty
Neuron, Volume 74, Issue 3, 582-594, 10 May 2012
Employers often make payment contingent on performance in order to motivate workers. We used fMRI with a novel incentivized skill task to examine the neural processes underlying behavioral responses to performance-based pay. We found that individuals' performance increased with increasing incentives; however, very high incentive levels led to the paradoxical consequence of worse performance. Between initial incentive presentation and task execution, striatal activity rapidly switched between activation and deactivation in response to increasing incentives. Critically, decrements in performance and striatal deactivations were directly predicted by an independent measure of behavioral loss aversion. These results suggest that incentives associated with successful task performance are initially encoded as a potential gain; however, when actually performing a task, individuals encode the potential loss that would arise from failure.
Alexander C. Schütz, Julia Trommershäuser, and Karl R. Gegenfurtner
PNAS May 8, 2012 vol. 109 no. 19 7547-7552
Humans shift their gaze to a new location several times per second. It is still unclear what determines where they look next. Fixation behavior is influenced by the low-level salience of the visual stimulus, such as luminance, contrast, and color, but also by high-level task demands and prior knowledge. Under natural conditions, different sources of information might conflict with each other and have to be combined. In our paradigm, we trade off visual salience against expected value. We show that both salience and value information influence the saccadic end point within an object, but with different time courses. The relative weights of salience and value are not constant but vary from eye movement to eye movement, depending critically on the availability of the value information at the time when the saccade is programmed. Short-latency saccades are determined mainly by salience, but value information is taken into account for long-latency saccades. We present a model that describes these data by dynamically weighting and integrating detailed topographic maps of visual salience and value. These results support the notion of independent neural pathways for the processing of visual information and value.
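The dynamic weighting model can be caricatured as a latency-dependent mixture of two topographic maps, with the weight on value ramping up as saccade latency grows. In this sketch the exponential ramp and the time constant `tau` are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

def saccade_target(salience_map, value_map, latency, tau=0.15):
    """Combine salience and value maps with a latency-dependent weight and
    return the (row, col) index of the winning location."""
    w_value = 1.0 - np.exp(-latency / tau)   # value weight grows with latency
    combined = (1.0 - w_value) * salience_map + w_value * value_map
    return np.unravel_index(np.argmax(combined), combined.shape)
```

With distinct salience and value peaks, a short-latency saccade lands on the salience peak and a long-latency saccade on the value peak, mirroring the reported time courses.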
Marc Guitart-Masip, Rumana Chowdhury, Tali Sharot, Peter Dayan, Emrah Düzel, and Raymond J. Dolan
PNAS May 8, 2012 vol. 109 no. 19 7511-7516
Dopamine is widely observed to signal anticipation of future rewards and thus thought to be a key contributor to affectively charged decision making. However, the experiments supporting this view have not dissociated rewards from the actions that lead to, or are occasioned by, them. Here, we manipulated dopamine pharmacologically and examined the effect on a task that explicitly dissociates action and reward value. We show that dopamine enhanced the neural representation of rewarding actions, without significantly affecting the representation of reward value as such. Thus, increasing dopamine levels with levodopa selectively boosted striatal and substantia nigra/ventral tegmental representations associated with actions leading to reward, but not with actions leading to the avoidance of punishment. These findings highlight a key role for dopamine in the generation of appetitively motivated actions.
Monetary Loss Alters Perceptual Thresholds and Compromises Future Decisions via Amygdala and Prefrontal Networks
Offir Laufer and Rony Paz
J. Neurosci. 2012;32:6304–6311
The influence of monetary loss on decision making and choice behavior is extensively studied. However, the effect of loss on sensory perception is less explored. Here, we use conditioning in human subjects to explore how monetary loss associated with a pure tone can affect changes in perceptual thresholds for the previously neutral stimulus. We found that loss conditioning, when compared with neutral exposure, decreases sensitivity and increases perceptual thresholds (i.e., a relative increase in the just-noticeable-difference). This was so even when compared with gain conditioning of comparable intensity, suggesting that the finding is related to valence. We further show that these perceptual changes are related to future decisions about stimuli that are farther away from the conditioned one (wider generalization), resulting in overall increased and irrational monetary loss for the subjects. We use functional imaging to identify the neural network whose activity correlates with the deterioration in sensitivity on an individual basis. In addition, we show that activity in the amygdala was tightly correlated with the wider behavioral generalization, namely, when wrong decisions were made. We suggest that, in principle, less discrimination can be beneficial in loss scenarios, because it assures an accurate and fast response to stimuli that resemble the original stimulus and hence have a high likelihood of entailing the same outcome. But whereas this can be useful for primary reinforcers that can impact survival, it can also underlie wrong and costly behaviors in scenarios of contemporary life that involve secondary reinforcers.
Albert R. Powers, III, Matthew A. Hevey, and Mark T. Wallace
J. Neurosci. 2012;32:6263–6274
The brain's ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Recently, our laboratory has demonstrated that a perceptual training paradigm is capable of eliciting a 40% narrowing in the width of this window that is stable for at least 1 week after cessation of training. In the current study, we sought to reveal the neural substrates of these changes. Eleven human subjects completed an audiovisual simultaneity judgment training paradigm, immediately before and after which they performed the same task during an event-related 3T fMRI session. The posterior superior temporal sulcus (pSTS) and areas of auditory and visual cortex exhibited robust BOLD decreases following training, and resting-state and effective connectivity analyses revealed significant increases in coupling among these cortices after training. These results provide the first evidence of the neural correlates underlying changes in multisensory temporal binding, and likely represent the substrate for the multisensory temporal binding window.
Thorsten Kahnt, Luke J. Chang, Soyoung Q Park, Jakob Heinzle, and John-Dylan Haynes
J. Neurosci. 2012;32:6240–6250
The primate orbitofrontal cortex (OFC) is involved in reward processing, learning, and decision making. Research in monkeys has shown that this region is densely connected with higher sensory, limbic, and subcortical regions. Moreover, a parcellation of the monkey OFC into two subdivisions has been suggested based on its intrinsic anatomical connections. However, in humans, little is known about any functional subdivisions of the OFC except for a rather coarse medial/lateral distinction. Here, we used resting-state fMRI in combination with unsupervised clustering techniques to investigate whether OFC subdivisions can be revealed based on their functional connectivity profiles with other brain regions. Examination of different cluster solutions provided support for a parcellation into two parts as observed in monkeys, but it also highlighted a much finer hierarchical clustering of the orbital surface. Specifically, we identified (1) a medial, (2) a posterior-central, (3) a central, and (4–6) three lateral clusters spanning the anterior–posterior gradient. Consistent with animal tracing studies, these OFC clusters were connected to other cortical regions such as prefrontal, temporal, and parietal cortices, but also to subcortical areas in the striatum and the midbrain. These connectivity patterns have important implications for identifying specific functional roles of OFC subdivisions in reward processing, learning, and decision making. Moreover, this parcellation scheme can guide the reporting of results in future studies.
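Connectivity-based parcellation amounts to clustering voxels whose whole-brain correlation vectors look alike. A toy k-means sketch over a voxels-by-targets connectivity matrix (the deterministic, evenly spaced initialization is a simplification; the study compared multiple cluster solutions rather than fixing k):

```python
import numpy as np

def cluster_connectivity(profiles, k, n_iter=50):
    """Cluster voxels by functional-connectivity profile with plain k-means.

    profiles: (n_voxels, n_targets) array, one connectivity vector per voxel.
    Returns an integer cluster label per voxel.
    """
    step = max(1, len(profiles) // k)
    centers = profiles[::step][:k].astype(float)   # evenly spaced initial centers
    labels = np.zeros(len(profiles), dtype=int)
    for _ in range(n_iter):
        # Assign each voxel to its nearest center, then recompute centers.
        dists = np.linalg.norm(profiles[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = profiles[labels == j].mean(axis=0)
    return labels
```

Running this for several values of k and inspecting the solutions is the rough analogue of the medial/lateral two-cluster split versus the finer six-cluster parcellation reported here.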
Michael T. Treadway, Joshua W. Buckholtz, Ronald L. Cowan, Neil D. Woodward, Rui Li, M. Sib Ansari, Ronald M. Baldwin, Ashley N. Schwartzman, Robert M. Kessler, and David H. Zald
J. Neurosci. 2012;32:6170–6176
Preferences for different combinations of costs and benefits are a key source of variability in economic decision-making. However, the neurochemical basis of individual differences in these preferences is poorly understood. Studies in both animals and humans have demonstrated that direct manipulation of the neurotransmitter dopamine (DA) significantly impacts cost/benefit decision-making, but less is known about how naturally occurring variation in DA systems may relate to individual differences in economic behavior. In the present study, 25 healthy volunteers completed a dual-scan PET imaging protocol with [18F]fallypride and d-amphetamine to measure DA responsivity and separately completed the effort expenditure for rewards task, a behavioral measure of cost/benefit decision-making in humans. We found that individual differences in DA function in the left striatum and ventromedial prefrontal cortex were correlated with a willingness to expend greater effort for larger rewards, particularly when probability of reward receipt was low. Additionally, variability in DA responses in the bilateral insula was negatively correlated with willingness to expend effort for rewards, consistent with evidence implicating this region in the processing of response costs. These findings highlight the role of DA signaling in striatal, prefrontal, and insular regions as key neurochemical mechanisms underlying individual differences in cost/benefit decision-making.
Xue-Lian Qi, Travis Meyer, Terrence R. Stanford, and Christos Constantinidis
J. Neurosci. 2012;32:6161–6169
The lateral prefrontal cortex plays an important role in working memory and decision-making, although little is known about how neural correlates of these functions are shaped by learning. To understand the effect of learning on the neuronal representation of decision-making, we recorded single neurons from the lateral prefrontal cortex of monkeys before and after they were trained to judge whether two stimuli appeared at matching spatial locations. After training, and in agreement with previous studies, a population of neurons exhibited activity that was modulated depending on whether the second stimulus constituted a match or not, which had predictive ability for the monkey's choice. However, even before training, prefrontal neurons displayed modulation depending on the match or non-match status of a stimulus, with approximately equal percentages of neurons preferring a match or a non-match. The difference in firing rate and discriminability for match and non-match stimuli before training was comparable in magnitude to that after training. Changes observed after training involved an increase in the percentage of neurons exhibiting this effect, a greater proportion of neurons preferring non-match stimuli, and a greater percentage of neurons representing information about the first stimulus during the presentation of the second stimulus. Our results suggest that neuronal activity representing some match/non-match judgments is present in the lateral prefrontal cortex even when subjects are not required to perform a comparison and before any training.
Stephen M. Fleming, Josefien Huijgen, and Raymond J. Dolan
J. Neurosci. 2012;32:6117–6125
Neuroscience has made considerable progress in understanding the neural substrates supporting cognitive performance in a number of domains, including memory, perception, and decision making. In contrast, how the human brain generates metacognitive awareness of task performance remains unclear. Here, we address this question by asking participants to perform perceptual decisions while providing concurrent metacognitive reports during fMRI scanning. We show that activity in right rostrolateral prefrontal cortex (rlPFC) satisfies three constraints for a role in metacognitive aspects of decision-making. Right rlPFC showed greater activity during self-report compared to a matched control condition, activity in this region correlated with reported confidence, and the strength of the relationship between activity and confidence predicted metacognitive ability across individuals. In addition, functional connectivity between right rlPFC and both contralateral PFC and visual cortex increased during metacognitive reports. We discuss these findings in a theoretical framework where rlPFC re-represents object-level decision uncertainty to facilitate metacognitive report.