Rejection of unfair offers in the ultimatum game is no evidence of strong reciprocity

Toshio Yamagishi, Yutaka Horita, Nobuhiro Mifune, Hirofumi Hashimoto, Yang Li, Mizuho Shinada, Arisa Miura, Keigo Inukai, Haruto Takagishi, and Dora Simunovic
PNAS, published online before print November 27, 2012, doi: 10.1073/pnas.1212126109



The strong reciprocity model of the evolution of human cooperation has gained some acceptance, partly on the basis of support from experimental findings. The observation that unfair offers in the ultimatum game are frequently rejected constitutes an important piece of the experimental evidence for strong reciprocity. In the present study, we have challenged the idea that the rejection response in the ultimatum game provides evidence of the assumption held by strong reciprocity theorists that negative reciprocity observed in the ultimatum game is inseparably related to positive reciprocity as the two sides of a preference for fairness. The prediction of an inseparable relationship between positive and negative reciprocity was rejected on the basis of the results of a series of experiments that we conducted using the ultimatum game, the dictator game, the trust game, and the prisoner’s dilemma game. We did not find any correlation between the participants’ tendencies to reject unfair offers in the ultimatum game and their tendencies to exhibit various prosocial behaviors in the other games, including their inclinations to positively reciprocate in the trust game. The participants’ responses to postexperimental questions add support to the view that the rejection of unfair offers in the ultimatum game is a tacit strategy for avoiding the imposition of an inferior status.

Developing Intuition: Neural Correlates of Cognitive-Skill Learning in Caudate Nucleus

Xiaohong Wan, Daisuke Takano, Takeshi Asamizuya, Chisato Suzuki, Kenichi Ueno, Kang Cheng, Takeshi Ito, and Keiji Tanaka
J. Neurosci. 2012;32 17492-17501 Open Access

The shogi project. Professional players (compared with amateurs) make decisions intuitively, and the caudate nucleus is active when they do. So what happens when novices train at shogi for 15 weeks? Their caudate activity increases (while cortical activity does not change). The caudate nucleus is important for intuitive decision making. http://www.jneurosci.org/content/32/48/17492

To be honest, I think this is a better study than the earlier Science paper (Wan et al., 2011), but... there are various difficulties...

The superior capability of cognitive experts largely depends on automatic, quick information processing, which is often referred to as intuition. Intuition develops following extensive long-term training. There are many cognitive models on intuition development, but its neural basis is not known. Here we trained novices for 15 weeks to learn a simple board game and measured their brain activities in early and end phases of the training while they quickly generated the best next-move to a given board pattern. We found that the activation in the head of caudate nucleus developed over the course of training, in parallel to the development of the capability to quickly generate the best next-move, and the magnitude of the caudate activity was correlated with the subject's performance. In contrast, cortical activations, which already appeared in the early phase of training, did not further change. Thus, neural activation in the caudate head, but not those in cortical areas, tracked the development of capability to quickly generate the best next-move, indicating that circuitries including the caudate head may automate cognitive computations.


Stimulus-Related Activity during Conditional Associations in Monkey Perirhinal Cortex Neurons Depends on Upcoming Reward Outcome

Kaoru Ohyama, Yasuko Sugase-Miyamoto, Narihisa Matsumoto, Munetaka Shidara, and Chikara Sato
J. Neurosci. 2012;32 17407-17419

Acquiring the significance of events based on reward-related information is critical for animals to survive and to conduct social activities. The importance of the perirhinal cortex for reward-related information processing has been suggested. To examine whether or not neurons in this cortex represent reward information flexibly when a visual stimulus indicates either a rewarded or unrewarded outcome, neuronal activity in the macaque perirhinal cortex was examined using a conditional-association cued-reward task. The task design allowed us to study how the neuronal responses depended on the animal's prediction of whether it would or would not be rewarded. Two visual stimuli, a color stimulus as Cue1 followed by a pattern stimulus as Cue2, were sequentially presented. Each pattern stimulus was conditionally associated with both rewarded and unrewarded outcomes depending on the preceding color stimulus. We found an activity depending upon the two reward conditions during Cue2, i.e., pattern stimulus presentation. The response appeared after the response dependent upon the image identity of Cue2. The response delineating a specific cue sequence also appeared between the responses dependent upon the identity of Cue2 and reward conditions. Thus, when Cue1 sets the context for whether or not Cue2 indicates a reward, this region represents the meaning of Cue2, i.e., the reward conditions, independent of the identity of Cue2. These results suggest that neurons in the perirhinal cortex do more than associate a single stimulus with a reward to achieve flexible representations of reward information.

Distributed Representations of Rule Identity and Rule Order in Human Frontal Cortex and Striatum

Carlo Reverberi, Kai Gorgen, and John-Dylan Haynes
J. Neurosci. 2012;32 17420-17430

Humans are able to flexibly devise and implement rules to reach their desired goals. For simple situations, we can use single rules, such as “if traffic light is green then cross the street.” In most cases, however, more complex rule sets are required, involving the integration of multiple layers of control. Although it has been shown that prefrontal cortex is important for rule representation, it has remained unclear how the brain encodes more complex rule sets. Here, we investigate how the brain represents the order in which different parts of a rule set are evaluated. Participants had to follow compound rule sets that involved the concurrent application of two single rules in a specific order, where one of the rules always had to be evaluated first. The rules and their assigned order were independently manipulated. By applying multivariate decoding to fMRI data, we found that the identity of the current rule was encoded in a frontostriatal network involving right ventrolateral prefrontal cortex, right superior frontal gyrus, and dorsal striatum. In contrast, rule order could be decoded in the dorsal striatum and in the right premotor cortex. The nonhomogeneous distribution of information across brain areas was confirmed by follow-up analyses focused on relevant regions of interest. We argue that the brain encodes complex rule sets by “decomposing” them in their constituent features, which are represented in different brain areas, according to the aspect of information to be maintained.


Speaker–listener neural coupling underlies successful communication

Greg J. Stephens, Lauren J. Silbert, and Uri Hasson
PNAS August 10, 2010 vol. 107 no. 32 14425-14430


Verbal communication is a joint activity; however, speech production and comprehension have primarily been analyzed as independent processes within the boundaries of individual brains. Here, we applied fMRI to record brain activity from both speakers and listeners during natural verbal communication. We used the speaker's spatiotemporal brain activity to model listeners’ brain activity and found that the speaker's activity is spatially and temporally coupled with the listener's activity. This coupling vanishes when participants fail to communicate. Moreover, though on average the listener's brain activity mirrors the speaker's activity with a delay, we also find areas that exhibit predictive anticipatory responses. We connected the extent of neural coupling to a quantitative measure of story comprehension and find that the greater the anticipatory speaker–listener coupling, the greater the understanding. We argue that the observed alignment of production- and comprehension-based processes serves as a mechanism by which brains convey information.


Orbitofrontal Cortex Supports Behavior and Learning Using Inferred But Not Cached Values

Joshua L. Jones, Guillem R. Esber, Michael A. McDannald, Aaron J. Gruber, Alex Hernandez, Aaron Mirenzi, Geoffrey Schoenbaum
Science 16 November 2012: Vol. 338 no. 6109 pp. 953-956


After learning the pairings "stimulus A→B, C→D," rats learned "B→reward, D→no reward." Rats with OFC activity suppressed responded to B when B and D were presented as a pair, but responded to neither stimulus when A and C were presented as a pair (intact rats responded to A). The OFC is involved in processing value based on inference. http://www.sciencemag.org/content/338/6109/953

Computational and learning theory models propose that behavioral control reflects value that is both cached (computed and stored during previous experience) and inferred (estimated on the fly on the basis of knowledge of the causal structure of the environment). The latter is thought to depend on the orbitofrontal cortex. Yet some accounts propose that the orbitofrontal cortex contributes to behavior by signaling “economic” value, regardless of the associative basis of the information. We found that the orbitofrontal cortex is critical for both value-based behavior and learning when value must be inferred but not when a cached value is sufficient. The orbitofrontal cortex is thus fundamental for accessing model-based representations of the environment to compute value rather than for signaling value per se.
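The cached-versus-inferred distinction in this sensory-preconditioning design can be made concrete with a toy sketch (stimulus names follow the summary above; the code and values are illustrative, not the authors' model):

```python
# Phase 1: stimulus-stimulus associations (A predicts B, C predicts D).
transitions = {"A": "B", "C": "D"}
# Phase 2: direct reward learning gives B and D cached values.
cached = {"B": 1.0, "D": 0.0}

def value(stimulus):
    """Return a cached value if one exists; otherwise infer value by
    rolling the stimulus forward through the learned transition model."""
    if stimulus in cached:                 # model-free lookup (cached value)
        return cached[stimulus]
    successor = transitions.get(stimulus)  # model-based inference step
    return cached.get(successor, 0.0)

# Choosing B over D needs only cached values; choosing A over C requires
# the inference step, which the paper ties to an intact orbitofrontal cortex.
print(value("B"), value("A"))  # 1.0 1.0
```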

The Primate Ventral Pallidum Encodes Expected Reward Value and Regulates Motor Action

Yoshihisa Tachibana, Okihide Hikosaka
Neuron, Volume 76, Issue 4, 826-837, 21 November 2012

Motor actions are facilitated when expected reward value is high. It is hypothesized that there are neurons that encode expected reward values to modulate impending actions and potentially represent motivation signals. Here, we present evidence suggesting that the ventral pallidum (VP) may participate in this process. We recorded single neuronal activity in the monkey VP using a saccade task with a direction-dependent reward bias. Depending on the amount of the expected reward, VP neurons increased or decreased their activity tonically until the reward was delivered, for both ipsiversive and contraversive saccades. Changes in expected reward values were also associated with changes in saccade performance (latency and velocity). Furthermore, bilateral muscimol-induced inactivation of the VP abolished the reward-dependent changes in saccade latencies. These data suggest that the VP provides expected reward value signals that are used to facilitate or inhibit motor actions.

NMDA Receptors Control Cue-Outcome Selectivity and Plasticity of Orbitofrontal Firing Patterns during Associative Stimulus-Reward Learning

Marijn van Wingerden, Martin Vinck, Vincent Tijms, Irene R.S. Ferreira, Allert J. Jonker, Cyriel M.A. Pennartz
Neuron, Volume 76, Issue 4, 813-825, 21 November 2012

Neural activity in orbitofrontal cortex has been linked to flexible representations of stimulus-outcome associations. Such value representations are known to emerge with learning, but the neural mechanisms supporting this phenomenon are not well understood. Here, we provide evidence for a causal role for NMDA receptors (NMDARs) in mediating spike pattern discriminability, neural plasticity, and rhythmic synchronization in relation to evaluative stimulus processing and decision making. Using tetrodes, single-unit spike trains and local field potentials were recorded during local, unilateral perfusion of an NMDAR blocker in rat OFC. In the absence of behavioral effects, NMDAR blockade severely hampered outcome-selective spike pattern formation to olfactory cues, relative to control perfusions. Moreover, NMDAR blockade shifted local rhythmic synchronization to higher frequencies and degraded its linkage to stimulus-outcome selective coding. These results demonstrate the importance of NMDARs for cue-outcome associative coding in OFC during learning and illustrate how NMDAR blockade disrupts network dynamics.

Rhythmic Fluctuations in Evidence Accumulation during Decision Making in the Human Brain

Valentin Wyart, Vincent de Gardelle, Jacqueline Scholl, Christopher Summerfield
Neuron, Volume 76, Issue 4, 847-858, 21 November 2012

Categorical choices are preceded by the accumulation of sensory evidence in favor of one action or another. Current models describe evidence accumulation as a continuous process occurring at a constant rate, but this view is inconsistent with accounts of a psychological refractory period during sequential information processing. During multisample perceptual categorization, we found that the neural encoding of momentary evidence in human electrical brain signals and its subsequent impact on choice fluctuated rhythmically according to the phase of ongoing parietal delta oscillations (1–3 Hz). By contrast, lateralized beta-band power (10–30 Hz) overlying human motor cortex encoded the integrated evidence as a response preparation signal. These findings draw a clear distinction between central and motor stages of perceptual decision making, with successive samples of sensory evidence competing to pass through a serial processing bottleneck before being mapped onto action.
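The central claim above — that the weight of each momentary evidence sample fluctuates with the phase of a slow oscillation before being integrated — can be sketched as follows (a toy illustration with made-up parameter values, not the authors' model):

```python
import numpy as np

def rhythmic_accumulation(evidence, f=2.0, dt=0.1, depth=0.5, phase=0.0):
    """Accumulate evidence with a gain that fluctuates at a delta-band
    rhythm (toy sketch; f in Hz, dt in seconds, depth in [0, 1]).

    Each sample's impact on the decision variable depends on the phase of
    an ongoing slow oscillation, as in the parietal delta-phase effect
    described above; the cumulative sum plays the role of the integrated
    evidence reflected in motor beta-band power.
    """
    t = np.arange(len(evidence)) * dt
    gain = 1.0 + depth * np.sin(2 * np.pi * f * t + phase)  # phase-dependent gain
    return np.cumsum(gain * evidence)                       # integrated evidence

# Identical evidence samples arriving at different oscillation phases
# end up with different weights in the integral.
samples = np.ones(8)
dv = rhythmic_accumulation(samples)
```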

Positively Biased Processing of Self-Relevant Social Feedback

Christoph W. Korn, Kristin Prehn, Soyoung Q. Park, Henrik Walter, and Hauke R. Heekeren
J. Neurosci. 2012;32 16832-16844

Receiving social feedback such as praise or blame for one's character traits is a key component of everyday human interactions. It has been proposed that humans are positively biased when integrating social feedback into their self-concept. However, a mechanistic description of how humans process self-relevant feedback is lacking. Here, participants received feedback from peers after a real-life interaction. Participants processed feedback in a positively biased way, i.e., they changed their self-evaluations more toward desirable than toward undesirable feedback. Using functional magnetic resonance imaging we investigated two feedback components. First, the reward-related component correlated with activity in ventral striatum and in anterior cingulate cortex/medial prefrontal cortex (ACC/MPFC). Second, the comparison-related component correlated with activity in the mentalizing network, including the MPFC, the temporoparietal junction, the superior temporal sulcus, the temporal pole, and the inferior frontal gyrus. This comparison-related activity within the mentalizing system has a parsimonious interpretation, i.e., activity correlated with the differences between participants' own evaluation and feedback. Importantly, activity within the MPFC that integrated reward-related and comparison-related components predicted the self-related positive updating bias across participants offering a mechanistic account of positively biased feedback processing. Thus, theories on both reward and mentalizing are important for a better understanding of how social information is integrated into the human self-concept.

Perceptual Criteria in the Human Brain

Corey N. White, Jeanette A. Mumford, and Russell A. Poldrack
J. Neurosci. 2012;32 16716-16724

A critical component of decision making is the ability to adjust criteria for classifying stimuli. fMRI and drift diffusion models were used to explore the neural representations of perceptual criteria in decision making. The specific focus was on the relative engagement of perceptual- and decision-related neural systems in response to adjustments in perceptual criteria. Human participants classified visual stimuli as big or small based on criteria of different sizes, which effectively biased their choices toward one response over the other. A drift diffusion model was fit to the behavioral data to extract estimates of stimulus size, criterion size, and difficulty for each participant and condition. These parameter values were used as modulated regressors to create a highly constrained model for the fMRI analysis that accounted for several components of the decision process. The results show that perceptual criteria values were reflected by activity in left inferior temporal cortex, a region known to represent objects and their physical properties, whereas stimulus size was reflected by activation in occipital cortex. A frontoparietal network of regions, including dorsolateral prefrontal cortex and superior parietal lobule, corresponded to the decision variables resulting from the downstream stimulus–criterion comparison, independent of stimulus type. The results provide novel evidence that perceptual criteria are represented in stimulus space and serve as inputs to be compared with the presented stimulus, recruiting a common network of decision regions shown to be active in other simple decisions. This work advances our understanding of the neural correlates of decision flexibility and adjustments of behavioral bias.
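The drift diffusion model used here treats a decision as noisy evidence drifting between two response boundaries, with criterion shifts implemented as a bias in the starting point. A minimal simulation conveys the idea (parameter names and values are illustrative, not fits from the paper):

```python
import numpy as np

def simulate_ddm(drift, threshold, start, noise_sd=1.0, dt=0.001,
                 max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence x starts at `start` (criterion shifts bias this point),
    drifts at rate `drift` (stimulus size relative to the criterion),
    and a response is made when x crosses 0 or `threshold`.
    Returns (choice, reaction_time).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x, t = start, 0.0
    while 0.0 < x < threshold and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("big" if x >= threshold else "small"), t

# A strong positive drift (stimulus clearly larger than the criterion)
# should yield mostly "big" responses.
rng = np.random.default_rng(42)
choices = [simulate_ddm(drift=2.0, threshold=1.0, start=0.5, rng=rng)[0]
           for _ in range(200)]
```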

Neural Correlates of Anticipation Risk Reflect Risk Preferences

Sarah Rudorf, Kerstin Preuschoff, and Bernd Weber
J. Neurosci. 2012;32 16683-16692

Individual risk preferences have a large influence on decisions, such as financial investments, career and health choices, or gambling. Decision making under risk has been studied both behaviorally and on a neural level. It remains unclear, however, how risk attitudes are encoded and integrated with choice. Here, we investigate how risk preferences are reflected in neural regions known to process risk. We collected functional magnetic resonance images of 56 human subjects during a gambling task (Preuschoff et al., 2006). Subjects were grouped into risk averters and risk seekers according to the risk preferences they revealed in a separate lottery task. We found that during the anticipation of high-risk gambles, risk averters show stronger responses in ventral striatum and anterior insula compared to risk seekers. In addition, risk prediction error signals in anterior insula, inferior frontal gyrus, and anterior cingulate indicate that risk averters do not dissociate properly between gambles that are more or less risky than expected. We suggest this may result in a general overestimation of prospective risk and lead to risk avoidance behavior. This is the first study to show that behavioral risk preferences are reflected in the passive evaluation of risky situations. The results have implications on public policies in the financial and health domain.

Precedence of the Eye Region in Neural Processing of Faces

Elias B. Issa and James J. DiCarlo
J. Neurosci. 2012;32 16666-16682

Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of “face-selective” cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face-selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full-face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features—consistent with parts-based models—grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy.

Robust Representation of Stable Object Values in the Oculomotor Basal Ganglia

Masaharu Yasuda, Shinya Yamamoto, and Okihide Hikosaka
J. Neurosci. 2012;32 16917-16932 Open Access

Our gaze tends to be directed to objects previously associated with rewards. Such object values change flexibly or remain stable. Here we present evidence that the monkey substantia nigra pars reticulata (SNr) in the basal ganglia represents stable, rather than flexible, object values. After across-day learning of object–reward association, SNr neurons gradually showed a response bias to surprisingly many visual objects: inhibition to high-valued objects and excitation to low-valued objects. Many of these neurons were shown to project to the ipsilateral superior colliculus. This neuronal bias remained intact even after >100 d without further learning. In parallel with the neuronal bias, the monkeys tended to look at high-valued objects. The neuronal and behavioral biases were present even if no value was associated during testing. These results suggest that SNr neurons bias the gaze toward objects that were consistently associated with high values in one's history.

Dynamic Fluctuations in Dopamine Efflux in the Prefrontal Cortex and Nucleus Accumbens during Risk-Based Decision Making

Jennifer R. St. Onge, Soyon Ahn, Anthony G. Phillips, and Stan B. Floresco
J. Neurosci. 2012;32 16880-16891

Mesocorticolimbic dopamine (DA) has been implicated in cost/benefit decision making about risks and rewards. The prefrontal cortex (PFC) and nucleus accumbens (NAc) are two DA terminal regions that contribute to decision making in distinct manners. However, how fluctuations of tonic DA levels may relate to different aspects of decision making remains to be determined. The present study measured DA efflux in the PFC and NAc with microdialysis in well trained rats performing a probabilistic discounting task. Selection of a small/certain option always delivered one pellet, whereas another, large/risky option yielded four pellets, with probabilities that decreased (100–12.5%) or increased (12.5–100%) across four blocks of trials. Yoked-reward groups were also included to control for reward delivery. PFC DA efflux during decision making decreased or increased over a session, corresponding to changes in large/risky reward probabilities. Similar profiles were observed from yoked-rewarded rats, suggesting that fluctuations in PFC DA reflect changes in the relative rate of reward received. NAc DA efflux also showed decreasing/increasing trends over the session during both tasks. However, DA efflux was higher during decision making on free- versus forced-choice trials and during periods of greater reward uncertainty. Moreover, changes in NAc DA closely tracked shifts in choice biases. These data reveal dynamic and dissociable fluctuations in PFC and NAc DA transmission associated with different aspects of risk-based decision making. PFC DA may signal changes in reward availability that facilitates modification of choice biases, whereas NAc DA encodes integrated signals about reward rates, uncertainty, and choice, reflecting implementation of decision policies.


How Glitter Relates to Gold: Similarity-Dependent Reward Prediction Errors in the Human Striatum

Thorsten Kahnt, Soyoung Q Park, Christopher J. Burke, and Philippe N. Tobler
J. Neurosci. 2012;32 16521-16529

Optimal choices benefit from previous learning. However, it is not clear how previously learned stimuli influence behavior to novel but similar stimuli. One possibility is to generalize based on the similarity between learned and current stimuli. Here, we use neuroscientific methods and a novel computational model to inform the question of how stimulus generalization is implemented in the human brain. Behavioral responses during an intradimensional discrimination task showed similarity-dependent generalization. Moreover, a peak shift occurred, i.e., the peak of the behavioral generalization gradient was displaced from the rewarded conditioned stimulus in the direction away from the unrewarded conditioned stimulus. To account for the behavioral responses, we designed a similarity-based reinforcement learning model wherein prediction errors generalize across similar stimuli and update their value. We show that this model predicts a similarity-dependent neural generalization gradient in the striatum as well as changes in responding during extinction. Moreover, across subjects, the width of generalization was negatively correlated with functional connectivity between the striatum and the hippocampus. This result suggests that hippocampus–striatal connections contribute to stimulus-specific value updating by controlling the width of generalization. In summary, our results shed light onto the neurobiology of a fundamental, similarity-dependent learning principle that allows learning the value of stimuli that have never been encountered.
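The core of a similarity-based reinforcement learning model of this kind — prediction errors that update not just the trained stimulus but all similar stimuli, weighted by a similarity kernel — can be sketched briefly (my own toy implementation under a Gaussian-kernel assumption, not the authors' fitted model):

```python
import numpy as np

def similarity_rl(stimuli, trials, alpha=0.2, width=0.5):
    """Similarity-based reinforcement learning (illustrative sketch).

    After each trial the prediction error updates the values of all
    stimuli, weighted by a Gaussian similarity kernel around the trained
    stimulus -- producing a generalization gradient whose width is
    controlled by `width`.
    """
    values = np.zeros(len(stimuli))
    for stim_idx, reward in trials:
        delta = reward - values[stim_idx]  # reward prediction error
        similarity = np.exp(-((stimuli - stimuli[stim_idx]) ** 2)
                            / (2 * width ** 2))
        values += alpha * similarity * delta  # generalized value update
    return values

# One stimulus dimension; CS+ at 0.0 (rewarded), CS- at 1.0 (unrewarded).
stimuli = np.linspace(-1.0, 2.0, 31)
trials = [(10, 1.0), (20, 0.0)] * 50  # index 10 = CS+, index 20 = CS-
v = similarity_rl(stimuli, trials)
# The gradient peaks near the CS+, and stimuli on the far side of the
# CS+ from the CS- retain more value than their mirror images.
```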


Action-Specific Value Signals in Reward-Related Regions of the Human Brain

Thomas H. B. FitzGerald, Karl J. Friston, and Raymond J. Dolan
J. Neurosci. 2012;32 16417-16423

Estimating the value of potential actions is crucial for learning and adaptive behavior. We know little about how the human brain represents action-specific value outside of motor areas. This is, in part, due to a difficulty in detecting the neural correlates of value using conventional (region of interest) functional magnetic resonance imaging (fMRI) analyses, due to a potential distributed representation of value. We address this limitation by applying a recently developed multivariate decoding method to high-resolution fMRI data in subjects performing an instrumental learning task. We found evidence for action-specific value signals in circumscribed regions, specifically ventromedial prefrontal cortex, putamen, thalamus, and insula cortex. In contrast, action-independent value signals were more widely represented across a large set of brain areas. Using multivariate Bayesian model comparison, we formally tested whether value–specific responses are spatially distributed or coherent. We found strong evidence that both action-specific and action-independent value signals are represented in a distributed fashion. Our results suggest that a surprisingly large number of classical reward-related areas contain distributed representations of action-specific values, representations that are likely to mediate between reward and adaptive behavior.

Reward Stability Determines the Contribution of Orbitofrontal Cortex to Adaptive Behavior

Justin S. Riceberg and Matthew L. Shapiro
J. Neurosci. 2012;32 16402-16409

Animals respond to changing contingencies to maximize reward. The orbitofrontal cortex (OFC) is important for flexible responding when established contingencies change, but the underlying cognitive mechanisms are debated. We tested rats with sham or OFC lesions in radial maze tasks that varied the frequency of contingency changes and measured both perseverative and non-perseverative errors. When contingencies were changed rarely, rats with sham lesions learned quickly and performed better than rats with OFC lesions. Rats with sham lesions made fewer non-perseverative errors, rarely entering non-rewarded arms, and more win–stay responses by returning to recently rewarded arms compared with rats with OFC lesions. When contingencies were changed rapidly, however, rats with sham lesions learned slower, made more non-perseverative errors and fewer lose–shift responses, and returned more often to non-rewarded arms than rats with OFC lesions. The results support the view that the OFC integrates reward history and suggest that the availability of outcome expectancy signals can either improve or impair adaptive responding depending on reward stability.


Some Consequences of Having Too Little

Anuj K. Shah, Sendhil Mullainathan, Eldar Shafir
Science 2 November 2012: Vol. 338 no. 6107 pp. 682-685
DOI: 10.1126/science.1222426

From Science. Poor people engage in behaviors that make them even poorer (such as excessive borrowing) because they concentrate too much on the problem right in front of them. The same holds not only for being poor in money but also for being poor in time. http://www.sciencemag.org/content/338/6107/682

Poor individuals often engage in behaviors, such as excessive borrowing, that reinforce the conditions of poverty. Some explanations for these behaviors focus on personality traits of the poor. Others emphasize environmental factors such as housing or financial access. We instead consider how certain behaviors stem simply from having less. We suggest that scarcity changes how people allocate attention: It leads them to engage more deeply in some problems while neglecting others. Across several experiments, we show that scarcity leads to attentional shifts that can help to explain behaviors such as overborrowing. We discuss how this mechanism might also explain other puzzles of poverty.


The Emergence and Representation of Knowledge about Social and Nonsocial Hierarchies

Dharshan Kumaran, Hans Ludwig Melo, Emrah Duzel
Neuron, Volume 76, Issue 3, 653-666, 8 November 2012

Learning social hierarchies and the amygdala. Amygdala activity correlates with how far learning has progressed, and individual differences in learning performance can be explained by amygdala size. The hippocampus, in contrast, is involved in learning rank in general, not only social hierarchies. http://www.cell.com/neuron/abstract/S0896-6273(12)00889-6

Primates are remarkably adept at ranking each other within social hierarchies, a capacity that is critical to successful group living. Surprisingly little, however, is understood about the neurobiology underlying this quintessential aspect of primate cognition. In our experiment, participants first acquired knowledge about a social and a nonsocial hierarchy and then used this information to guide investment decisions. We found that neural activity in the amygdala tracked the development of knowledge about a social, but not a nonsocial, hierarchy. Further, structural variations in amygdala gray matter volume accounted for interindividual differences in social transitivity performance. Finally, the amygdala expressed a neural signal selectively coding for social rank, whose robustness predicted the influence of rank on participants’ investment decisions. In contrast, we observed that the linear structure of both social and nonsocial hierarchies was represented at a neural level in the hippocampus. Our study implicates the amygdala in the emergence and representation of knowledge about social hierarchies and distinguishes the domain-general contribution of the hippocampus.

Neural Mechanisms of Speed-Accuracy Tradeoff

Richard P. Heitz, Jeffrey D. Schall
Neuron, Volume 76, Issue 3, 616-628, 8 November 2012

Intelligent agents balance speed of responding with accuracy of deciding. Stochastic accumulator models commonly explain this speed-accuracy tradeoff by strategic adjustment of response threshold. Several laboratories identify specific neurons in prefrontal and parietal cortex with this accumulation process, yet no neurophysiological correlates of speed-accuracy tradeoff have been described. We trained macaque monkeys to trade speed for accuracy on cue during visual search and recorded the activity of neurons in the frontal eye field. Unpredicted by any model, we discovered that speed-accuracy tradeoff is accomplished through several distinct adjustments. Visually responsive neurons modulated baseline firing rate, sensory gain, and the duration of perceptual processing. Movement neurons triggered responses with activity modulated in a direction opposite of model predictions. Thus, current stochastic accumulator models provide an incomplete description of the neural processes accomplishing speed-accuracy tradeoffs. The diversity of neural mechanisms was reconciled with the accumulator framework through an integrated accumulator model constrained by requirements of the motor system.

Inactivating Anterior Insular Cortex Reduces Risk Taking

Hironori Ishii, Shinya Ohara, Philippe N. Tobler, Ken-Ichiro Tsutsui, and Toshio Iijima
J. Neurosci. 2012;32 16031-16039

Rats with the anterior insular cortex (orbitofrontal cortex) inactivated become risk-averse (risk-seeking). Behavior in task conditions unrelated to risk does not change. The anterior insular cortex and the orbitofrontal cortex both play important roles in decision making under risk, but their contributions have opposite effects. http://www.jneurosci.org/content/32/45/16031

We often have to make risky decisions between alternatives with outcomes that can be better or worse than the outcomes of safer alternatives. Although previous studies have implicated various brain regions in risky decision making, it remains unknown which regions are crucial for balancing whether to take a risk or play it safe. Here, we focused on the anterior insular cortex (AIC), the causal involvement of which in risky decision making is still unclear, although human imaging studies have reported AIC activation in various gambling tasks. We investigated the effects of temporarily inactivating the AIC on rats' risk preference in two types of gambling tasks, one in which risk arose in reward amount and one in which it arose in reward delay. As a control within the same subjects, we inactivated the adjacent orbitofrontal cortex (OFC), which is well known to affect risk preference. In both gambling tasks, AIC inactivation decreased risk preference whereas OFC inactivation increased it. In risk-free control situations, AIC and OFC inactivations did not affect decision making. These results suggest that the AIC is causally involved in risky decision making and promotes risk taking. The AIC and OFC may be crucial for the opposing motives of whether to take a risk or avoid it.

Differential Reward Coding in the Subdivisions of the Primate Caudate during an Oculomotor Task

Kae Nakamura, Gustavo S. Santos, Ryuichi Matsuzaki, and Hiroyuki Nakahara
J. Neurosci. 2012;32 15963-15982

The basal ganglia play a pivotal role in reward-oriented behavior. The striatum, an input channel of the basal ganglia, is composed of subdivisions that are topographically connected with different cortical and subcortical areas. To test whether reward information is differentially processed in the different parts of the striatum, we compared reward-related neuronal activity along the dorsolateral–ventromedial axis in the caudate nucleus of monkeys performing an asymmetrically rewarded oculomotor task. In a given block, a target in one position was associated with a large reward, whereas the other target was associated with a small reward. The target position–reward value contingency was switched between blocks. We found the following: (1) activity that reflected the block-wise reward contingency emerged before the appearance of a visual target, and it was more prevalent in the dorsal, rather than central and ventral, caudate; (2) activity that was positively related to the reward size of the current trial was evident, especially after reward delivery, and it was more prevalent in the ventral and central, rather than dorsal, caudate; and (3) activity that was modulated by the memory of the outcomes of the previous trials was evident in the dorsal and central caudate. This multiple reward information, together with the target-direction information, was represented primarily by individual caudate neurons, and the different reward information was represented in caudate subpopulations with distinct electrophysiological properties, e.g., baseline firing and spike width. These results suggest parallel processing of different reward information by the basal ganglia subdivisions defined by extrinsic connections and intrinsic properties.

Dorsomedial Prefrontal Cortex Mediates Rapid Evaluations Predicting the Outcome of Romantic Interactions

Jeffrey C. Cooper, Simon Dunne, Teresa Furey, and John P. O'Doherty
J. Neurosci. 2012;32 15647-15656

Humans frequently make real-world decisions based on rapid evaluations of minimal information; for example, should we talk to an attractive stranger at a party? Little is known, however, about how the brain makes rapid evaluations with real and immediate social consequences. To address this question, we scanned participants with functional magnetic resonance imaging (fMRI) while they viewed photos of individuals that they subsequently met at real-life “speed-dating” events. Neural activity in two areas of dorsomedial prefrontal cortex (DMPFC), paracingulate cortex, and rostromedial prefrontal cortex (RMPFC) was predictive of whether each individual would be ultimately pursued for a romantic relationship or rejected. Activity in these areas was attributable to two distinct components of romantic evaluation: either consensus judgments about physical beauty (paracingulate cortex) or individualized preferences based on a partner's perceived personality (RMPFC). These data identify novel computational roles for these regions of the DMPFC in even very rapid social evaluations. Even a first glance, then, can accurately predict romantic desire, but that glance involves a mix of physical and psychological judgments that depend on specific regions of DMPFC.