Wednesday, July 15, 2015

Reward Pays the Cost of Noise Reduction in Motor and Cognitive Control

Sanjay G. Manohar, Trevor T.-J. Chong, Matthew A.J. Apps, Amit Batla, Maria Stamelou, Paul R. Jarman, Kailash P. Bhatia, Masud Husain

Current Biology
Volume 25, Issue 13, p1707–1716, 29 June 2015

Reward motivates subjects to reduce motor/sensory noise (at a cost), which improves both the speed and the accuracy of movements.

Speed-accuracy trade-off is an intensively studied law governing almost all behavioral tasks across species. Here we show that motivation by reward breaks this law, by simultaneously invigorating movement and improving response precision. We devised a model to explain this paradoxical effect of reward by considering a new factor: the cost of control. Exerting control to improve response precision might itself come at a cost—a cost to attenuate a proportion of intrinsic neural noise. Applying a noise-reduction cost to optimal motor control predicted that reward can increase both velocity and accuracy. Similarly, application to decision-making predicted that reward reduces reaction times and errors in cognitive control. We used a novel saccadic distraction task to quantify the speed and accuracy of both movements and decisions under varying reward. Both faster speeds and smaller errors were observed with higher incentives, with the results best fitted by a model including a precision cost. Recent theories consider dopamine to be a key neuromodulator in mediating motivational effects of reward. We therefore examined how Parkinson’s disease (PD), a condition associated with dopamine depletion, alters the effects of reward. Individuals with PD showed reduced reward sensitivity in their speed and accuracy, consistent in our model with higher noise-control costs. Including a cost of control over noise explains how reward may allow apparent performance limits to be surpassed. On this view, the pattern of reduced reward sensitivity in PD patients can specifically be accounted for by a higher cost for controlling noise.
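
A minimal numerical sketch of the "cost of control" idea as described in the abstract: an agent chooses how much intrinsic noise to leave unsuppressed by trading the accuracy benefit of precision against a noise-reduction cost, so a larger reward buys a lower residual noise level and higher accuracy. The functional forms, constants, and the helper name net_value below are my own illustrative assumptions, not the paper's fitted model.

import numpy as np
from math import erf, sqrt

def net_value(sigma, reward, baseline_sigma=1.0, cost=0.15, target_halfwidth=0.5):
    # Illustrative sketch only: accuracy is the chance a Gaussian response with
    # residual noise sigma lands within the target; suppressing noise below the
    # baseline level carries a cost that grows as sigma shrinks.
    p_hit = erf(target_halfwidth / (sigma * sqrt(2)))
    control_cost = cost * (1.0 / sigma - 1.0 / baseline_sigma)
    return reward * p_hit - control_cost, p_hit

sigmas = np.linspace(0.1, 1.0, 181)
for reward in (1.0, 5.0):
    values, hits = zip(*(net_value(s, reward) for s in sigmas))
    best = int(np.argmax(values))
    print(f"reward {reward}: optimal residual noise ~= {sigmas[best]:.2f}, accuracy ~= {hits[best]:.2f}")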

Tuesday, July 14, 2015

Single-trial spike trains in parietal cortex reveal discrete steps during decision-making

Kenneth W. Latimer, Jacob L. Yates, Miriam L. R. Meister, Alexander C. Huk, Jonathan W. Pillow
Science 10 July 2015: Vol. 349 no. 6244 pp. 184-187

In perceptual decision-making, LIP neuron activity has been thought to reflect the continuous, gradual accumulation of evidence; instead, this study finds that it responds to the accumulating evidence in a discrete manner (the firing rate jumps once enough evidence has accumulated).

Neurons in the macaque lateral intraparietal (LIP) area exhibit firing rates that appear to ramp upward or downward during decision-making. These ramps are commonly assumed to reflect the gradual accumulation of evidence toward a decision threshold. However, the ramping in trial-averaged responses could instead arise from instantaneous jumps at different times on different trials. We examined single-trial responses in LIP using statistical methods for fitting and comparing latent dynamical spike-train models. We compared models with latent spike rates governed by either continuous diffusion-to-bound dynamics or discrete “stepping” dynamics. Roughly three-quarters of the choice-selective neurons we recorded were better described by the stepping model. Moreover, the inferred steps carried more information about the animal’s choice than spike counts.
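
A toy simulation (mine, not the authors' model-fitting code) of why averaging hides this distinction: if the latent rate jumps from a low to a high state at a random time on each trial, the trial-averaged firing rate still looks like a smooth ramp. All parameter values are arbitrary and chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins, dt = 200, 100, 0.01   # 1 s of simulated activity per trial

# Stepping model: each trial jumps from a low to a high rate at a random bin.
low, high = 5.0, 40.0                   # spikes/s before and after the step
step_times = rng.integers(10, 90, size=n_trials)
rates = np.full((n_trials, n_bins), low)
for i, t in enumerate(step_times):
    rates[i, t:] = high

spikes = rng.poisson(rates * dt)        # Poisson spike counts per bin
trial_avg = spikes.mean(axis=0) / dt    # the average "ramps" even though no single trial does

print("trial-averaged rate at bins 0, 20, 40, 60, 80, 99:",
      trial_avg[[0, 20, 40, 60, 80, 99]].round(1))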

Monday, July 13, 2015

Oxytocin Mediates Entrainment of Sensory Stimuli to Social Cues of Opposing Valence

Han Kyoung Choe, Michael Douglas Reed, Nora Benavidez, Daniel Montgomery, Natalie Soares, Yeong Shin Yim, Gloria B. Choi
Neuron. Volume 87, Issue 1, 1 July 2015, Pages 152–163

Oxytocin plays a crucial role in associative learning between a stimulus and a social reward (gaining access to the opposite sex).

Meaningful social interactions modify behavioral responses to sensory stimuli. The neural mechanisms underlying the entrainment of neutral sensory stimuli to salient social cues to produce social learning remain unknown. We used odor-driven behavioral paradigms to ask if oxytocin, a neuropeptide implicated in various social behaviors, plays a crucial role in the formation of learned associations between odor and socially significant cues. Through genetic, optogenetic, and pharmacological manipulations, we show that oxytocin receptor signaling is crucial for entrainment of odor to social cues but is dispensable for entrainment to nonsocial cues. Furthermore, we demonstrate that oxytocin directly impacts the piriform, the olfactory sensory cortex, to mediate social learning. Lastly, we provide evidence that oxytocin plays a role in both appetitive and aversive social learning. These results suggest that oxytocin conveys saliency of social stimuli to sensory representations in the piriform cortex during odor-driven social learning.

Monday, June 29, 2015

Orbitofrontal lesions eliminate signalling of biological significance in cue-responsive ventral striatal neurons

Nisha K. Cooch, Thomas A. Stalnaker, Heather M. Wied, Sheena Bali-Chaudhary, Michael A. McDannald, Tzu-Lan Liu & Geoffrey Schoenbaum
Nature Communications 6, Article number: 7195

Neurons in the rat ventral striatum are known to respond to reward-predictive cues in proportion to the size of the predicted reward. After lesions of the orbitofrontal cortex (OFC), they still respond to the cues themselves, but the scaling with reward size disappears.

The ventral striatum has long been proposed as an integrator of biologically significant associative information to drive actions. Although inputs from the amygdala and hippocampus have been much studied, the role of prominent inputs from orbitofrontal cortex (OFC) are less well understood. Here, we recorded single-unit activity from ventral striatum core in rats with sham or ipsilateral neurotoxic lesions of lateral OFC, as they performed an odour-guided spatial choice task. Consistent with prior reports, we found that spiking activity recorded in sham rats during cue sampling was related to both reward magnitude and reward identity, with higher firing rates observed for cues that predicted more reward. Lesioned rats also showed differential activity to the cues, but this activity was unbiased towards larger rewards. These data support a role for OFC in shaping activity in the ventral striatum to represent the biological significance of associative information in the environment.

Friday, June 26, 2015

A Sensitive and Specific Neural Signature for Picture-Induced Negative Affect

Chang LJ, Gianaros PJ, Manuck SB, Krishnan A, Wager TD (2015)
PLoS Biol 13(6): e1002180.

Ratings of (negative) emotion evoked by viewing pictures can be predicted with high accuracy (over 90% correct) using whole-brain fMRI MVPA (only about 1.6% of all voxels actually contribute). http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002180

Neuroimaging has identified many correlates of emotion but has not yet yielded brain representations predictive of the intensity of emotional experiences in individuals. We used machine learning to identify a sensitive and specific signature of emotional responses to aversive images. This signature predicted the intensity of negative emotion in individual participants in cross validation (n =121) and test (n = 61) samples (high–low emotion = 93.5% accuracy). It was unresponsive to physical pain (emotion–pain = 92% discriminative accuracy), demonstrating that it is not a representation of generalized arousal or salience. The signature was comprised of mesoscale patterns spanning multiple cortical and subcortical systems, with no single system necessary or sufficient for predicting experience. Furthermore, it was not reducible to activity in traditional “emotion-related” regions (e.g., amygdala, insula) or resting-state networks (e.g., “salience,” “default mode”). Overall, this work identifies differentiable neural components of negative emotion and pain, providing a basis for new, brain-based taxonomies of affective processes.

Thursday, June 25, 2015

Signatures of Value Comparison in Ventral Striatum Neurons

Strait CE, Sleezer BJ, Hayden BY (2015)
PLoS Biol 13(6): e1002173.

In value-based decision-making, neurons in the ventral striatum and the ventromedial prefrontal cortex encode similar reward information. http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002173

The ventral striatum (VS), like its cortical afferents, is closely associated with processing of rewards, but the relative contributions of striatal and cortical reward systems remains unclear. Most theories posit distinct roles for these structures, despite their similarities. We compared responses of VS neurons to those of ventromedial prefrontal cortex (vmPFC) Area 14 neurons, recorded in a risky choice task. Five major response patterns observed in vmPFC were also observed in VS: (1) offer value encoding, (2) value difference encoding, (3) preferential encoding of chosen relative to unchosen value, (4) a correlation between residual variance in responses and choices, and (5) prominent encoding of outcomes. We did observe some differences as well; in particular, preferential encoding of the chosen option was stronger and started earlier in VS than in vmPFC. Nonetheless, the close match between vmPFC and VS suggests that cortex and its striatal targets make overlapping contributions to economic choice.

Wednesday, May 13, 2015

Getting a refund after canceling “Amazon Prime” on Amazon.com

I meant to use Amazon Prime only during the free trial, but I forgot to cancel it.
I was startled to see $100 charged, but after canceling I got the money back without trouble (this works only if you haven't used any benefits after the free trial).
That was a close call...

Canceling was easy, following the steps here:
http://www.amazon.com/gp/help/customer/display.html?nodeId=201118010
If you haven't used any Prime benefits since being charged, you will get a refund.

Wednesday, May 6, 2015

Tax Return in the Third Year on a J-1 Visa (Continued)

This is a follow-up to my earlier post, "Tax Return in the Third Year on a J-1 Visa".

My wife's ITIN (taxpayer identification number) has arrived, so it's time to take care of the remaining task.
It's something called the
Report of Foreign Bank and Financial Accounts (FBAR),
that is, a declaration of the balances of bank accounts held outside the US.

For reference, I used the following page (in Japanese):
“アメリカ在住者必読!海外銀行口座残高を申告する Form 90-22.1 で大慌てした件”
http://www.theuslife.net/report-of-foreign-bank/

The report can be filed online at
http://bsaefiling.fincen.treas.gov/main.html
It's tedious, but it wasn't difficult.
(By the way, I filed mine and my wife's separately.)

I want to believe...
...that this is really the end of it.

Wednesday, April 22, 2015

Neural mechanisms underlying human consensus decision-making

The research project I started after coming to Caltech has been published in Neuron!

"Neural mechanisms underlying human consensus decision-making"
Shinsuke Suzuki, Ryo Adachi, Simon Dunne, Peter Bossaerts, John P. O’Doherty.
Neuron, Volume 86, Issue 2, p591–602, 22 April 2015
http://www.cell.com/neuron/abstract/S0896-6273(15)00215-9

We present a computational model of human collective decision-making (consensus building within a group) and its neural basis.
To my fellow researchers: please cite it!

Consensus building in a group is a hallmark of animal societies, yet little is known about its underlying computational and neural mechanisms. Here, we applied a computational framework to behavioral and fMRI data from human participants performing a consensus decision-making task with up to five other participants. We found that participants reached consensus decisions through integrating their own preferences with information about the majority group members’ prior choices, as well as inferences about how much each option was stuck to by the other people. These distinct decision variables were separately encoded in distinct brain areas—the ventromedial prefrontal cortex, posterior superior temporal sulcus/temporoparietal junction, and intraparietal sulcus—and were integrated in the dorsal anterior cingulate cortex. Our findings provide support for a theoretical account in which collective decisions are made through integrating multiple types of inference about oneself, others, and environments, processed in distinct brain modules.

Friday, March 27, 2015

Tax Return in the Third Year on a J-1 Visa

On a J visa, your US tax status in the first two years is completely different from the third year onward (please Google the details).
As a result, the tax return for the third year gets fairly complicated,
because you pass the two-full-years-in-the-US mark partway through the tax year.

Long story short, I didn't feel I could handle it myself, so I hired a professional.
(From next year on I'll be treated entirely as a resident for tax purposes, so maybe it gets simpler...?)

For reference, here's how it went:
・Make an appointment at a nearby branch of H&R Block (a nationwide chain of tax preparers)
・Explain my situation (visa type, etc.) to the assigned preparer
・She wasn't familiar with the complicated rules for foreign nationals, so she referred me to another preparer at the same branch
・Book another appointment on a different day
・The new preparer gets the paperwork done without a hitch
That was about it.

For what it's worth, the fee came to a little over $260, including insurance in case a penalty is assessed.
Not cheap, but given that I was buying time and peace of mind, I (for one) find it entirely acceptable.


Bonus:

My wife is on a J-2 visa and has no SSN, so she had to go to the local IRS office in person to obtain an ITIN (taxpayer identification number).
(The tax return documents are submitted in person at the same time.)

This turned out to be the tricky part...
Apparently you used to be able to get an ITIN with a notarized copy of your passport, but nowadays it seems you have to either mail in the original passport or show up at the IRS in person (probably).

So we went to the local IRS office at 8:15 a.m., and there was already a long line (the office opens at 8:30).
In the end, after more than five hours of waiting, the ITIN was finally issued at 1:30 p.m...
Of all the steps involved in filing the tax return, this was by far the most exhausting.
Frankly, I was amazed that a five-hour wait at a government office is even possible in a developed country.

Dopamine Modulates Egalitarian Behavior in Humans

Ignacio Sáez, Lusha Zhu, Eric Set, Andrew Kayser, Ming Hsu
Current Biology, in press

Pharmacologically boosting dopamine increases egalitarian tendencies in humans (other prosocial tendencies, such as generosity, are unchanged).

Egalitarian motives form a powerful force in promoting prosocial behavior and enabling large-scale cooperation in the human species [1]. At the neural level, there is substantial, albeit correlational, evidence suggesting a link between dopamine and such behavior [2 and 3]. However, important questions remain about the specific role of dopamine in setting or modulating behavioral sensitivity to prosocial concerns. Here, using a combination of pharmacological tools and economic games, we provide critical evidence for a causal involvement of dopamine in human egalitarian tendencies. Specifically, using the brain penetrant catechol-O-methyl transferase (COMT) inhibitor tolcapone [4 and 5], we investigated the causal relationship between dopaminergic mechanisms and two prosocial concerns at the core of a number of widely used economic games: (1) the extent to which individuals directly value the material payoffs of others, i.e., generosity, and (2) the extent to which they are averse to differences between their own payoffs and those of others, i.e., inequity. We found that dopaminergic augmentation via COMT inhibition increased egalitarian tendencies in participants who played an extended version of the dictator game [6]. Strikingly, computational modeling of choice behavior [7] revealed that tolcapone exerted selective effects on inequity aversion, and not on other computational components such as the extent to which individuals directly value the material payoffs of others. Together, these data shed light on the causal relationship between neurochemical systems and human prosocial behavior and have potential implications for our understanding of the complex array of social impairments accompanying neuropsychiatric disorders involving dopaminergic dysregulation.
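
For readers who want to see how "valuing others' payoffs" and "inequity aversion" can be separated computationally, here is a generic Fehr-Schmidt-style utility sketch for dictator-game allocations. This is a standard textbook-style formulation with parameter names I chose (generosity, envy, guilt); it is not necessarily the exact model fitted in the paper.

def social_utility(own, other, generosity=0.1, envy=0.5, guilt=0.3):
    # Illustrative Fehr-Schmidt-style utility (assumed parameterization):
    # own payoff + weight on other's payoff - penalties for the two kinds of inequity.
    disadvantageous = max(other - own, 0)   # the other gets more than me
    advantageous = max(own - other, 0)      # I get more than the other
    return own + generosity * other - envy * disadvantageous - guilt * advantageous

# Raising the advantageous-inequity ("guilt") weight makes the equal split win,
# without changing how much the other's payoff is valued per se.
for guilt in (0.1, 0.6):
    options = {"keep 8/2": social_utility(8, 2, guilt=guilt),
               "split 5/5": social_utility(5, 5, guilt=guilt)}
    print(f"guilt weight {guilt}: choose {max(options, key=options.get)}  {options}")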

Thursday, March 26, 2015

Characterizing the Associative Content of Brain Structures Involved in Habitual and Goal-Directed Actions in Humans: A Multivariate fMRI Study

Daniel McNamee, Mimi Liljeholm, Ondrej Zika, and John P. O'Doherty
The Journal of Neuroscience, 4 March 2015, 35(9):3764-3771;
doi:10.1523/JNEUROSCI.4677-14.2015

Can information about the upcoming response be decoded from stimulus-evoked brain activity?
Can information about the outcome (reward) be decoded from stimulus-evoked brain activity?

In the dorsolateral striatum, which is thought to underlie habitual behavior, only the former is possible.
In the regions thought to underlie goal-directed behavior, both are possible.

While there is accumulating evidence for the existence of distinct neural systems supporting goal-directed and habitual action selection in the mammalian brain, much less is known about the nature of the information being processed in these different brain regions. Associative learning theory predicts that brain systems involved in habitual control, such as the dorsolateral striatum, should contain stimulus and response information only, but not outcome information, while regions involved in goal-directed action, such as ventromedial and dorsolateral prefrontal cortex and dorsomedial striatum, should be involved in processing information about outcomes as well as stimuli and responses. To test this prediction, human participants underwent fMRI while engaging in a binary choice task designed to enable the separate identification of these different representations with a multivariate classification analysis approach. Consistent with our predictions, the dorsolateral striatum contained information about responses but not outcomes at the time of an initial stimulus, while the regions implicated in goal-directed action selection contained information about both responses and outcomes. These findings suggest that differential contributions of these regions to habitual and goal-directed behavioral control may depend in part on basic differences in the type of information that these regions have access to at the time of decision making.

Wednesday, March 25, 2015

Sensitivity of Locus Ceruleus Neurons to Reward Value for Goal-Directed Actions

Sebastien Bouret and Barry J. Richmond
The Journal of Neuroscience, 4 March 2015, 35(9):4005-4014.

A study showing that noradrenergic neurons in the locus ceruleus respond to reward.

The noradrenergic nucleus locus ceruleus (LC) is associated classically with arousal and attention. Recent data suggest that it might also play a role in motivation. To study how LC neuronal responses are related to motivational intensity, we recorded 121 single neurons from two monkeys while reward size (one, two, or four drops) and the manner of obtaining reward (passive vs active) were both manipulated. The monkeys received reward under three conditions: (1) releasing a bar when a visual target changed color; (2) passively holding a bar; or (3) touching and releasing a bar. In the first two conditions, a visual cue indicated the size of the upcoming reward, and, in the third, the reward was constant through each block of 25 trials. Performance levels and lipping intensity (an appetitive behavior) both showed that the monkeys' motivation in the task was related to the predicted reward size. In conditions 1 and 2, LC neurons were activated phasically in relation to cue onset, and this activation strengthened with increasing expected reward size. In conditions 1 and 3, LC neurons were activated before the bar-release action, and the activation weakened with increasing expected reward size but only in task 1. These effects evolved as monkeys progressed through behavioral sessions, because increasing fatigue and satiety presumably progressively decreased the value of the upcoming reward. These data indicate that LC neurons integrate motivationally relevant information: both external cues and internal drives. The LC might provide the impetus to act when the predicted outcome value is low.

Tuesday, March 24, 2015

A Neural Implementation of Wald’s Sequential Probability Ratio Test

Shinichiro Kira, Tianming Yang, Michael N. Shadlen
Neuron, 85, 861-873 (2015)

A neural mechanism for forming decisions by combining pieces of evidence of differing reliability.
A commentary in Japanese by the authors themselves is available here:
http://first.lifesciencedb.jp/archives/9864

Difficult decisions often require evaluation of samples of evidence acquired sequentially. A sensible strategy is to accumulate evidence, weighted by its reliability, until sufficient support is attained. An optimal statistical approach would accumulate evidence in units of logarithms of likelihood ratios (logLR) to a desired level. Studies of perceptual decisions suggest that the brain approximates an analogous procedure, but a direct test of accumulation, in units of logLR, to a threshold in units of cumulative logLR is lacking. We trained rhesus monkeys to make decisions based on a sequence of evanescent, visual cues assigned different logLR, hence different reliability. Firing rates of neurons in the lateral intraparietal area (LIP) reflected the accumulation of logLR and reached a stereotyped level before the monkeys committed to a decision. The monkeys’ choices and reaction times, including their variability, were explained by LIP activity in the context of accumulation of logLR to a threshold.
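
As a reminder of the normative computation the recordings are compared against, here is a minimal sketch of Wald's SPRT: sum the log-likelihood ratios of sequentially sampled cues until a fixed threshold is crossed. The cue probabilities and the threshold below are invented for illustration; they are not the shapes or weights used in the actual experiment.

import numpy as np

rng = np.random.default_rng(1)

# Toy generative model: four cue types, each more likely under target A or target B.
p_cue_A = np.array([0.40, 0.30, 0.20, 0.10])   # P(cue | A is correct)
p_cue_B = np.array([0.10, 0.20, 0.30, 0.40])   # P(cue | B is correct)
logLR = np.log(p_cue_A / p_cue_B)              # evidence carried by each cue

def sprt_trial(true_is_A=True, threshold=2.0):
    # Accumulate logLR over sequentially sampled cues until |sum| reaches the threshold.
    p = p_cue_A if true_is_A else p_cue_B
    total, n = 0.0, 0
    while abs(total) < threshold:
        cue = rng.choice(len(p), p=p)
        total += logLR[cue]
        n += 1
    return total >= threshold, n               # (chose A?, number of cues used)

results = [sprt_trial() for _ in range(1000)]
accuracy = np.mean([chose_a for chose_a, _ in results])
mean_cues = np.mean([n for _, n in results])
print(f"choice accuracy: {accuracy:.2f}, mean cues to decision: {mean_cues:.1f}")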

Monday, March 23, 2015

Automatic versus Choice-Dependent Value Representations in the Human Brain

Marcus Grueschow, Rafael Polania, Todd A. Hare, Christian C. Ruff
Neuron, Volume 85, Issue 4, 18 February 2015, Pages 874–885

Values computed in order to make a choice are held in the medial prefrontal cortex (mPFC), whereas values computed automatically, regardless of whether a choice is required, are held in the posterior cingulate cortex (PCC).

The subjective values of choice options can impact on behavior in two fundamentally different types of situations: first, when people explicitly base their actions on such values, and second, when values attract attention despite being irrelevant for current behavior. Here we show with functional magnetic resonance imaging (fMRI) that these two behavioral functions of values are encoded in distinct regions of the human brain. In the medial prefrontal cortex, value-related activity is enhanced when subjective value becomes choice-relevant, and the magnitude of this increase relates directly to the outcome and reliability of the value-based choice. In contrast, activity in the posterior cingulate cortex represents values similarly when they are relevant or irrelevant for the present choice, and the strength of this representation predicts attentional capture by choice-irrelevant values. Our results suggest that distinct components of the brain’s valuation network encode value in context-dependent manners that serve fundamentally different behavioral aims.

Sunday, March 22, 2015

Focus on the success of others leads to selfish behavior

Pieter van den Berg, Lucas Molleman, and Franz J. Weissing

PNAS March 3, 2015 vol. 112 no. 9 2912-2917

When people decide on their own strategy by observing others, some imitate the behavior of the most successful individual, while others imitate the behavior adopted by the majority.
In social dilemma situations, the latter type cooperates more often.

It has often been argued that the spectacular cognitive capacities of humans are the result of selection for the ability to gather, process, and use information about other people. Recent studies show that humans strongly and consistently differ in what type of social information they are interested in. Although some individuals mainly attend to what the majority is doing (frequency-based learning), others focus on the success that their peers achieve with their behavior (success-based learning). Here, we show that such differences in social learning have important consequences for the outcome of social interactions. We report on a decision-making experiment in which individuals were first classified as frequency- and success-based learners and subsequently grouped according to their learning strategy. When confronted with a social dilemma situation, groups of frequency-based learners cooperated considerably more than groups of success-based learners. A detailed analysis of the decision-making process reveals that these differences in cooperation are a direct result of the differences in information use. Our results show that individual differences in social learning strategies are crucial for understanding social behavior.

Friday, March 20, 2015

Observation of Reward Delivery to a Conspecific Modulates Dopamine Release in Ventral Striatum

Vadim Kashtelyan, Nina T. Lichtenberg, Mindy L. Chen, Joseph F. Cheer, Matthew R. Roesch
Current Biology, Volume 24, Issue 21, 3 November 2014, Pages 2564–2568

Observing another individual receiving reward modulates dopamine release in the ventral striatum.

Dopamine (DA) neurons increase and decrease firing for rewards that are better and worse than expected, respectively. These correlates have been observed at the level of single-unit firing and in measurements of phasic DA release in ventral striatum (VS) [1, 2, 3, 4, 5, 6, 7, 8, 9 and 10]. Here, we ask whether DA release is modulated by delivery of reward, not to oneself, but to a conspecific. It is unknown what, if anything, DA release encodes during social situations in which one animal witnesses another animal receive reward. It might be predicted that DA release will increase, suggesting that watching a conspecific receive reward is a favorable outcome. Conversely, DA release may be entirely dependent on personal experience, or perhaps observation of receipt of reward might be experienced as a negative outcome because another individual, rather than oneself, receives the reward. Our data show that animals display a mixture of affective states during observation of conspecific reward, first exhibiting increases in appetitive calls (50 kHz), then exhibiting increases in aversive calls (22 kHz) [11, 12, 13 and 14]. Like ultrasonic vocalizations (USVs), DA signals were modulated by delivery of reward to the conspecific. We show stronger DA release during observation of the conspecific receiving reward relative to observation of reward delivered to an empty box, but only on the first trial. During the following trials, this relationship reversed: DA release was reduced during observation of the conspecific receiving reward. These findings suggest that positive and negative states associated with conspecific reward delivery modulate DA signals related to learning in social situations.

Thursday, March 19, 2015

Unconscious information changes decision accuracy but not confidence

Alexandra Vlassova, Chris Donkin, and Joel Pearson
PNAS November 11, 2014 vol. 111 no. 45 16214-16218

Information that does not reach conscious awareness improves the accuracy of perceptual decisions, but does not affect confidence.

The controversial idea that information can be processed and evaluated unconsciously to change behavior has had a particularly impactful history. Here, we extend a simple model of conscious decision-making to explain both conscious and unconscious accumulation of decisional evidence. Using a novel dichoptic suppression paradigm to titrate conscious and unconscious evidence, we show that unconscious information can be accumulated over time and integrated with conscious elements presented either before or after to boost or diminish decision accuracy. The unconscious information could only be used when some conscious decision-relevant information was also present. These data are fit well by a simple diffusion model in which the rate and variability of evidence accumulation is reduced but not eliminated by the removal of conscious awareness. Surprisingly, the unconscious boost in accuracy was not accompanied by corresponding increases in confidence, suggesting that we have poor metacognition for unconscious decisional evidence.
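
A small simulation of the kind of diffusion model the abstract describes, in which "unconscious" evidence contributes a reduced but nonzero drift: adding the weak extra drift raises accuracy. The drift rates, bound, and noise level are arbitrary values chosen for illustration, not fitted parameters.

import numpy as np

rng = np.random.default_rng(3)

def ddm_accuracy(drift, bound=1.0, noise=1.0, dt=0.001, t_max=3.0, n_trials=5000):
    # Fraction of simulated trials whose accumulated evidence hits the correct (upper) bound first.
    n_steps = int(t_max / dt)
    x = np.zeros(n_trials)
    outcome = np.zeros(n_trials)                 # +1 correct, -1 error, 0 still undecided
    for _ in range(n_steps):
        active = outcome == 0
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(int(active.sum()))
        outcome[active & (x >= bound)] = 1
        outcome[active & (x <= -bound)] = -1
    return (outcome == 1).mean()

print("conscious evidence only         :", ddm_accuracy(drift=0.8))
print("plus weak 'unconscious' evidence:", ddm_accuracy(drift=0.8 + 0.4))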

Wednesday, March 18, 2015

Dynamic routing of task-relevant signals for decision making in dorsolateral prefrontal cortex

Christopher H Donahue & Daeyeol Lee
Nature Neuroscience 18, 295–301 (2015)

The dorsolateral prefrontal cortex (dlPFC) is known to respond to all sorts of task variables, so how does it single out the ones that actually matter?
→ The dlPFC encodes both task-relevant and task-irrelevant variables, but only the task-relevant ones are encoded congruently with choice/decision signals.

Neurons in the dorsolateral prefrontal cortex (DLPFC) encode a diverse array of sensory and mnemonic signals, but little is known about how this information is dynamically routed during decision making. We analyzed the neuronal activity in the DLPFC of monkeys performing a probabilistic reversal task where information about the probability and magnitude of reward was provided by the target color and numerical cues, respectively. The location of the target of a given color was randomized across trials and therefore was not relevant for subsequent choices. DLPFC neurons encoded signals related to both task-relevant and irrelevant features, but only task-relevant mnemonic signals were encoded congruently with choice signals. Furthermore, only the task-relevant signals related to previous events were more robustly encoded following rewarded outcomes. Thus, multiple types of neural signals are flexibly routed in the DLPFC so as to favor actions that maximize reward.

Tuesday, March 17, 2015

A Neural Circuit Covarying with Social Hierarchy in Macaques

MaryAnn P. Noonan, Jerome Sallet, Rogier B. Mars, Franz X. Neubert, Jill X. O'Reilly, Jesper L. Andersson, Anna S. Mitchell, Andrew H. Bell, Karla L. Miller, Matthew F. S. Rushworth
PLoS Biol 12(9): e1001940.

The neural basis of social hierarchy in a group of macaques.
Regions including the amygdala, brainstem, and striatum covary with each individual's social status.

Despite widespread interest in social dominance, little is known of its neural correlates in primates. We hypothesized that social status in primates might be related to individual variation in subcortical brain regions implicated in other aspects of social and emotional behavior in other mammals. To examine this possibility we used magnetic resonance imaging (MRI), which affords the taking of quantitative measurements noninvasively, both of brain structure and of brain function, across many regions simultaneously. We carried out a series of tests of structural and functional MRI (fMRI) data in 25 group-living macaques. First, a deformation-based morphometric (DBM) approach was used to show that gray matter in the amygdala, brainstem in the vicinity of the raphe nucleus, and reticular formation, hypothalamus, and septum/striatum of the left hemisphere was correlated with social status. Second, similar correlations were found in the same areas in the other hemisphere. Third, similar correlations were found in a second data set acquired several months later from a subset of the same animals. Fourth, the strength of coupling between fMRI-measured activity in the same areas was correlated with social status. The network of subcortical areas, however, had no relationship with the sizes of individuals' social networks, suggesting the areas had a simple and direct relationship with social status. By contrast a second circuit in cortex, comprising the midsuperior temporal sulcus and anterior and dorsal prefrontal cortex, covaried with both individuals' social statuses and the social network sizes they experienced. This cortical circuit may be linked to the social cognitive processes that are taxed by life in more complex social networks and that must also be used if an animal is to achieve a high social status.

Monday, March 16, 2015

States of Curiosity Modulate Hippocampus-Dependent Learning via the Dopaminergic Circuit

Matthias J. Gruber, Bernard D. Gelman, Charan Ranganath
Neuron, Volume 84, Issue 2, 22 October 2014, Pages 486–496

Things we are curious about are easier to remember.
When curiosity is high, the midbrain is active, and functional connectivity between the midbrain and the hippocampus plays an important role in memory.

People find it easier to learn about topics that interest them, but little is known about the mechanisms by which intrinsic motivational states affect learning. We used functional magnetic resonance imaging to investigate how curiosity (intrinsic motivation to learn) influences memory. In both immediate and one-day-delayed memory tests, participants showed improved memory for information that they were curious about and for incidental material learned during states of high curiosity. Functional magnetic resonance imaging results revealed that activity in the midbrain and the nucleus accumbens was enhanced during states of high curiosity. Importantly, individual variability in curiosity-driven memory benefits for incidental material was supported by anticipatory activity in the midbrain and hippocampus and by functional connectivity between these regions. These findings suggest a link between the mechanisms supporting extrinsic reward motivation and intrinsic curiosity and highlight the importance of stimulating curiosity to create more effective learning experiences.

Sunday, March 15, 2015

Neural correlates of strategic reasoning during competitive games

Hyojung Seo, Xinying Cai, Christopher H. Donahue, Daeyeol Lee
Science 17 October 2014:
Vol. 346 no. 6207 pp. 340-343

Monkey electrophysiology.
In strategic situations, behavior often deviates from simple reinforcement learning, and the dorsomedial prefrontal cortex (dmPFC) is involved in those deviations.

Although human and animal behaviors are largely shaped by reinforcement and punishment, choices in social settings are also influenced by information about the knowledge and experience of other decision-makers. During competitive games, monkeys increased their payoffs by systematically deviating from a simple heuristic learning algorithm and thereby countering the predictable exploitation by their computer opponent. Neurons in the dorsomedial prefrontal cortex (dmPFC) signaled the animal’s recent choice and reward history that reflected the computer’s exploitative strategy. The strength of switching signals in the dmPFC also correlated with the animal’s tendency to deviate from the heuristic learning algorithm. Therefore, the dmPFC might provide control signals for overriding simple heuristic learning algorithms based on the inferred strategies of the opponent.

Saturday, March 14, 2015

Orbitofrontal Cortex Uses Distinct Codes for Different Choice Attributes in Decisions Motivated by Curiosity

Tommy C. Blanchard, Benjamin Y. Hayden, Ethan S. Bromberg-Martin
Neuron, Volume 85, Issue 3, 4 February 2015, Pages 602–614

In curiosity-driven decisions, orbitofrontal cortex (OFC) neurons encode the individual variables relevant to the decision, but not an integrated value combining them.
→ This suggests that OFC is involved at a relatively early stage of the decision process.

Decision makers are curious and consequently value advance information about future events. We made use of this fact to test competing theories of value representation in area 13 of orbitofrontal cortex (OFC). In a new task, we found that monkeys reliably sacrificed primary reward (water) to view advance information about gamble outcomes. While monkeys integrated information value with primary reward value to make their decisions, OFC neurons had no systematic tendency to integrate these variables, instead encoding them in orthogonal manners. These results suggest that the predominant role of the OFC is to encode variables relevant for learning, attention, and decision making, rather than integrating them into a single scale of value. They also suggest that OFC may be placed at a relatively early stage in the hierarchy of information-seeking decisions, before evaluation is complete. Thus, our results delineate a circuit for information-seeking decisions and suggest a neural basis for curiosity.

Friday, March 13, 2015

Functionally Dissociable Influences on Learning Rate in a Dynamic Environment

Joseph T. McGuire, Matthew R. Nassar, Joshua I. Gold, Joseph W. Kable
Neuron, Volume 84, Issue 4, 19 November 2014, Pages 870–881

Optimal learning requires appropriately adjusting the learning rate.
Three factors drive this adjustment: the degree of surprise, uncertainty, and reward.
These factors are processed in visual cortex, prefrontal/parietal cortex, and ventral striatum, respectively.

Maintaining accurate beliefs in a changing environment requires dynamically adapting the rate at which one learns from new experiences. Beliefs should be stable in the face of noisy data but malleable in periods of change or uncertainty. Here we used computational modeling, psychophysics, and fMRI to show that adaptive learning is not a unitary phenomenon in the brain. Rather, it can be decomposed into three computationally and neuroanatomically distinct factors that were evident in human subjects performing a spatial-prediction task: (1) surprise-driven belief updating, related to BOLD activity in visual cortex; (2) uncertainty-driven belief updating, related to anterior prefrontal and parietal activity; and (3) reward-driven belief updating, a context-inappropriate behavioral tendency related to activity in ventral striatum. These distinct factors converged in a core system governing adaptive learning. This system, which included dorsomedial frontal cortex, responded to all three factors and predicted belief updating both across trials and across individuals.
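
To make the learning-rate idea concrete, here is a rough heuristic sketch (not the paper's normative model): a delta rule whose learning rate scales with how surprising the latest outcome is tracks a change-prone environment better than a delta rule with a fixed learning rate. All constants, including the change-point probability and the surprise-to-learning-rate mapping, are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(4)

true_mean, noise_sd = 0.0, 1.0
belief, fixed_belief, expected_err = 0.0, 0.0, noise_sd
adaptive_abs_err, fixed_abs_err = [], []

for t in range(1000):
    if rng.random() < 0.05:                      # occasional change-point in the hidden mean
        true_mean = rng.uniform(-10, 10)
    obs = true_mean + noise_sd * rng.standard_normal()

    err = obs - belief
    surprise = abs(err) / expected_err           # was this outcome much larger than typical errors?
    lr = min(1.0, 0.1 * surprise)                # learning rate grows with surprise
    belief += lr * err
    expected_err += 0.1 * (abs(err) - expected_err)

    fixed_belief += 0.1 * (obs - fixed_belief)   # constant learning rate, for comparison

    adaptive_abs_err.append(abs(belief - true_mean))
    fixed_abs_err.append(abs(fixed_belief - true_mean))

print("mean tracking error, adaptive learning rate:", round(float(np.mean(adaptive_abs_err)), 2))
print("mean tracking error, fixed learning rate   :", round(float(np.mean(fixed_abs_err)), 2))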

Thursday, March 12, 2015

Learning To Minimize Efforts versus Maximizing Rewards: Computational Principles and Neural Correlates

Vasilisa Skvortsova, Stefano Palminteri, and Mathias Pessiglione
The Journal of Neuroscience, 19 November 2014, 34(47): 15621-15630

The neural basis of learning to maximize reward versus learning to minimize effort.
The former involves the ventromedial prefrontal cortex (vmPFC); the latter involves regions including the anterior insula and the dorsal anterior cingulate cortex (dACC).

The mechanisms of reward maximization have been extensively studied at both the computational and neural levels. By contrast, little is known about how the brain learns to choose the options that minimize action cost. In principle, the brain could have evolved a general mechanism that applies the same learning rule to the different dimensions of choice options. To test this hypothesis, we scanned healthy human volunteers while they performed a probabilistic instrumental learning task that varied in both the physical effort and the monetary outcome associated with choice options. Behavioral data showed that the same computational rule, using prediction errors to update expectations, could account for both reward maximization and effort minimization. However, these learning-related variables were encoded in partially dissociable brain areas. In line with previous findings, the ventromedial prefrontal cortex was found to positively represent expected and actual rewards, regardless of effort. A separate network, encompassing the anterior insula, the dorsal anterior cingulate, and the posterior parietal cortex, correlated positively with expected and actual efforts. These findings suggest that the same computational rule is applied by distinct brain systems, depending on the choice dimension—cost or benefit—that has to be learned.
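
A minimal sketch of the behavioral result that one and the same prediction-error rule can serve both dimensions: a delta rule updates expected reward, an identical delta rule updates expected effort, and choices depend on their difference. The learning rate, softmax temperature, and the simple "reward minus effort" net value below are my assumptions, not the fitted model.

import numpy as np

rng = np.random.default_rng(2)
alpha = 0.2                                   # shared learning rate for both attributes
Q_reward = np.zeros(2)                        # learned reward expectation per option
Q_effort = np.zeros(2)                        # learned effort expectation per option
true_p_reward = np.array([0.8, 0.2])
true_p_effort = np.array([0.8, 0.2])          # option 0 is rewarding but effortful

def softmax(x, beta=3.0):
    e = np.exp(beta * (x - x.max()))
    return e / e.sum()

for t in range(500):
    net_value = Q_reward - Q_effort           # benefit minus cost
    choice = rng.choice(2, p=softmax(net_value))
    reward = rng.random() < true_p_reward[choice]
    effort = rng.random() < true_p_effort[choice]
    Q_reward[choice] += alpha * (reward - Q_reward[choice])   # reward prediction error update
    Q_effort[choice] += alpha * (effort - Q_effort[choice])   # effort prediction error update

print("learned reward expectations:", Q_reward.round(2))
print("learned effort expectations:", Q_effort.round(2))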

Wednesday, March 11, 2015

Dopamine-associated cached values are not sufficient as the basis for action selection

Nick G. Hollon, Monica M. Arnold, Jerylin O. Gan, Mark E. Walton, and Paul E. M. Phillips
PNAS December 23, 2014 vol. 111 no. 51 18357-18362

In decision-making, dopamine in the nucleus accumbens reflects learned ("cached") values, but not the utility that integrates value with cost.

Phasic dopamine transmission is posited to act as a critical teaching signal that updates the stored (or “cached”) values assigned to reward-predictive stimuli and actions. It is widely hypothesized that these cached values determine the selection among multiple courses of action, a premise that has provided a foundation for contemporary theories of decision making. In the current work we used fast-scan cyclic voltammetry to probe dopamine-associated cached values from cue-evoked dopamine release in the nucleus accumbens of rats performing cost–benefit decision-making paradigms to evaluate critically the relationship between dopamine-associated cached values and preferences. By manipulating the amount of effort required to obtain rewards of different sizes, we were able to bias rats toward preferring an option yielding a high-value reward in some sessions and toward instead preferring an option yielding a low-value reward in others. Therefore, this approach permitted the investigation of dopamine-associated cached values in a context in which reward magnitude and subjective preference were dissociated. We observed greater cue-evoked mesolimbic dopamine release to options yielding the high-value reward even when rats preferred the option yielding the low-value reward. This result identifies a clear mismatch between the ordinal utility of the available options and the rank ordering of their cached values, thereby providing robust evidence that dopamine-associated cached values cannot be the sole determinant of choices in simple economic decision making.

Tuesday, March 10, 2015

A category-free neural population supports evolving demands during decision-making

David Raposo, Matthew T Kaufman & Anne K Churchland
Nature Neuroscience 17, 1784–1792 (2014)

The posterior parietal cortex contributes to perceptual decision-making irrespective of stimulus category (visual or auditory).

The posterior parietal cortex (PPC) receives diverse inputs and is involved in a dizzying array of behaviors. These many behaviors could rely on distinct categories of neurons specialized to represent particular variables or could rely on a single population of PPC neurons that is leveraged in different ways. To distinguish these possibilities, we evaluated rat PPC neurons recorded during multisensory decisions. Newly designed tests revealed that task parameters and temporal response features were distributed randomly across neurons, without evidence of categories. This suggests that PPC neurons constitute a dynamic network that is decoded according to the animal's present needs. To test for an additional signature of a dynamic network, we compared moments when behavioral demands differed: decision and movement. Our new state-space analysis revealed that the network explored different dimensions during decision and movement. These observations suggest that a single network of neurons can support the evolving behavioral demands of decision-making.

Monday, March 9, 2015

Activation of Dorsal Raphe Serotonergic Neurons Promotes Waiting but Is Not Reinforcing

Madalena S. Fonseca, Masayoshi Murakami, Zachary F. Mainen
Current Biology, Volume 25, Issue 3, 2 February 2015, Pages 306–315

Activity of serotonergic neurons in the dorsal raphe nucleus is important for waiting until reward is delivered, but it is not reinforcing.

Background
The central neuromodulator serotonin (5-HT) has been implicated in a wide range of behaviors and affective disorders, but the principles underlying its function remain elusive. One influential line of research has implicated 5-HT in response inhibition and impulse control. Another has suggested a role in affective processing. However, whether and how these effects relate to each other is still unclear.

Results
Here, we report that optogenetic activation of 5-HT neurons in the dorsal raphe nucleus (DRN) produces a dose-dependent increase in mice’s ability to withhold premature responding in a task that requires them to wait several seconds for a randomly delayed tone. The 5-HT effect had a rapid onset and was maintained throughout the stimulation period. In addition, movement speed was slowed, but photostimulation did not affect reaction time or time spent at the reward port. Using similar photostimulation protocols in place preference and value-based choice tests, we found no evidence of either appetitive or aversive effects of DRN 5-HT neuron activation.

Conclusions
These results provide strong evidence that the efficacy of DRN 5-HT neurons in promoting waiting for delayed reward is independent of appetitive or aversive effects and support the importance of 5-HT in behavioral persistence and impulse control.

Sunday, March 8, 2015

Neural Mechanisms Underlying Contextual Dependency of Subjective Values: Converging Evidence from Monkeys and Humans

Raphaëlle Abitbol, Maël Lebreton, Guillaume Hollard, Barry J. Richmond, Sébastien Bouret, and Mathias Pessiglione
The Journal of Neuroscience, 4 February 2015, 35(5): 2308-2320

A comparison of subjective-value representations in the ventromedial prefrontal cortex (vmPFC) between humans and monkeys.

A major challenge for decision theory is to account for the instability of expressed preferences across time and context. Such variability could arise from specific properties of the brain system used to assign subjective values. Growing evidence has identified the ventromedial prefrontal cortex (VMPFC) as a key node of the human brain valuation system. Here, we first replicate this observation with an fMRI study in humans showing that subjective values of painting pictures, as expressed in explicit pleasantness ratings, are specifically encoded in the VMPFC. We then establish a bridge with monkey electrophysiology, by comparing single-unit activity evoked by visual cues between the VMPFC and the orbitofrontal cortex. At the neural population level, expected reward magnitude was only encoded in the VMPFC, which also reflected subjective cue values, as expressed in Pavlovian appetitive responses. In addition, we demonstrate in both species that the additive effect of prestimulus activity on evoked activity has a significant impact on subjective values. In monkeys, the factor dominating prestimulus VMPFC activity was trial number, which likely indexed variations in internal dispositions related to fatigue or satiety. In humans, prestimulus VMPFC activity was externally manipulated through changes in the musical context, which induced a systematic bias in subjective values. Thus, the apparent stochasticity of preferences might relate to the VMPFC automatically aggregating the values of contextual features, which would bias subsequent valuation because of temporal autocorrelation in neural activity.

Saturday, March 7, 2015

Interactions between Dorsolateral and Ventromedial Prefrontal Cortex Underlie Context-Dependent Stimulus Valuation in Goal-Directed Choice

Sarah Rudorf and Todd A. Hare
The Journal of Neuroscience, 26 November 2014, 34(48): 15988-15996

Interactions between the ventromedial prefrontal cortex (vmPFC), which processes value, and the dorsolateral prefrontal cortex (dlPFC), which processes context, are important for decision-making.

External circumstances and internal bodily states often change and require organisms to flexibly adapt valuation processes to select the optimal action in a given context. Here, we investigate the neurobiology of context-dependent valuation in 22 human subjects using functional magnetic resonance imaging. Subjects made binary choices between visual stimuli with three attributes (shape, color, and pattern) that were associated with monetary values. Context changes required subjects to deviate from the default shape valuation and to integrate a second attribute to comply with the goal to maximize rewards. Critically, this binary choice task did not involve any conflict between opposing monetary, temporal, or social preferences. We tested the hypothesis that interactions between regions of dorsolateral prefrontal cortex (dlPFC) and ventromedial prefrontal cortex (vmPFC) implicated in self-control choices would also underlie the more general function of context-dependent valuation. Consistent with this idea, we found that the degree to which stimulus attributes were reflected in vmPFC activity varied as a function of context. In addition, activity in dlPFC increased when context changes required a reweighting of stimulus attribute values. Moreover, the strength of the functional connectivity between dlPFC and vmPFC was associated with the degree of context-specific attribute valuation in vmPFC at the time of choice. Our findings suggest that functional interactions between dlPFC and vmPFC are a key aspect of context-dependent valuation and that the role of this network during choices that require self-control to adjudicate between competing outcome preferences is a specific application of this more general neural mechanism.

Friday, March 6, 2015

Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

Chih-Chung Ting, Chia-Chen Yu, Laurence T. Maloney and Shih-Wei Wu
The Journal of Neuroscience, 28 January 2015, 35(4): 1792-1805

The neural basis of Bayesian inference.
The medial prefrontal cortex (mPFC) encodes both prior information and the likelihood.
The posterior distribution that integrates the two is also encoded in mPFC.

In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information.
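
A tiny worked example of the prior-likelihood integration being probed, assuming a Beta prior over reward probability and binomially sampled evidence (my choice of parameterization, not necessarily the paper's): as the sample size grows, the posterior mean moves away from the prior and toward the observed reward rate.

def posterior_mean(prior_a, prior_b, successes, n_samples):
    # Beta(prior_a, prior_b) prior combined with a binomial likelihood; illustrative only.
    post_a = prior_a + successes
    post_b = prior_b + (n_samples - successes)
    return post_a / (post_a + post_b)

prior_a, prior_b = 6, 2            # prior belief: reward probability around 0.75
for n in (4, 16, 64):              # larger samples -> likelihood dominates
    successes = n // 4             # observed reward rate of 0.25
    print(n, "samples -> posterior mean =", round(posterior_mean(prior_a, prior_b, successes, n), 3))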

Thursday, March 5, 2015

Orbitofrontal Cortex Is Required for Optimal Waiting Based on Decision Confidence

Armin Lak, Gil M. Costa, Erin Romberg, Alexei A. Koulakov, Zachary F. Mainen, Adam Kepecs
Neuron
Volume 84, Issue 1, 1 October 2014, Pages 190–201

The orbitofrontal cortex is involved in metacognition, specifically in monitoring confidence in one's own perceptual decisions.

Confidence judgments are a central example of metacognition—knowledge about one’s own cognitive processes. According to this metacognitive view, confidence reports are generated by a second-order monitoring process based on the quality of internal representations about beliefs. Although neural correlates of decision confidence have been recently identified in humans and other animals, it is not well understood whether there are brain areas specifically important for confidence monitoring. To address this issue, we designed a postdecision temporal wagering task in which rats expressed choice confidence by the amount of time they were willing to wait for reward. We found that orbitofrontal cortex inactivation disrupts waiting-based confidence reports without affecting decision accuracy. Furthermore, we show that a normative model can quantitatively account for waiting times based on the computation of decision confidence. These results establish an anatomical locus for a metacognitive report, confidence judgment, distinct from the processes required for perceptual decisions.

Wednesday, March 4, 2015

Representation of aversive prediction errors in the human periaqueductal gray

Mathieu Roy, Daphna Shohamy, Nathaniel Daw, Marieke Jepma, G Elliott Wimmer & Tor D Wager
Nature Neuroscience 17, 1607–1612 (2014)

Human fMRI.
Pain-related prediction errors are processed in the periaqueductal gray (PAG).

Pain is a primary driver of learning and motivated action. It is also a target of learning, as nociceptive brain responses are shaped by learning processes. We combined an instrumental pain avoidance task with an axiomatic approach to assessing fMRI signals related to prediction errors (PEs), which drive reinforcement-based learning. We found that pain PEs were encoded in the periaqueductal gray (PAG), a structure important for pain control and learning in animal models. Axiomatic tests combined with dynamic causal modeling suggested that ventromedial prefrontal cortex, supported by putamen, provides an expected value–related input to the PAG, which then conveys PE signals to prefrontal regions important for behavioral regulation, including orbitofrontal, anterior mid-cingulate and dorsomedial prefrontal cortices. Thus, pain-related learning involves distinct neural circuitry, with implications for behavior and pain dynamics.

Tuesday, March 3, 2015

Neural Correlates of Expected Risks and Returns in Risky Choice across Development

Anna C.K. van Duijvenvoorde, Hilde M. Huizenga, Leah H. Somerville, Mauricio R. Delgado, Alisa Powers, Wouter D. Weeda, B.J. Casey, Elke U. Weber, and Bernd Figner
The Journal of Neuroscience, 28 January 2015, 35(4): 1549-1560

A study of risky decision-making and its neural basis in children, adolescents, and adults.
Adults are uniformly risk averse, whereas adolescents show large individual differences.

Adolescence is often described as a period of increased risk taking relative to both childhood and adulthood. This inflection in risky choice behavior has been attributed to a neurobiological imbalance between earlier developing motivational systems and later developing top-down control regions. Yet few studies have decomposed risky choice to investigate the underlying mechanisms or tracked their differential developmental trajectory. The current study uses a risk–return decomposition to more precisely assess the development of processes underlying risky choice and to link them more directly to specific neural mechanisms. This decomposition specifies the influence of changing risks (outcome variability) and changing returns (expected value) on the choices of children, adolescents, and adults in a dynamic risky choice task, the Columbia Card Task. Behaviorally, risk aversion increased across age groups, with adults uniformly risk averse and adolescents showing substantial individual differences in risk sensitivity, ranging from risk seeking to risk averse. Neurally, we observed an adolescent peak in risk-related activation in the anterior insula and dorsal medial PFC. Return sensitivity, on the other hand, increased monotonically across age groups and was associated with increased activation in the ventral medial PFC and posterior cingulate cortex with age. Our results implicate adolescence as a developmental phase of increased neural risk sensitivity. Importantly, this work shows that using a behaviorally validated decision-making framework allows a precise operationalization of key constructs underlying risky choice that inform the interpretation of results.

Monday, March 2, 2015

Dopamine Reward Prediction Error Responses Reflect Marginal Utility

William R. Stauffer, Armin Lak, Wolfram Schultz
Current Biology, Volume 24, Issue 21, p2491–2500, 3 November 2014

The reward prediction error signal encoded by dopamine neurons reflects subjective utility, not objective reward amount.

Optimal choices require an accurate neuronal representation of economic value. In economics, utility functions are mathematical representations of subjective value that can be constructed from choices under risk. Utility usually exhibits a nonlinear relationship to physical reward value that corresponds to risk attitudes and reflects the increasing or decreasing marginal utility obtained with each additional unit of reward. Accordingly, neuronal reward responses coding utility should robustly reflect this nonlinearity.

In two monkeys, we measured utility as a function of physical reward value from meaningful choices under risk (that adhered to first- and second-order stochastic dominance). The resulting nonlinear utility functions predicted the certainty equivalents for new gambles, indicating that the functions’ shapes were meaningful. The monkeys were risk seeking (convex utility function) for low reward and risk avoiding (concave utility function) with higher amounts. Critically, the dopamine prediction error responses at the time of reward itself reflected the nonlinear utility functions measured at the time of choices. In particular, the reward response magnitude depended on the first derivative of the utility function and thus reflected the marginal utility. Furthermore, dopamine responses recorded outside of the task reflected the marginal utility of unpredicted reward. Accordingly, these responses were sufficient to train reinforcement learning models to predict the behaviorally defined expected utility of gambles.

These data suggest a neuronal manifestation of marginal utility in dopamine neurons and indicate a common neuronal basis for fundamental explanatory constructs in animal learning theory (prediction error) and economic decision theory (marginal utility).
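
A toy illustration of a prediction error expressed in units of utility: with an S-shaped utility over reward volume, the same physical surprise produces a response whose size tracks the local slope of the curve, i.e., the marginal utility. The sigmoid form and all parameters are assumptions for illustration, not the utility functions actually measured in the monkeys.

import numpy as np

def utility(ml):
    # Illustrative S-shaped utility over reward volume: convex for small amounts, concave for large.
    return 1.0 / (1.0 + np.exp(-8.0 * (ml - 0.6)))

def utility_prediction_error(received_ml, expected_ml):
    return utility(received_ml) - utility(expected_ml)

# The same +0.1 ml surprise yields very different utility prediction errors
# depending on where it falls on the utility curve (largest at the steepest point).
for expected in (0.2, 0.6, 1.0):
    upe = utility_prediction_error(expected + 0.1, expected)
    print(f"expected {expected} ml, +0.1 ml surprise -> utility prediction error = {upe:.3f}")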

Sunday, March 1, 2015

Neuronal Prediction of Opponent’s Behavior during Cooperative Social Interchange in Primates

Keren Haroush, Ziv M. Williams
Cell
Available online 26 February 2015

An iterated prisoner's dilemma in monkeys, with electrophysiology.
The monkeys also adopt a tit-for-tat-like strategy.
Separate populations of dACC neurons encode the animal's own cooperative choice and the prediction of the opponent's choice.
Disrupting dACC activity with electrical stimulation selectively reduces the frequency of cooperation following the opponent's cooperation.

A cornerstone of successful social interchange is the ability to anticipate each other’s intentions or actions. While generating these internal predictions is essential for constructive social behavior, their single neuronal basis and causal underpinnings are unknown. Here, we discover specific neurons in the primate dorsal anterior cingulate that selectively predict an opponent’s yet unknown decision to invest in their common good or defect and distinct neurons that encode the monkey’s own current decision based on prior outcomes. Mixed population predictions of the other was remarkably near optimal compared to behavioral decoders. Moreover, disrupting cingulate activity selectively biased mutually beneficial interactions between the monkeys but, surprisingly, had no influence on their decisions when no net-positive outcome was possible. These findings identify a group of other-predictive neurons in the primate anterior cingulate essential for enacting cooperative interactions and may pave a way toward the targeted treatment of social behavioral disorders.

Saturday, February 28, 2015

A Common Neural Code for Perceived and Inferred Emotion

Amy E. Skerry and Rebecca Saxe
The Journal of Neuroscience, 26 November 2014, 34(48): 15997-16008

fMRI MVPA.
Perceived and inferred emotions of others are encoded in the same format in the medial prefrontal cortex.
That is, a classifier trained to decode perceived emotions of others can also decode inferred emotions (and vice versa).

Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion.
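
A toy version of the cross-stimulus decoding logic, using simulated data that simply assume a shared valence axis plus stimulus-specific components (my assumption for illustration; this is not the study's data or analysis pipeline): a simple decoder trained on one stimulus type transfers to the other because the two share the valence code.

import numpy as np

rng = np.random.default_rng(5)
n_vox, n_trials = 50, 40

valence_axis = rng.standard_normal(n_vox)       # hypothetical shared code for valence
face_axis = rng.standard_normal(n_vox)          # stimulus-specific pattern for facial expressions
scene_axis = rng.standard_normal(n_vox)         # stimulus-specific pattern for animated situations

def simulate(stim_axis, valence_sign):
    # Simulated voxel patterns: shared valence code + stimulus-specific code + noise.
    return (valence_sign * valence_axis + stim_axis
            + 2.0 * rng.standard_normal((n_trials, n_vox)))

# "Train" a nearest-centroid decoder on facial-expression patterns only.
train_pos, train_neg = simulate(face_axis, +1), simulate(face_axis, -1)
w = train_pos.mean(axis=0) - train_neg.mean(axis=0)
threshold = (train_pos.mean(axis=0) + train_neg.mean(axis=0)) @ w / 2

def accuracy(pos, neg):
    correct = (pos @ w > threshold).sum() + (neg @ w <= threshold).sum()
    return correct / (len(pos) + len(neg))

faces_pos, faces_neg = simulate(face_axis, +1), simulate(face_axis, -1)       # held-out faces
scenes_pos, scenes_neg = simulate(scene_axis, +1), simulate(scene_axis, -1)   # the other stimulus type

print("train on faces -> test on faces :", accuracy(faces_pos, faces_neg))
print("train on faces -> test on scenes:", accuracy(scenes_pos, scenes_neg))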

Friday, February 27, 2015

Planning activity for internally generated reward goals in monkey amygdala neurons

István Hernádi, Fabian Grabenhorst & Wolfram Schultz
Nature Neuroscience 18, 461–469 (2015)

The amygdala encodes plans for reward-based decisions.
What's interesting is the variety of analyses the authors devise to distinguish genuine planning from simply making a fresh decision at every step.

The best rewards are often distant and can only be achieved by planning and decision-making over several steps. We designed a multi-step choice task in which monkeys followed internal plans to save rewards toward self-defined goals. During this self-controlled behavior, amygdala neurons showed future-oriented activity that reflected the animal's plan to obtain specific rewards several trials ahead. This prospective activity encoded crucial components of the animal's plan, including value and length of the planned choice sequence. It began on initial trials when a plan would be formed, reappeared step by step until reward receipt, and readily updated with a new sequence. It predicted performance, including errors, and typically disappeared during instructed behavior. Such prospective activity could underlie the formation and pursuit of internal plans characteristic of goal-directed behavior. The existence of neuronal planning activity in the amygdala suggests that this structure is important in guiding behavior toward internally generated, distant goals.