Sunday, December 30, 2012

The End of 2012

The year is coming to a close.
Thank you to everyone who supported me this year.
And thank you all for your hard work over the past twelve months.

I wrapped up work a bit early, on the 21st, and have been enjoying the Christmas holidays.
I even went to Las Vegas.
Sorry about that... (laughs)

Looking back, it was an eventful year in many ways.

On the research side, the project I had been working on at RIKEN for four years was published in Neuron.
I wrote up the details and the road to publication at
http://szkshnsk.blogspot.com/2012/06/neuron.html
and it really was a happy moment.

To my fellow researchers:
if you get the chance, please cite it ↓
"Learning to simulate others' decisions"
Shinsuke Suzuki, Norihiro Harasawa, Kenichi Ueno, Justin L Gardner, Noritaka Ichinohe, Masahiko Haruno, Kang Cheng, Hiroyuki Nakahara.
Neuron, Vol. 74, No. 6, pp. 1125-1137, 2012.
http://www.cell.com/neuron/abstract/S0896-6273(12)00427-8

Also, in October, after the paper came out, I moved my research base to the California Institute of Technology.
With all the commotion of the move, my research has (so far) not been progressing quite as I would like, but I am looking forward to doing plenty of fun and interesting work in the new environment.

On the personal side, I also got married.
We registered the marriage right before moving to the US, which is apparently what people call a "visa marriage" (laughs).
(Without being legally married, it is hard to obtain a spouse visa.)

All in all, 2012 was a very good year, and so that I can say the same at the end of next year, I plan to get back to work on January 2nd.
Though I'm taking New Year's Day off!

That's all for now.
Happy New Year, everyone!

Wednesday, December 26, 2012

Neuronal integration in visual cortex elevates face category tuning to conscious face perception


Johannes J. Fahrenfort, Tineke M. Snijders, Klaartje Heinen, Simon van Gaal, H. Steven Scholte, and Victor A. F. Lamme
PNAS 2012 109 (52) 21504-21509

The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning.

Thursday, December 20, 2012

American Steak at HOUSTON'S @ PASADENA


Last night's dinner was steak.
It's a well-known restaurant chain around here that I had long heard good things about.
http://www.hillstone.com/#/restaurants/houstons/
It's the kind of place you simply cannot get into without a reservation.
We booked five days in advance and counted down to the big day (I'm exaggerating).

When we arrived, the parking lot was nearly full.
People were spilling out everywhere: outside the restaurant, inside, and at the waiting bar.
I was really glad we had made a reservation.


The steak looked and was portioned very American, but the meat was tender and cooked just right.
It was delicious.
My reaction was "See, Americans can do it when they try!" (laughs).
Well, it does cost accordingly (laughs).
The portions, though... really are just too big...

And finally, what impressed me most was the work of the reception staff.
While handling the flood of arriving customers and reservation calls, they remembered every guest's name, kept track of where each party was waiting (inside, outside, or at the waiting bar), and showed them to their table right on time.
On our way out they still remembered our name: "Mr. ○○, we look forward to seeing you again."
Truly professional work.
Well, it does cost accordingly (yes, I keep saying that; laughs).

All in all, it's the kind of place I'd happily go back to for a proper meat fix about once every six months.
Sincerely.

Wednesday, December 19, 2012

Prosocial preferences do not explain human cooperation in public-goods games


Maxwell N. Burton-Chellew and Stuart A. West
PNAS December 17, 2012 201210960

A comparison between the public-goods game and a condition that is identical except that participants are not told that other people are involved (they invest and receive a payoff; as in the public-goods game, the payoff is determined by their own and the others' contributions, but participants do not know this). Cooperation rates do not differ, so cooperation in public-goods games cannot be explained by prosocial preferences.

It has become an accepted paradigm that humans have “prosocial preferences” that lead to higher levels of cooperation than those that would maximize their personal financial gain. However, the existence of prosocial preferences has been inferred post hoc from the results of economic games, rather than with direct experimental tests. Here, we test how behavior in a public-goods game is influenced by knowledge of the consequences of actions for other players. We found that (i) individuals cooperate at similar levels, even when they are not informed that their behavior benefits others; (ii) an increased awareness of how cooperation benefits others leads to a reduction, rather than an increase, in the level of cooperation; and (iii) cooperation can be either lower or higher than expected, depending on experimental design. Overall, these results contradict the suggested role of the prosocial preferences hypothesis and show how the complexity of human behavior can lead to misleading conclusions from controlled laboratory experiments.
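
To make the payoff structure concrete, here is a toy sketch of a standard linear public-goods rule in Python; the function name, group size, endowment, and multiplier are my own illustrative choices, not the parameters used by Burton-Chellew and West.

```python
# Toy linear public-goods game; all numbers are illustrative, not the paper's design.
# Each player keeps whatever they do not contribute, and the pooled contributions
# are multiplied and shared equally. In the paper's "no social information"
# condition the same payoff rule applies, but players are not told a group exists.

def public_goods_payoff(contributions, endowment=20.0, multiplier=1.6):
    """Return each player's payoff under a linear public-goods rule."""
    n = len(contributions)
    public_share = multiplier * sum(contributions) / n  # every member receives this
    return [endowment - c + public_share for c in contributions]

# Full cooperation vs. free-riding by player 0.
print(public_goods_payoff([20, 20, 20, 20]))  # [32.0, 32.0, 32.0, 32.0]
print(public_goods_payoff([0, 20, 20, 20]))   # [44.0, 24.0, 24.0, 24.0]
```

Because the multiplier divided by the group size (1.6/4) is below one, contributing is always individually costly even though full cooperation beats universal free-riding, which is exactly why above-zero contributions in this game are usually attributed to prosocial preferences.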

Tuesday, December 18, 2012

Neural and behavioral bases of age differences in perceptions of trust


Elizabeth Castle, Naomi I. Eisenberger, Teresa E. Seeman, Wesley G. Moons, Ian A. Boggero, Mark S. Grinblatt, and Shelley E. Taylor
PNAS December 18, 2012 vol. 109 no. 51 20848-20852

Older adults, compared with younger adults, are more inclined to trust others (in a task where they view faces and rate how trustworthy they look). While viewing the faces, they also show lower activity in the anterior insula, a region thought to be related to risk perception.

Older adults are disproportionately vulnerable to fraud, and federal agencies have speculated that excessive trust explains their greater vulnerability. Two studies, one behavioral and one using neuroimaging methodology, identified age differences in trust and their neural underpinnings. Older and younger adults rated faces high in trust cues similarly, but older adults perceived faces with cues to untrustworthiness to be significantly more trustworthy and approachable than younger adults. This age-related pattern was mirrored in neural activation to cues of trustworthiness. Whereas younger adults showed greater anterior insula activation to untrustworthy versus trustworthy faces, older adults showed muted activation of the anterior insula to untrustworthy faces. The insula has been shown to support interoceptive awareness that forms the basis of “gut feelings,” which represent expected risk and predict risk-avoidant behavior. Thus, a diminished “gut” response to cues of untrustworthiness may partially underlie older adults’ vulnerability to fraud.

Thursday, December 6, 2012

The Interaction of Bayesian Priors and Sensory Data and Its Neural Circuit Implementation in Visually Guided Movement


Jin Yang, Joonyeol Lee, and Stephen G. Lisberger
J. Neurosci. 2012;32 17632-17645
http://www.jneurosci.org/cgi/content/abstract/32/49/17632?etoc

Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days' history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed covary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction-specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields.
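
As a rough illustration of the "competition of sensory data and priors" described above, here is a toy precision-weighted Gaussian estimate; the function name and all numbers are made up for illustration and are not the authors' model, which also includes a direction prior and a neural-network implementation.

```python
# Toy Bayesian estimate of target speed: with a Gaussian prior and a Gaussian
# likelihood, the posterior mean is a precision-weighted average. A low-contrast
# stimulus means a noisy likelihood, so the slow-speed prior pulls the estimate
# (and the initial pursuit speed) toward zero; a high-contrast stimulus does not.

def posterior_speed(measured_speed, sensory_sd, prior_mean=0.0, prior_sd=5.0):
    """Posterior mean and sd for a Gaussian prior combined with a Gaussian likelihood."""
    w_sens = sensory_sd ** -2        # precision of the sensory measurement
    w_prior = prior_sd ** -2         # precision of the slow-speed prior
    mean = (w_sens * measured_speed + w_prior * prior_mean) / (w_sens + w_prior)
    sd = (w_sens + w_prior) ** -0.5
    return mean, sd

print(posterior_speed(20.0, sensory_sd=2.0))   # high contrast: ~17 deg/s, close to the data
print(posterior_speed(20.0, sensory_sd=10.0))  # low contrast: ~4 deg/s, pulled toward the prior
```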

Critical Roles for Anterior Insula and Dorsal Striatum in Punishment-Based Avoidance Learning


Stefano Palminteri, Damian Justo, Céline Jauffret, Beth Pavlicek, Aurélie Dauta, Christine Delmaire, Virginie Czernecki, Carine Karachi, Laurent Capelle, Alexandra Durr, Mathias Pessiglione
Neuron, Volume 76, Issue 5, 6 December 2012, Pages 998–1009

Patients with damage to the anterior insula or to the dorsal striatum are impaired at punishment-avoidance learning (relative to reward-gain learning). Fitting a reinforcement-learning model to behavior showed that, compared with healthy controls, the former patients are less sensitive to losses, whereas the latter are less precise in their action selection.

The division of human learning systems into reward and punishment opponent modules is still a debated issue. While the implication of ventral prefrontostriatal circuits in reward-based learning is well established, the neural underpinnings of punishment-based learning remain unclear. To elucidate the causal implication of brain regions that were related to punishment learning in a previous functional neuroimaging study, we tested the effects of brain damage on behavioral performance, using the same task contrasting monetary gains and losses. Cortical and subcortical candidate regions, the anterior insula and dorsal striatum, were assessed in patients presenting brain tumor and Huntington disease, respectively. Both groups exhibited selective impairment of punishment-based learning. Computational modeling suggested complementary roles for these structures: the anterior insula might be involved in learning the negative value of loss-predicting cues, whereas the dorsal striatum might be involved in choosing between those cues so as to avoid the worst.
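
Here is a minimal sketch of the kind of reinforcement-learning account summarized above, with one parameter scaling loss sensitivity and a softmax inverse temperature controlling choice precision; the task, parameter values, and function names are invented for illustration and are not the authors' fitted model.

```python
import math, random

# Toy Q-learning for a punishment-avoidance task with two options, one of which
# is punished more often. Two parameters of interest:
#   loss_weight - scales the subjective impact of losses (lower means a learner
#                 that is less sensitive to losses)
#   beta        - softmax inverse temperature (lower means noisier, less precise
#                 action selection)

def simulate(trials=200, alpha=0.2, beta=3.0, loss_weight=1.0):
    q = [0.0, 0.0]                     # learned values of the two options
    p_loss = [0.25, 0.75]              # option 1 is the objectively worse option
    n_losses = 0
    for _ in range(trials):
        p_choose_1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        choice = 1 if random.random() < p_choose_1 else 0
        outcome = -1.0 if random.random() < p_loss[choice] else 0.0
        q[choice] += alpha * (loss_weight * outcome - q[choice])  # prediction-error update
        n_losses += outcome < 0
    return n_losses

random.seed(0)
print("typical learner:         ", simulate())
print("reduced loss sensitivity:", simulate(loss_weight=0.3))
print("imprecise choice:        ", simulate(beta=0.5))
```

Both manipulations tend to leave the learner with more losses than the typical learner, but through different mechanisms, which is the kind of dissociation the model fitting in the paper is meant to capture.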

Thursday, November 29, 2012

Rejection of unfair offers in the ultimatum game is no evidence of strong reciprocity


Toshio Yamagishi, Yutaka Horita, Nobuhiro Mifune, Hirofumi Hashimoto, Yang Li, Mizuho Shinada, Arisa Miura, Keigo Inukai, Haruto Takagishi, and Dora Simunovic
Published online before print November 27, 2012, doi: 10.1073/pnas.1212126109
PNAS November 27, 2012 201212126

Rejecting unfair offers in the ultimatum game is not evidence for strong reciprocity (punishing unfair offers and rewarding fair ones), because across individuals it does not correlate with reciprocal behavior in the trust game (returning the money invested), prosociality as measured by SVO, and the like.

So what does rejection of unfair offers in the ultimatum game actually reflect? That is left as a question for future work. The authors argue (or rather speculate) that it serves to avoid being placed in an inferior position (my translation is a bit shaky...), but I don't think these data alone can settle the matter... a different experiment is probably needed.

The strong reciprocity model of the evolution of human cooperation has gained some acceptance, partly on the basis of support from experimental findings. The observation that unfair offers in the ultimatum game are frequently rejected constitutes an important piece of the experimental evidence for strong reciprocity. In the present study, we have challenged the idea that the rejection response in the ultimatum game provides evidence of the assumption held by strong reciprocity theorists that negative reciprocity observed in the ultimatum game is inseparably related to positive reciprocity as the two sides of a preference for fairness. The prediction of an inseparable relationship between positive and negative reciprocity was rejected on the basis of the results of a series of experiments that we conducted using the ultimatum game, the dictator game, the trust game, and the prisoner’s dilemma game. We did not find any correlation between the participants’ tendencies to reject unfair offers in the ultimatum game and their tendencies to exhibit various prosocial behaviors in the other games, including their inclinations to positively reciprocate in the trust game. The participants’ responses to postexperimental questions add support to the view that the rejection of unfair offers in the ultimatum game is a tacit strategy for avoiding the imposition of an inferior status.

Developing Intuition: Neural Correlates of Cognitive-Skill Learning in Caudate Nucleus


Xiaohong Wan, Daisuke Takano, Takeshi Asamizuya, Chisato Suzuki, Kenichi Ueno, Kang Cheng, Takeshi Ito, and Keiji Tanaka
J. Neurosci. 2012;32 17492-17501 Open Access

The shogi project. Professional players (compared with amateurs) make decisions intuitively, and the caudate nucleus is active when they do. So what happens when novices train at shogi for 15 weeks? Caudate activity increases (other cortical activity does not change). The caudate nucleus is important for intuitive decision making. http://www.jneurosci.org/content/32/48/17492

To be honest, I think this is a better study than the previous Science paper (Wan et al., 2011), but... it's complicated in various ways...

The superior capability of cognitive experts largely depends on automatic, quick information processing, which is often referred to as intuition. Intuition develops following extensive long-term training. There are many cognitive models on intuition development, but its neural basis is not known. Here we trained novices for 15 weeks to learn a simple board game and measured their brain activities in early and end phases of the training while they quickly generated the best next-move to a given board pattern. We found that the activation in the head of caudate nucleus developed over the course of training, in parallel to the development of the capability to quickly generate the best next-move, and the magnitude of the caudate activity was correlated with the subject's performance. In contrast, cortical activations, which already appeared in the early phase of training, did not further change. Thus, neural activation in the caudate head, but not those in cortical areas, tracked the development of capability to quickly generate the best next-move, indicating that circuitries including the caudate head may automate cognitive computations.

Wednesday, November 28, 2012

Stimulus-Related Activity during Conditional Associations in Monkey Perirhinal Cortex Neurons Depends on Upcoming Reward Outcome


Kaoru Ohyama, Yasuko Sugase-Miyamoto, Narihisa Matsumoto, Munetaka Shidara, and Chikara Sato
J. Neurosci. 2012;32 17407-17419
http://www.jneurosci.org/cgi/content/abstract/32/48/17407?etoc

Acquiring the significance of events based on reward-related information is critical for animals to survive and to conduct social activities. The importance of the perirhinal cortex for reward-related information processing has been suggested. To examine whether or not neurons in this cortex represent reward information flexibly when a visual stimulus indicates either a rewarded or unrewarded outcome, neuronal activity in the macaque perirhinal cortex was examined using a conditional-association cued-reward task. The task design allowed us to study how the neuronal responses depended on the animal's prediction of whether it would or would not be rewarded. Two visual stimuli, a color stimulus as Cue1 followed by a pattern stimulus as Cue2, were sequentially presented. Each pattern stimulus was conditionally associated with both rewarded and unrewarded outcomes depending on the preceding color stimulus. We found an activity depending upon the two reward conditions during Cue2, i.e., pattern stimulus presentation. The response appeared after the response dependent upon the image identity of Cue2. The response delineating a specific cue sequence also appeared between the responses dependent upon the identity of Cue2 and reward conditions. Thus, when Cue1 sets the context for whether or not Cue2 indicates a reward, this region represents the meaning of Cue2, i.e., the reward conditions, independent of the identity of Cue2. These results suggest that neurons in the perirhinal cortex do more than associate a single stimulus with a reward to achieve flexible representations of reward information.

Distributed Representations of Rule Identity and Rule Order in Human Frontal Cortex and Striatum


Carlo Reverberi, Kai Gorgen, and John-Dylan Haynes
J. Neurosci. 2012;32 17420-17430
http://www.jneurosci.org/cgi/content/abstract/32/48/17420?etoc

Humans are able to flexibly devise and implement rules to reach their desired goals. For simple situations, we can use single rules, such as “if traffic light is green then cross the street.” In most cases, however, more complex rule sets are required, involving the integration of multiple layers of control. Although it has been shown that prefrontal cortex is important for rule representation, it has remained unclear how the brain encodes more complex rule sets. Here, we investigate how the brain represents the order in which different parts of a rule set are evaluated. Participants had to follow compound rule sets that involved the concurrent application of two single rules in a specific order, where one of the rules always had to be evaluated first. The rules and their assigned order were independently manipulated. By applying multivariate decoding to fMRI data, we found that the identity of the current rule was encoded in a frontostriatal network involving right ventrolateral prefrontal cortex, right superior frontal gyrus, and dorsal striatum. In contrast, rule order could be decoded in the dorsal striatum and in the right premotor cortex. The nonhomogeneous distribution of information across brain areas was confirmed by follow-up analyses focused on relevant regions of interest. We argue that the brain encodes complex rule sets by “decomposing” them in their constituent features, which are represented in different brain areas, according to the aspect of information to be maintained.

Thursday, November 22, 2012

Speaker–listener neural coupling underlies successful communication


Greg J. Stephens, Lauren J. Silbert, and Uri Hasson
PNAS August 10, 2010 vol. 107 no. 32 14425-14430

The brain (fMRI) activity of a speaker and a listener becomes coupled across many regions, including auditory cortex and parietal, temporal, and frontal areas (in most regions the listener's activity lags behind the speaker's). The strength of the coupling predicts whether their communication succeeds or fails. When the speaker and listener use different languages and cannot communicate, the coupling disappears (it remains only in auditory cortex).

Verbal communication is a joint activity; however, speech production and comprehension have primarily been analyzed as independent processes within the boundaries of individual brains. Here, we applied fMRI to record brain activity from both speakers and listeners during natural verbal communication. We used the speaker's spatiotemporal brain activity to model listeners’ brain activity and found that the speaker's activity is spatially and temporally coupled with the listener's activity. This coupling vanishes when participants fail to communicate. Moreover, though on average the listener's brain activity mirrors the speaker's activity with a delay, we also find areas that exhibit predictive anticipatory responses. We connected the extent of neural coupling to a quantitative measure of story comprehension and find that the greater the anticipatory speaker–listener coupling, the greater the understanding. We argue that the observed alignment of production- and comprehension-based processes serves as a mechanism by which brains convey information.

Wednesday, November 21, 2012

Orbitofrontal Cortex Supports Behavior and Learning Using Inferred But Not Cached Values


Joshua L. Jones, Guillem R. Esber, Michael A. McDannald, Aaron J. Gruber, Alex Hernandez, Aaron Mirenzi, Geoffrey Schoenbaum
Science 16 November 2012: Vol. 338 no. 6109 pp. 953-956

The OFC is involved in processing values inferred from knowledge of the structure of the environment (model-based, "inferred" values), not values learned directly from one's own reward experience (model-free, "cached" values).

Rats first learn the stimulus pairings A→B and C→D, and then learn B→reward and D→no reward. Rats with OFC activity suppressed respond to B when B and D are presented together, but respond to neither stimulus when A and C are presented together (intact rats respond to A). This implicates the OFC in processing values based on inference. http://www.sciencemag.org/content/338/6109/953

Computational and learning theory models propose that behavioral control reflects value that is both cached (computed and stored during previous experience) and inferred (estimated on the fly on the basis of knowledge of the causal structure of the environment). The latter is thought to depend on the orbitofrontal cortex. Yet some accounts propose that the orbitofrontal cortex contributes to behavior by signaling “economic” value, regardless of the associative basis of the information. We found that the orbitofrontal cortex is critical for both value-based behavior and learning when value must be inferred but not when a cached value is sufficient. The orbitofrontal cortex is thus fundamental for accessing model-based representations of the environment to compute value rather than for signaling value per se.
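
A schematic of the cached-versus-inferred distinction in this sensory-preconditioning design: stimulus A never directly precedes reward, so its value can only be obtained by inference through the learned A→B transition. The dictionaries and values below are purely illustrative, not the paper's analysis.

```python
# Sensory preconditioning, schematically. Phase 1 teaches the pairings A->B and
# C->D; phase 2 teaches B->reward and D->nothing. A is never itself paired with
# reward, so a purely model-free ("cached") learner assigns it no value, while a
# model-based learner can infer its value through the learned A->B transition,
# the computation proposed here to depend on an intact OFC.

transitions = {"A": "B", "C": "D"}                         # learned in phase 1
cached_value = {"A": 0.0, "B": 1.0, "C": 0.0, "D": 0.0}    # learned in phase 2

def inferred_value(stimulus):
    """Model-based value: follow the known transition and read off its value."""
    successor = transitions.get(stimulus)
    return cached_value[successor] if successor else cached_value[stimulus]

for s in ["A", "B", "C", "D"]:
    print(s, "cached:", cached_value[s], "inferred:", inferred_value(s))
# A has zero cached value but a positive inferred value; B is valued either way.
```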

The Primate Ventral Pallidum Encodes Expected Reward Value and Regulates Motor Action


Yoshihisa Tachibana, Okihide Hikosaka
Neuron, Volume 76, Issue 4, 826-837, 21 November 2012

Motor actions are facilitated when expected reward value is high. It is hypothesized that there are neurons that encode expected reward values to modulate impending actions and potentially represent motivation signals. Here, we present evidence suggesting that the ventral pallidum (VP) may participate in this process. We recorded single neuronal activity in the monkey VP using a saccade task with a direction-dependent reward bias. Depending on the amount of the expected reward, VP neurons increased or decreased their activity tonically until the reward was delivered, for both ipsiversive and contraversive saccades. Changes in expected reward values were also associated with changes in saccade performance (latency and velocity). Furthermore, bilateral muscimol-induced inactivation of the VP abolished the reward-dependent changes in saccade latencies. These data suggest that the VP provides expected reward value signals that are used to facilitate or inhibit motor actions.

NMDA Receptors Control Cue-Outcome Selectivity and Plasticity of Orbitofrontal Firing Patterns during Associative Stimulus-Reward Learning


Marijn van Wingerden, Martin Vinck, Vincent Tijms, Irene R.S. Ferreira, Allert J. Jonker, Cyriel M.A. Pennartz
Neuron, Volume 76, Issue 4, 813-825, 21 November 2012

Neural activity in orbitofrontal cortex has been linked to flexible representations of stimulus-outcome associations. Such value representations are known to emerge with learning, but the neural mechanisms supporting this phenomenon are not well understood. Here, we provide evidence for a causal role for NMDA receptors (NMDARs) in mediating spike pattern discriminability, neural plasticity, and rhythmic synchronization in relation to evaluative stimulus processing and decision making. Using tetrodes, single-unit spike trains and local field potentials were recorded during local, unilateral perfusion of an NMDAR blocker in rat OFC. In the absence of behavioral effects, NMDAR blockade severely hampered outcome-selective spike pattern formation to olfactory cues, relative to control perfusions. Moreover, NMDAR blockade shifted local rhythmic synchronization to higher frequencies and degraded its linkage to stimulus-outcome selective coding. These results demonstrate the importance of NMDARs for cue-outcome associative coding in OFC during learning and illustrate how NMDAR blockade disrupts network dynamics.

Rhythmic Fluctuations in Evidence Accumulation during Decision Making in the Human Brain


Valentin Wyart, Vincent de Gardelle, Jacqueline Scholl, Christopher Summerfield
Neuron, Volume 76, Issue 4, 847-858, 21 November 2012

Categorical choices are preceded by the accumulation of sensory evidence in favor of one action or another. Current models describe evidence accumulation as a continuous process occurring at a constant rate, but this view is inconsistent with accounts of a psychological refractory period during sequential information processing. During multisample perceptual categorization, we found that the neural encoding of momentary evidence in human electrical brain signals and its subsequent impact on choice fluctuated rhythmically according to the phase of ongoing parietal delta oscillations (1–3 Hz). By contrast, lateralized beta-band power (10–30 Hz) overlying human motor cortex encoded the integrated evidence as a response preparation signal. These findings draw a clear distinction between central and motor stages of perceptual decision making, with successive samples of sensory evidence competing to pass through a serial processing bottleneck before being mapped onto action.

Positively Biased Processing of Self-Relevant Social Feedback


Christoph W. Korn, Kristin Prehn, Soyoung Q. Park, Henrik Walter, and Hauke R. Heekeren
J. Neurosci. 2012;32 16832-16844
http://www.jneurosci.org/cgi/content/abstract/32/47/16832?etoc

Receiving social feedback such as praise or blame for one's character traits is a key component of everyday human interactions. It has been proposed that humans are positively biased when integrating social feedback into their self-concept. However, a mechanistic description of how humans process self-relevant feedback is lacking. Here, participants received feedback from peers after a real-life interaction. Participants processed feedback in a positively biased way, i.e., they changed their self-evaluations more toward desirable than toward undesirable feedback. Using functional magnetic resonance imaging we investigated two feedback components. First, the reward-related component correlated with activity in ventral striatum and in anterior cingulate cortex/medial prefrontal cortex (ACC/MPFC). Second, the comparison-related component correlated with activity in the mentalizing network, including the MPFC, the temporoparietal junction, the superior temporal sulcus, the temporal pole, and the inferior frontal gyrus. This comparison-related activity within the mentalizing system has a parsimonious interpretation, i.e., activity correlated with the differences between participants' own evaluation and feedback. Importantly, activity within the MPFC that integrated reward-related and comparison-related components predicted the self-related positive updating bias across participants offering a mechanistic account of positively biased feedback processing. Thus, theories on both reward and mentalizing are important for a better understanding of how social information is integrated into the human self-concept.

Perceptual Criteria in the Human Brain


Corey N. White, Jeanette A. Mumford, and Russell A. Poldrack
J. Neurosci. 2012;32 16716-16724
http://www.jneurosci.org/cgi/content/abstract/32/47/16716?etoc

A critical component of decision making is the ability to adjust criteria for classifying stimuli. fMRI and drift diffusion models were used to explore the neural representations of perceptual criteria in decision making. The specific focus was on the relative engagement of perceptual- and decision-related neural systems in response to adjustments in perceptual criteria. Human participants classified visual stimuli as big or small based on criteria of different sizes, which effectively biased their choices toward one response over the other. A drift diffusion model was fit to the behavioral data to extract estimates of stimulus size, criterion size, and difficulty for each participant and condition. These parameter values were used as modulated regressors to create a highly constrained model for the fMRI analysis that accounted for several components of the decision process. The results show that perceptual criteria values were reflected by activity in left inferior temporal cortex, a region known to represent objects and their physical properties, whereas stimulus size was reflected by activation in occipital cortex. A frontoparietal network of regions, including dorsolateral prefrontal cortex and superior parietal lobule, corresponded to the decision variables resulting from the downstream stimulus–criterion comparison, independent of stimulus type. The results provide novel evidence that perceptual criteria are represented in stimulus space and serve as inputs to be compared with the presented stimulus, recruiting a common network of decision regions shown to be active in other simple decisions. This work advances our understanding of the neural correlates of decision flexibility and adjustments of behavioral bias.
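
A toy simulation of the drift-diffusion idea used here: noisy evidence (stimulus size minus criterion) accumulates to one of two bounds, so shifting the criterion biases choices. All names and parameter values are invented, not the authors' fitted parameters.

```python
import random

# Toy drift-diffusion process for a "big vs. small" size judgment. The drift is
# the difference between the stimulus size and the current criterion, so moving
# the criterion biases responding toward one choice, as in the task above.

def ddm_trial(stimulus_size, criterion, noise=1.0, bound=5.0):
    """Accumulate size-minus-criterion evidence until a decision bound is hit."""
    evidence, t = 0.0, 0
    while abs(evidence) < bound:
        evidence += (stimulus_size - criterion) + random.gauss(0.0, noise)
        t += 1
    return ("big" if evidence > 0 else "small"), t   # choice and decision time

random.seed(1)
for criterion in (5.0, 6.0):
    choices = [ddm_trial(stimulus_size=5.2, criterion=criterion)[0] for _ in range(500)]
    print(f"criterion={criterion}: P('big') = {choices.count('big') / 500:.2f}")
```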

Neural Correlates of Anticipation Risk Reflect Risk Preferences


Sarah Rudorf, Kerstin Preuschoff, and Bernd Weber
J. Neurosci. 2012;32 16683-16692
http://www.jneurosci.org/cgi/content/abstract/32/47/16683?etoc

Individual risk preferences have a large influence on decisions, such as financial investments, career and health choices, or gambling. Decision making under risk has been studied both behaviorally and on a neural level. It remains unclear, however, how risk attitudes are encoded and integrated with choice. Here, we investigate how risk preferences are reflected in neural regions known to process risk. We collected functional magnetic resonance images of 56 human subjects during a gambling task (Preuschoff et al., 2006). Subjects were grouped into risk averters and risk seekers according to the risk preferences they revealed in a separate lottery task. We found that during the anticipation of high-risk gambles, risk averters show stronger responses in ventral striatum and anterior insula compared to risk seekers. In addition, risk prediction error signals in anterior insula, inferior frontal gyrus, and anterior cingulate indicate that risk averters do not dissociate properly between gambles that are more or less risky than expected. We suggest this may result in a general overestimation of prospective risk and lead to risk avoidance behavior. This is the first study to show that behavioral risk preferences are reflected in the passive evaluation of risky situations. The results have implications on public policies in the financial and health domain.

Precedence of the Eye Region in Neural Processing of Faces


Elias B. Issa and James J. DiCarlo
J. Neurosci. 2012;32 16666-16682
http://www.jneurosci.org/cgi/content/abstract/32/47/16666?etoc

Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of “face-selective” cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face-selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full-face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features—consistent with parts-based models—grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy.

Robust Representation of Stable Object Values in the Oculomotor Basal Ganglia


Masaharu Yasuda, Shinya Yamamoto, and Okihide Hikosaka
J. Neurosci. 2012;32 16917-16932 Open Access
http://www.jneurosci.org/cgi/content/abstract/32/47/16917?etoc

Our gaze tends to be directed to objects previously associated with rewards. Such object values change flexibly or remain stable. Here we present evidence that the monkey substantia nigra pars reticulata (SNr) in the basal ganglia represents stable, rather than flexible, object values. After across-day learning of object–reward association, SNr neurons gradually showed a response bias to surprisingly many visual objects: inhibition to high-valued objects and excitation to low-valued objects. Many of these neurons were shown to project to the ipsilateral superior colliculus. This neuronal bias remained intact even after >100 d without further learning. In parallel with the neuronal bias, the monkeys tended to look at high-valued objects. The neuronal and behavioral biases were present even if no value was associated during testing. These results suggest that SNr neurons bias the gaze toward objects that were consistently associated with high values in one's history.

Dynamic Fluctuations in Dopamine Efflux in the Prefrontal Cortex and Nucleus Accumbens during Risk-Based Decision Making


Jennifer R. St. Onge, Soyon Ahn, Anthony G. Phillips, and Stan B. Floresco
J. Neurosci. 2012;32 16880-16891
http://www.jneurosci.org/cgi/content/abstract/32/47/16880?etoc

Mesocorticolimbic dopamine (DA) has been implicated in cost/benefit decision making about risks and rewards. The prefrontal cortex (PFC) and nucleus accumbens (NAc) are two DA terminal regions that contribute to decision making in distinct manners. However, how fluctuations of tonic DA levels may relate to different aspects of decision making remains to be determined. The present study measured DA efflux in the PFC and NAc with microdialysis in well trained rats performing a probabilistic discounting task. Selection of a small/certain option always delivered one pellet, whereas another, large/risky option yielded four pellets, with probabilities that decreased (100–12.5%) or increased (12.5–100%) across four blocks of trials. Yoked-reward groups were also included to control for reward delivery. PFC DA efflux during decision making decreased or increased over a session, corresponding to changes in large/risky reward probabilities. Similar profiles were observed from yoked-rewarded rats, suggesting that fluctuations in PFC DA reflect changes in the relative rate of reward received. NAc DA efflux also showed decreasing/increasing trends over the session during both tasks. However, DA efflux was higher during decision making on free- versus forced-choice trials and during periods of greater reward uncertainty. Moreover, changes in NAc DA closely tracked shifts in choice biases. These data reveal dynamic and dissociable fluctuations in PFC and NAc DA transmission associated with different aspects of risk-based decision making. PFC DA may signal changes in reward availability that facilitates modification of choice biases, whereas NAc DA encodes integrated signals about reward rates, uncertainty, and choice, reflecting implementation of decision policies.

Thursday, November 15, 2012

How Glitter Relates to Gold: Similarity-Dependent Reward Prediction Errors in the Human Striatum


Thorsten Kahnt, Soyoung Q Park, Christopher J. Burke, and Philippe N. Tobler
J. Neurosci. 2012;32 16521-16529
http://www.jneurosci.org/cgi/content/abstract/32/46/16521?etoc

Optimal choices benefit from previous learning. However, it is not clear how previously learned stimuli influence behavior to novel but similar stimuli. One possibility is to generalize based on the similarity between learned and current stimuli. Here, we use neuroscientific methods and a novel computational model to inform the question of how stimulus generalization is implemented in the human brain. Behavioral responses during an intradimensional discrimination task showed similarity-dependent generalization. Moreover, a peak shift occurred, i.e., the peak of the behavioral generalization gradient was displaced from the rewarded conditioned stimulus in the direction away from the unrewarded conditioned stimulus. To account for the behavioral responses, we designed a similarity-based reinforcement learning model wherein prediction errors generalize across similar stimuli and update their value. We show that this model predicts a similarity-dependent neural generalization gradient in the striatum as well as changes in responding during extinction. Moreover, across subjects, the width of generalization was negatively correlated with functional connectivity between the striatum and the hippocampus. This result suggests that hippocampus–striatal connections contribute to stimulus-specific value updating by controlling the width of generalization. In summary, our results shed light onto the neurobiology of a fundamental, similarity-dependent learning principle that allows learning the value of stimuli that have never been encountered.
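
A rough sketch of a similarity-based learning rule of the kind described in the abstract, in which a prediction error for the trained stimulus also updates the values of similar stimuli via a Gaussian similarity kernel; the stimulus dimension, kernel width, and learning rate are made-up numbers, not those fit in the paper.

```python
import math

# Toy similarity-based value updating along a single stimulus dimension
# (orientation, say). A prediction error for the trained stimulus spills over to
# neighbouring stimuli in proportion to a Gaussian similarity kernel, producing a
# generalization gradient around the rewarded stimulus.

stimuli = list(range(0, 181, 20))          # stimulus dimension, arbitrary units
values = {s: 0.0 for s in stimuli}

def update(trained, reward, alpha=0.3, width=25.0):
    error = reward - values[trained]       # prediction error at the trained stimulus
    for s in stimuli:
        similarity = math.exp(-((s - trained) ** 2) / (2 * width ** 2))
        values[s] += alpha * similarity * error

for _ in range(30):
    update(trained=80, reward=1.0)         # rewarded stimulus (CS+) at 80
    update(trained=40, reward=0.0)         # unrewarded stimulus (CS-) at 40

for s in stimuli:
    print(s, round(values[s], 2))
# The value gradient is centred on the CS+ and pulled down on the CS- side.
```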

Wednesday, November 14, 2012

Action-Specific Value Signals in Reward-Related Regions of the Human Brain


Thomas H. B. FitzGerald, Karl J. Friston, and Raymond J. Dolan
J. Neurosci. 2012;32 16417-16423
http://www.jneurosci.org/cgi/content/abstract/32/46/16417?etoc

Estimating the value of potential actions is crucial for learning and adaptive behavior. We know little about how the human brain represents action-specific value outside of motor areas. This is, in part, due to a difficulty in detecting the neural correlates of value using conventional (region of interest) functional magnetic resonance imaging (fMRI) analyses, due to a potential distributed representation of value. We address this limitation by applying a recently developed multivariate decoding method to high-resolution fMRI data in subjects performing an instrumental learning task. We found evidence for action-specific value signals in circumscribed regions, specifically ventromedial prefrontal cortex, putamen, thalamus, and insula cortex. In contrast, action-independent value signals were more widely represented across a large set of brain areas. Using multivariate Bayesian model comparison, we formally tested whether value–specific responses are spatially distributed or coherent. We found strong evidence that both action-specific and action-independent value signals are represented in a distributed fashion. Our results suggest that a surprisingly large number of classical reward-related areas contain distributed representations of action-specific values, representations that are likely to mediate between reward and adaptive behavior.

Reward Stability Determines the Contribution of Orbitofrontal Cortex to Adaptive Behavior


Justin S. Riceberg and Matthew L. Shapiro
J. Neurosci. 2012;32 16402-16409
http://www.jneurosci.org/cgi/content/abstract/32/46/16402?etoc

Animals respond to changing contingencies to maximize reward. The orbitofrontal cortex (OFC) is important for flexible responding when established contingencies change, but the underlying cognitive mechanisms are debated. We tested rats with sham or OFC lesions in radial maze tasks that varied the frequency of contingency changes and measured both perseverative and non-perseverative errors. When contingencies were changed rarely, rats with sham lesions learned quickly and performed better than rats with OFC lesions. Rats with sham lesions made fewer non-perseverative errors, rarely entering non-rewarded arms, and more win–stay responses by returning to recently rewarded arms compared with rats with OFC lesions. When contingencies were changed rapidly, however, rats with sham lesions learned slower, made more non-perseverative errors and fewer lose–shift responses, and returned more often to non-rewarded arms than rats with OFC lesions. The results support the view that the OFC integrates reward history and suggest that the availability of outcome expectancy signals can either improve or impair adaptive responding depending on reward stability.

Friday, November 9, 2012

Some Consequences of Having Too Little


Anuj K. Shah, Sendhil Mullainathan, Eldar Shafir
Science 2 November 2012:
Vol. 338 no. 6107 pp. 682-685
DOI: 10.1126/science.1222426

From Science. Poor people end up behaving in ways that make them even poorer (such as overborrowing) because they concentrate too hard on the problem right in front of them. The same applies not only to being poor in money but also to being poor in time. http://www.sciencemag.org/content/338/6107/682

Poor individuals often engage in behaviors, such as excessive borrowing, that reinforce the conditions of poverty. Some explanations for these behaviors focus on personality traits of the poor. Others emphasize environmental factors such as housing or financial access. We instead consider how certain behaviors stem simply from having less. We suggest that scarcity changes how people allocate attention: It leads them to engage more deeply in some problems while neglecting others. Across several experiments, we show that scarcity leads to attentional shifts that can help to explain behaviors such as overborrowing. We discuss how this mechanism might also explain other puzzles of poverty.

Wednesday, November 7, 2012

The Emergence and Representation of Knowledge about Social and Nonsocial Hierarchies


Dharshan Kumaran, Hans Ludwig Melo, Emrah Duzel
Neuron, Volume 76, Issue 3, 653-666, 8 November 2012

Learning of social hierarchies and the amygdala. Amygdala activity correlates with how far hierarchy learning has progressed, and individual differences in learning performance can be explained by amygdala size. The hippocampus, by contrast, is involved in learning rank orderings in general, not just social ones. http://www.cell.com/neuron/abstract/S0896-6273(12)00889-6

Primates are remarkably adept at ranking each other within social hierarchies, a capacity that is critical to successful group living. Surprisingly little, however, is understood about the neurobiology underlying this quintessential aspect of primate cognition. In our experiment, participants first acquired knowledge about a social and a nonsocial hierarchy and then used this information to guide investment decisions. We found that neural activity in the amygdala tracked the development of knowledge about a social, but not a nonsocial, hierarchy. Further, structural variations in amygdala gray matter volume accounted for interindividual differences in social transitivity performance. Finally, the amygdala expressed a neural signal selectively coding for social rank, whose robustness predicted the influence of rank on participants’ investment decisions. In contrast, we observed that the linear structure of both social and nonsocial hierarchies was represented at a neural level in the hippocampus. Our study implicates the amygdala in the emergence and representation of knowledge about social hierarchies and distinguishes the domain-general contribution of the hippocampus.

Neural Mechanisms of Speed-Accuracy Tradeoff


Richard P. Heitz, Jeffrey D. Schall
Neuron, Volume 76, Issue 3, 616-628, 8 November 2012

Intelligent agents balance speed of responding with accuracy of deciding. Stochastic accumulator models commonly explain this speed-accuracy tradeoff by strategic adjustment of response threshold. Several laboratories identify specific neurons in prefrontal and parietal cortex with this accumulation process, yet no neurophysiological correlates of speed-accuracy tradeoff have been described. We trained macaque monkeys to trade speed for accuracy on cue during visual search and recorded the activity of neurons in the frontal eye field. Unpredicted by any model, we discovered that speed-accuracy tradeoff is accomplished through several distinct adjustments. Visually responsive neurons modulated baseline firing rate, sensory gain, and the duration of perceptual processing. Movement neurons triggered responses with activity modulated in a direction opposite of model predictions. Thus, current stochastic accumulator models provide an incomplete description of the neural processes accomplishing speed-accuracy tradeoffs. The diversity of neural mechanisms was reconciled with the accumulator framework through an integrated accumulator model constrained by requirements of the motor system.

Inactivating Anterior Insular Cortex Reduces Risk Taking


Hironori Ishii, Shinya Ohara, Philippe N. Tobler, Ken-Ichiro Tsutsui, and Toshio Iijima
J. Neurosci. 2012;32 16031-16039
http://www.jneurosci.org/cgi/content/abstract/32/45/16031?etoc

Rats with activity in the anterior insular cortex (orbitofrontal cortex) suppressed become more risk averse (risk seeking). Behavior in control tasks that involve no risk does not change. Both the anterior insula and the OFC play important roles in decision making under risk, but their effects go in opposite directions. http://www.jneurosci.org/content/32/45/16031

We often have to make risky decisions between alternatives with outcomes that can be better or worse than the outcomes of safer alternatives. Although previous studies have implicated various brain regions in risky decision making, it remains unknown which regions are crucial for balancing whether to take a risk or play it safe. Here, we focused on the anterior insular cortex (AIC), the causal involvement of which in risky decision making is still unclear, although human imaging studies have reported AIC activation in various gambling tasks. We investigated the effects of temporarily inactivating the AIC on rats' risk preference in two types of gambling tasks, one in which risk arose in reward amount and one in which it arose in reward delay. As a control within the same subjects, we inactivated the adjacent orbitofrontal cortex (OFC), which is well known to affect risk preference. In both gambling tasks, AIC inactivation decreased risk preference whereas OFC inactivation increased it. In risk-free control situations, AIC and OFC inactivations did not affect decision making. These results suggest that the AIC is causally involved in risky decision making and promotes risk taking. The AIC and OFC may be crucial for the opposing motives of whether to take a risk or avoid it.

Differential Reward Coding in the Subdivisions of the Primate Caudate during an Oculomotor Task


Kae Nakamura, Gustavo S. Santos, Ryuichi Matsuzaki, and Hiroyuki Nakahara
J. Neurosci. 2012;32 15963-15982
http://www.jneurosci.org/cgi/content/abstract/32/45/15963?etoc

Neural recordings from the monkey striatum (caudate nucleus) during a task in which the mapping between saccade direction and reward size changes from block to block. The direction-reward contingency is coded by dorsal neurons, reward size by central and ventral neurons, and past reward history by dorsal and central neurons, suggesting that different kinds of reward information are processed in parallel within the caudate.

The basal ganglia play a pivotal role in reward-oriented behavior. The striatum, an input channel of the basal ganglia, is composed of subdivisions that are topographically connected with different cortical and subcortical areas. To test whether reward information is differentially processed in the different parts of the striatum, we compared reward-related neuronal activity along the dorsolateral–ventromedial axis in the caudate nucleus of monkeys performing an asymmetrically rewarded oculomotor task. In a given block, a target in one position was associated with a large reward, whereas the other target was associated with a small reward. The target position–reward value contingency was switched between blocks. We found the following: (1) activity that reflected the block-wise reward contingency emerged before the appearance of a visual target, and it was more prevalent in the dorsal, rather than central and ventral, caudate; (2) activity that was positively related to the reward size of the current trial was evident, especially after reward delivery, and it was more prevalent in the ventral and central, rather than dorsal, caudate; and (3) activity that was modulated by the memory of the outcomes of the previous trials was evident in the dorsal and central caudate. This multiple reward information, together with the target-direction information, was represented primarily by individual caudate neurons, and the different reward information was represented in caudate subpopulations with distinct electrophysiological properties, e.g., baseline firing and spike width. These results suggest parallel processing of different reward information by the basal ganglia subdivisions defined by extrinsic connections and intrinsic properties.

Dorsomedial Prefrontal Cortex Mediates Rapid Evaluations Predicting the Outcome of Romantic Interactions


Jeffrey C. Cooper, Simon Dunne, Teresa Furey, and John P. O'Doherty
J. Neurosci. 2012;32 15647-15656
http://www.jneurosci.org/cgi/content/abstract/32/45/15647?etoc

Humans frequently make real-world decisions based on rapid evaluations of minimal information; for example, should we talk to an attractive stranger at a party? Little is known, however, about how the brain makes rapid evaluations with real and immediate social consequences. To address this question, we scanned participants with functional magnetic resonance imaging (fMRI) while they viewed photos of individuals that they subsequently met at real-life “speed-dating” events. Neural activity in two areas of dorsomedial prefrontal cortex (DMPFC), paracingulate cortex, and rostromedial prefrontal cortex (RMPFC) was predictive of whether each individual would be ultimately pursued for a romantic relationship or rejected. Activity in these areas was attributable to two distinct components of romantic evaluation: either consensus judgments about physical beauty (paracingulate cortex) or individualized preferences based on a partner's perceived personality (RMPFC). These data identify novel computational roles for these regions of the DMPFC in even very rapid social evaluations. Even a first glance, then, can accurately predict romantic desire, but that glance involves a mix of physical and psychological judgments that depend on specific regions of DMPFC.

Wednesday, October 31, 2012

Temporal Production Signals in Parietal Cortex


Blaine A. Schneider, Geoffrey M. Ghose
PLoS Biol 10(10): e1001413. doi:10.1371/journal.pbio.1001413

We often perform movements and actions on the basis of internal motivations and without any explicit instructions or cues. One common example of such behaviors is our ability to initiate movements solely on the basis of an internally generated sense of the passage of time. In order to isolate the neuronal signals responsible for such timed behaviors, we devised a task that requires nonhuman primates to move their eyes consistently at regular time intervals in the absence of any external stimulus events and without an immediate expectation of reward. Despite the lack of sensory information, we found that animals were remarkably precise and consistent in timed behaviors, with standard deviations on the order of 100 ms. To examine the potential neural basis of this precision, we recorded from single neurons in the lateral intraparietal area (LIP), which has been implicated in the planning and execution of eye movements. In contrast to previous studies that observed a build-up of activity associated with the passage of time, we found that LIP activity decreased at a constant rate between timed movements. Moreover, the magnitude of activity was predictive of the timing of the impending movement. Interestingly, this relationship depended on eye movement direction: activity was negatively correlated with timing when the upcoming saccade was toward the neuron's response field and positively correlated when the upcoming saccade was directed away from the response field. This suggests that LIP activity encodes timed movements in a push-pull manner by signaling for both saccade initiation towards one target and prolonged fixation for the other target. Thus timed movements in this task appear to reflect the competition between local populations of task relevant neurons rather than a global timing signal.

The Impact of the Posterior Parietal and Dorsolateral Prefrontal Cortices on the Optimization of Long-Term versus Immediate Value


Brian G. Essex, Sarah A. Clinton, Lucas R. Wonderley, and David H. Zald
J. Neurosci. 2012;32 15403-15413

Intertemporal choice: do you take an immediate reward or a larger reward later? Suppressing activity in the right posterior parietal cortex or the right dorsolateral prefrontal cortex with TMS makes people more likely to choose the immediate option (the same tendency appears when losses are used instead of rewards). http://www.jneurosci.org/cgi/content/abstract/32/44/15403?etoc

fMRI research suggests that both the posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC) help individuals select better long-term monetary gains during intertemporal choice. Previous neuromodulation research has demonstrated that disruption of the DLPFC interferes with this ability. However, it is unclear whether the PPC performs a similarly important function during intertemporal choice, and whether the functions performed by either region impact choices involving losses. In the current study, we used low-frequency repetitive transcranial magnetic stimulation to examine whether the PPC and DLPFC both normally facilitate selection of gains and losses with better long-term value than alternatives during intertemporal choice. We found that disruption of either region in the right hemisphere led to greater selection of both gains and losses that had better immediate, but worse long-term value than alternatives. This indicates that activity in both regions helps individuals optimize long-term value relative to immediate value in general, rather than being specific to choices involving gains. However, there were slightly different patterns of effects following disruption of the right PPC and right DLPFC, suggesting that each region may perform somewhat different functions that help optimize choice.
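
As a reminder of what "better long-term value" means here, a toy hyperbolic-discounting calculation shows how a steeper discount rate flips preference toward the immediate option, the direction of the behavioral shift after TMS; the discounting form, amounts, delays, and k values are my own illustration, not the authors' model.

```python
# Toy intertemporal choice with hyperbolic discounting: V = A / (1 + k * delay).
# A larger discount rate k shrinks the present value of the delayed amount, so
# the smaller immediate option wins. All numbers here are purely illustrative.

def discounted_value(amount, delay_days, k):
    return amount / (1.0 + k * delay_days)

immediate = (20.0, 0)        # e.g., $20 now
delayed = (50.0, 30)         # e.g., $50 in 30 days

for k in (0.01, 0.2):
    v_now = discounted_value(*immediate, k=k)
    v_later = discounted_value(*delayed, k=k)
    choice = "delayed" if v_later > v_now else "immediate"
    print(f"k={k}: now={v_now:.1f}, later={v_later:.1f} -> choose the {choice} option")
```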

Tuesday, October 30, 2012

Hippocampus and value-based decision making


Studies showing that the hippocampus, a structure associated with memory, is also important for reward processing. This seems natural given that memories of events are colored by the emotions (reward/punishment) experienced when they happened, but apparently it had not been clearly demonstrated before.

Stimulus A (B) is paired with a (b); then a is paired with reward; then people choose between A and B. People tend to choose A, and the size of that bias can be predicted from hippocampal fMRI activity. The hippocampus is important for linking stimulus-stimulus associations with stimulus-reward associations.
http://www.sciencemag.org/content/338/6104/270

Rat hippocampal CA1 neurons encode the reward information needed for value-based decision making, such as action values, action outcomes, and the reward prediction for the action actually taken (the chosen value).
http://www.jneurosci.org/content/32/43/15053

Wednesday, October 24, 2012

Hippocampal Neural Correlates for Values of Experienced Events


Hyunjung Lee, Jeong-Wook Ghim, Hoseok Kim, Daeyeol Lee, and Min Whan Jung
J. Neurosci. 2012;32 15053-15065
http://www.jneurosci.org/cgi/content/abstract/32/43/15053?etoc

Newly experienced events are often remembered together with how rewarding the experiences are personally. Although the hippocampus is a candidate structure where subjective values are integrated with other elements of episodic memory, it is uncertain whether and how the hippocampus processes value-related information. We examined how activity of dorsal CA1 and dorsal subicular neurons in rats performing a dynamic foraging task was related to reward values that were estimated using a reinforcement learning model. CA1 neurons carried significant signals related to action values before the animal revealed its choice behaviorally, indicating that the information on the expected values of potential choice outcomes was available in CA1. Moreover, after the outcome of the animal's goal choice was revealed, CA1 neurons carried robust signals for the value of chosen action and they temporally overlapped with the signals related to the animal's goal choice and its outcome, indicating that all the signals necessary to evaluate the outcome of an experienced event converged in CA1. On the other hand, value-related signals were substantially weaker in the subiculum. These results suggest a major role of CA1 in adding values to experienced events during episodic memory encoding. Given that CA1 neuronal activity is modulated by diverse attributes of an experienced event, CA1 might be a place where all the elements of episodic memory are integrated.

Changes in Neural Connectivity Underlie Decision Threshold Modulation for Reward Maximization


Nikos Green, Guido P. Biele, and Hauke R. Heekeren
J. Neurosci. 2012;32 14942-14950
http://www.jneurosci.org/cgi/content/abstract/32/43/14942?etoc

Using neuroimaging in combination with computational modeling, this study shows that decision threshold modulation for reward maximization is accompanied by a change in effective connectivity within corticostriatal and cerebellar–striatal brain systems. Research on perceptual decision making suggests that people make decisions by accumulating sensory evidence until a decision threshold is crossed. This threshold can be adjusted to changing circumstances, to maximize rewards. Decision making thus requires effectively managing the amount of accumulated evidence versus the amount of available time. Importantly, the neural substrate of this decision threshold modulation is unknown. Participants performed a perceptual decision-making task in blocks with identical duration but different reward schedules. Behavioral and modeling results indicate that human subjects modulated their decision threshold to maximize net reward. Neuroimaging results indicate that decision threshold modulation was achieved by adjusting effective connectivity within corticostriatal and cerebellar–striatal brain systems, the former being responsible for processing of accumulated sensory evidence and the latter being responsible for automatic, subsecond temporal processing. Participants who adjusted their threshold to a greater extent (and gained more net reward) also showed a greater modulation of effective connectivity. These results reveal a neural mechanism that underlies decision makers' abilities to adjust to changing circumstances to maximize reward.
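
To make "decision threshold modulation for reward maximization" concrete, here is a toy scan over thresholds in a noisy accumulation process, reporting accuracy, mean decision time, and reward per unit time; the parameters and the error-timeout assumption are invented, and this is not the authors' fitted model or connectivity analysis.

```python
import random

# Toy threshold scan for a noisy evidence-accumulation decision. Raising the
# threshold trades speed for accuracy; with a fixed inter-trial interval and an
# error timeout, reward per unit time peaks at an intermediate threshold, which
# is the quantity that subjects in blocked reward schedules are thought to tune.

def trial(threshold, drift=0.1, noise=1.0):
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + random.gauss(0.0, noise)
        t += 1
    return (x > 0), t                      # correct if the upper bound is reached

def reward_rate(threshold, n=2000, intertrial=20.0, error_timeout=100.0):
    results = [trial(threshold) for _ in range(n)]
    accuracy = sum(c for c, _ in results) / n
    mean_rt = sum(t for _, t in results) / n
    time_per_trial = mean_rt + intertrial + (1.0 - accuracy) * error_timeout
    return accuracy / time_per_trial, accuracy, mean_rt

random.seed(2)
for th in (1.0, 3.0, 6.0, 12.0):
    rate, acc, rt = reward_rate(th)
    print(f"threshold={th}: accuracy={acc:.2f}, mean RT={rt:.1f}, reward/time={rate:.4f}")
```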

Friday, October 19, 2012

Translating upwards: linking the neural and social sciences via neuroeconomics


Clement Levallois, John A. Clithero, Paul Wouters, Ale Smidts and Scott A. Huettel
Nature Reviews Neuroscience 13, 789-797 (November 2012) | doi:10.1038/nrn3354

The social and neural sciences share a common interest in understanding the mechanisms that underlie human behaviour. However, interactions between neuroscience and social science disciplines remain strikingly narrow and tenuous. We illustrate the scope and challenges for such interactions using the paradigmatic example of neuroeconomics. Using quantitative analyses of both its scientific literature and the social networks in its intellectual community, we show that neuroeconomics now reflects a true disciplinary integration, such that research topics and scientific communities with interdisciplinary span exert greater influence on the field. However, our analyses also reveal key structural and intellectual challenges in balancing the goals of neuroscience with those of the social sciences. To address these challenges, we offer a set of prescriptive recommendations for directing future research in neuroeconomics.

Preference by Association: How Memory Mechanisms in the Hippocampus Bias Decisions


G. Elliott Wimmer, Daphna Shohamy
Science 12 October 2012:
Vol. 338 no. 6104 pp. 270-273
DOI: 10.1126/science.1223252

Every day people make new choices between alternatives that they have never directly experienced. Yet, such decisions are often made rapidly and confidently. Here, we show that the hippocampus, traditionally known for its role in building long-term declarative memories, enables the spread of value across memories, thereby guiding decisions between new choice options. Using functional brain imaging in humans, we discovered that giving people monetary rewards led to activation of a preestablished network of memories, spreading the positive value of reward to nonrewarded items stored in memory. Later, people were biased to choose these nonrewarded items. This decision bias was predicted by activity in the hippocampus, reactivation of associated memories, and connectivity between memory and reward regions in the brain. These findings explain how choices among new alternatives emerge automatically from the associative mechanisms by which the brain builds memories. Further, our findings demonstrate a previously unknown role for the hippocampus in value-based decisions.

Wednesday, October 17, 2012

Selectively altering belief formation in the human brain


Tali Sharot, Ryota Kanai, David Marston, Christoph W. Korn, Geraint Rees, and Raymond J. Dolan
PNAS October 16, 2012 vol. 109 no. 42 17058-17062

Humans form beliefs asymmetrically; we tend to discount bad news but embrace good news. This reduced impact of unfavorable information on belief updating may have important societal implications, including the generation of financial market bubbles, ill preparedness in the face of natural disasters, and overly aggressive medical decisions. Here, we selectively improved people’s tendency to incorporate bad news into their beliefs by disrupting the function of the left (but not right) inferior frontal gyrus using transcranial magnetic stimulation, thereby eliminating the engrained “good news/bad news effect.” Our results provide an instance of how selective disruption of regional human brain function paradoxically enhances the ability to incorporate unfavorable information into beliefs of vulnerability.

Thursday, October 11, 2012

Theory and Simulation in Neuroscience


Science 5 October 2012:
Vol. 338 no. 6103 pp. 60-65
DOI: 10.1126/science.1227356
Wulfram Gerstner, Henning Sprekeler, Gustavo Deco

Modeling work in neuroscience can be classified using two different criteria. The first one is the complexity of the model, ranging from simplified conceptual models that are amenable to mathematical analysis to detailed models that require simulations in order to understand their properties. The second criterion is that of direction of workflow, which can be from microscopic to macroscopic scales (bottom-up) or from behavioral target functions to properties of components (top-down). We review the interaction of theory and simulation using examples of top-down and bottom-up studies and point to some current developments in the fields of computational and theoretical neuroscience.

Tuesday, October 9, 2012

Sensitivity to Temporal Reward Structure in Amygdala Neurons


Maria A. Bermudez, Carl Göbel, Wolfram Schultz
Current Biology, Volume 22, Issue 19, 1839-1844, 06 September 2012

The time of reward and the temporal structure of reward occurrence fundamentally influence behavioral reinforcement and decision processes [1,2,3,4,5,6,7,8,9,10,11]. However, despite knowledge about timing in sensory and motor systems [12,13,14,15,16,17], we know little about temporal mechanisms of neuronal reward processing. In this experiment, visual stimuli predicted different instantaneous probabilities of reward occurrence that resulted in specific temporal reward structures. Licking behavior demonstrated that the animals had developed expectations for the time of reward that reflected the instantaneous reward probabilities. Neurons in the amygdala, a major component of the brain's reward system [18,19,20,21,22,23,24,25,26,27,28,29], showed two types of reward signal, both of which were sensitive to the expected time of reward. First, the time courses of anticipatory activity preceding reward delivery followed the specific instantaneous reward probabilities and thus paralleled the temporal reward structures. Second, the magnitudes of responses following reward delivery covaried with the instantaneous reward probabilities, reflecting the influence of temporal reward structures at the moment of reward delivery. In being sensitive to temporal reward structure, the reward signals of amygdala neurons reflected the temporally specific expectations of reward. The data demonstrate an active involvement of amygdala neurons in timing processes that are crucial for reward function.

Generalized Perceptual Learning in the Absence of Sensory Adaptation


Hila Harris, Michael Gliksberg, Dov Sagi
Current Biology, Volume 22, Issue 19, 1813-1817, 23 August 2012

Repeated performance of visual tasks leads to long-lasting increased sensitivity to the trained stimulus, a phenomenon termed perceptual learning. A ubiquitous property of visual learning is specificity: performance improvement obtained during training applies only for the trained stimulus features, which are thought to be encoded in sensory brain regions [1,2,3]. However, recent results show performance decrements with an increasing number of trials within a training session [4,5]. This selective sensitivity reduction is thought to arise due to sensory adaptation [5,6]. Here we show, using the standard texture discrimination task [7], that location specificity is a consequence of sensory adaptation; that is, it results from selective reduced sensitivity due to repeated stimulation. Observers practiced the texture task with the target presented at a fixed location within a background texture. To remove adaptation, we added task-irrelevant (“dummy”) trials with the texture oriented 45° relative to the target’s orientation, known to counteract adaptation [8]. The results indicate location specificity with the standard paradigm, but complete generalization to a new location when adaptation is removed. We suggest that adaptation interferes with invariant pattern-discrimination learning by inducing network-dependent changes in local visual representations.

Thursday, October 4, 2012

Network Resets in Medial Prefrontal Cortex Mark the Onset of Behavioral Uncertainty


Mattias P. Karlsson, Dougal G. R. Tervo, Alla Y. Karpova
Science 5 October 2012: Vol. 338 no. 6103 pp. 135-139

Regions within the prefrontal cortex are thought to process beliefs about the world, but little is known about the circuit dynamics underlying the formation and modification of these beliefs. Using a task that permits dissociation between the activity encoding an animal’s internal state and that encoding aspects of behavior, we found that transient increases in the volatility of activity in the rat medial prefrontal cortex accompany periods when an animal’s belief is modified after an environmental change. Activity across the majority of sampled neurons underwent marked, abrupt, and coordinated changes when prior belief was abandoned in favor of exploration of alternative strategies. These dynamics reflect network switches to a state of instability, which diminishes over the period of exploration as new stable representations are formed.

In Monkeys Making Value-Based Decisions, LIP Neurons Encode Cue Salience and Not Action Value


Marvin L. Leathers, Carl R. Olson
Science 5 October 2012: Vol. 338 no. 6103 pp. 132-135

LIP neuron activity correlates positively with both the size of the reward and the size of the punishment. In other words, these neurons encode salience rather than value (if it were value, activity should correlate negatively with punishment size). So how does salience feed into decision making? Perhaps through the learning rate?

In monkeys deciding between alternative saccadic eye movements, lateral intraparietal (LIP) neurons representing each saccade fire at a rate proportional to the value of the reward expected upon its completion. This observation has been interpreted as indicating that LIP neurons encode saccadic value and that they mediate value-based decisions between saccades. Here, we show that LIP neurons representing a given saccade fire strongly not only if it will yield a large reward but also if it will incur a large penalty. This finding indicates that LIP neurons are sensitive to the motivational salience of cues. It is compatible neither with the idea that LIP neurons represent action value nor with the idea that value-based decisions take place in LIP neurons.

Wednesday, October 3, 2012

Hard to “tune in”: neural mechanisms of live face-to-face interaction with high-functioning autistic spectrum disorder


Hiroki C. Tanabe, Hirotaka Kosaka, Daisuke N. Saito, Takahiko Koike, Masamichi J. Hayashi, Keise Izuma, Hidetsugu Komeda, Makoto Ishitobi, Masao Omori, Toshio Munesue, Hidehiko Okazawa, Yuji Wada, and Norihiro Sadato
Front. Hum. Neurosci. 6:268. doi: 10.3389/fnhum.2012.00268

Persons with autism spectrum disorders (ASD) are known to have difficulty with eye contact (EC). This may make face-to-face communication with them difficult for their partners. To elucidate the neural substrates of live inter-subject interaction of ASD patients and normal subjects, we conducted hyper-scanning functional MRI with 21 subjects with autistic spectrum disorder (ASD) paired with typically-developed (normal) subjects, and with 19 pairs of normal subjects as a control. Baseline EC was maintained while subjects performed a real-time joint-attention task. The task-related effects were modeled out, and inter-individual correlation analysis was performed on the residual time-course data. ASD–Normal pairs were less accurate at detecting gaze direction than Normal–Normal pairs. Performance was impaired both in ASD subjects and in their normal partners. The left occipital pole (OP) activation by gaze processing was reduced in ASD subjects, suggesting that deterioration of eye-cue detection in ASD is related to impairment of early visual processing of gaze. On the other hand, their normal partners showed greater activity in the bilateral occipital cortex and the right prefrontal area, indicating a compensatory workload. Inter-brain coherence in the right IFG that was observed in the Normal–Normal pairs (Saito et al., 2010) during EC diminished in ASD–Normal pairs. Intra-brain functional connectivity between the right IFG and right superior temporal sulcus (STS) in normal subjects paired with ASD subjects was reduced compared with that in Normal–Normal pairs. This functional connectivity was positively correlated with the normal partners' performance on eye-cue detection. Considering the integrative role of the right STS in gaze processing, inter-subject synchronization during EC may be a prerequisite for eye-cue detection by the normal partner.

Impaired Learning of Social Compared to Monetary Rewards in Autism


Alice Lin, Antonio Rangel, and Ralph Adolphs
Front. Neurosci. 6:143. doi: 10.3389/fnins.2012.00143

A leading hypothesis to explain the social dysfunction in people with autism spectrum disorders (ASD) is that they exhibit a deficit in reward processing and motivation specific to social stimuli. However, there have been few direct tests of this hypothesis to date. Here we used an instrumental reward learning task that contrasted learning with social rewards (pictures of positive and negative faces) against learning with monetary reward (winning and losing money). The two tasks were structurally identical except for the type of reward, permitting direct comparisons. We tested 10 high-functioning people with ASD (7M, 3F) and 10 healthy controls who were matched on gender, age, and education. We found no significant differences between the two groups in overall behavioral ability to discriminate positive from negative slot machines, reaction times, or valence ratings. However, there was a specific impairment in the ASD group in learning to choose social rewards, compared to monetary rewards: they had a significantly lower cumulative number of choices of the most rewarding social slot machine, and had a significantly slower initial learning rate for the socially rewarding slot machine, compared to the controls. The findings show a deficit in reward learning in ASD that is greater for social rewards than for monetary rewards, and support the hypothesis of a disproportionate impairment in social reward processing in ASD.

A computational approach to “free will” constrained by the games we play


Kenneth T. Kishida
Front. Integr. Neurosci. 6:85. doi: 10.3389/fnint.2012.00085

Human choice is not free—we are bounded by a multitude of biological constraints. Yet, within the various landscapes we face, we do express choice, preference, and varying degrees of so-called willful behavior. Moreover, it appears that the capacity for choice in humans is variable. Empirical studies aimed at investigating the experience of “free will” will benefit from theoretical disciplines that constrain the language used to frame the relevant issues. The combination of game theory and computational reinforcement learning theory with empirical methods is already beginning to provide valuable insight into the biological variables underlying capacity for choice in humans and how things may go awry in individuals with brain disorders. These disciplines operate within abstract quantitative landscapes, but have successfully been applied to investigate strategic and adaptive human choice guided by formal notions of optimal behavior. Psychiatric illness is an extreme, but interesting arena for studying human capacity for choice. The experiences and behaviors of patients suggest these individuals fundamentally suffer from a diminished capacity of willful choice. Herein, I will briefly discuss recent applications of computationally guided approaches to human choice behavior and the underlying neurobiology. These approaches can be integrated into empirical investigation at multiple temporal scales of analysis including the growing body of experiments in human functional magnetic resonance imaging (fMRI), and newly emerging sub-second electrochemical and electrophysiological measurements in the human brain. These cross-disciplinary approaches hold promise for revealing the underlying neurobiological mechanisms for the variety of choice capacity in humans.

Twenty-Five Lessons from Computational Neuromodulation


Peter Dayan
Neuron, Volume 76, Issue 1, 240-256, 4 October 2012

Neural processing faces three rather different, and perniciously tied, communication problems. First, computation is radically distributed, yet point-to-point interconnections are limited. Second, the bulk of these connections are semantically uniform, lacking differentiation at their targets that could tag particular sorts of information. Third, the brain's structure is relatively fixed, and yet different sorts of input, forms of processing, and rules for determining the output are appropriate under different, and possibly rapidly changing, conditions. Neuromodulators address these problems by their multifarious and broad distribution, by enjoying specialized receptor types in partially specific anatomical arrangements, and by their ability to mold the activity and sensitivity of neurons and the strength and plasticity of their synapses. Here, I offer a computationally focused review of algorithmic and implementational motifs associated with neuromodulators, using decision making in the face of uncertainty as a running example.

Effects of Decision Variables and Intraparietal Stimulation on Sensorimotor Oscillatory Activity in the Human Brain


Ian C. Gould, Anna C. Nobre, Valentin Wyart, and Matthew F. S. Rushworth
J. Neurosci. 2012;32 13805-13818
http://www.jneurosci.org/cgi/content/abstract/32/40/13805?etoc

To decide effectively, information must not only be integrated from multiple sources, but it must be distributed across the brain if it is to influence structures such as motor cortex that execute choices. Human participants integrated information from multiple, but only partially informative, cues in a probabilistic reasoning task in an optimal manner. We tested whether lateralization of alpha- and beta-band oscillatory brain activity over sensorimotor cortex reflected decision variables such as the sum of the evidence provided by observed cues, a key quantity for decision making, and whether this could be dissociated from an update signal reflecting processing of the most recent cue stimulus. Alpha- and beta-band activity in the electroencephalogram reflected the logarithm of the likelihood ratio associated with each piece of information witnessed, and the same quantity associated with the previous cues. Only the beta-band, however, reflected the most recent cue in a manner that suggested it reflected updating processes associated with cue processing. In a second experiment, transcranial magnetic stimulation-induced disruption was used to demonstrate that the intraparietal sulcus played a causal role both in decision making and in the appearance of sensorimotor beta-band activity.
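For readers who want the decision variable in this kind of task written out: the normative quantity is simply the running sum of each cue's log likelihood ratio. A minimal sketch under assumed (made-up) cue likelihoods, not the study's actual stimulus weights:

```python
import numpy as np

# Hypothetical cue likelihoods P(cue | option A) and P(cue | option B).
p_cue_given_a = {"square": 0.8, "circle": 0.6, "triangle": 0.3}
p_cue_given_b = {"square": 0.2, "circle": 0.4, "triangle": 0.7}

def cumulative_log_likelihood_ratio(cues):
    """Return the running evidence for A over B after each observed cue."""
    llr, history = 0.0, []
    for cue in cues:
        llr += np.log(p_cue_given_a[cue] / p_cue_given_b[cue])
        history.append(llr)
    return history

print(cumulative_log_likelihood_ratio(["square", "circle", "triangle", "square"]))
```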

Thursday, September 27, 2012

Disruption of Reconsolidation Erases a Fear Memory Trace in the Human Amygdala


Thomas Agren, Jonas Engman, Andreas Frick, Johannes Björkstrand, Elna-Marie Larsson, Tomas Furmark, Mats Fredrikson
Science 21 September 2012: Vol. 337 no. 6101 pp. 1550-1552

Memories become labile when recalled. In humans and rodents alike, reactivated fear memories can be attenuated by disrupting reconsolidation with extinction training. Using functional brain imaging, we found that, after a conditioned fear memory was formed, reactivation and reconsolidation left a memory trace in the basolateral amygdala that predicted subsequent fear expression and was tightly coupled to activity in the fear circuit of the brain. In contrast, reactivation followed by disrupted reconsolidation suppressed fear, abolished the memory trace, and attenuated fear-circuit connectivity. Thus, as previously demonstrated in rodents, fear memory suppression resulting from behavioral disruption of reconsolidation is amygdala-dependent also in humans, which supports an evolutionarily conserved memory-update mechanism.

Wednesday, September 26, 2012

Cognitive Regulation during Decision Making Shifts Behavioral Control between Ventromedial and Dorsolateral Prefrontal Value Systems


Cendri A. Hutcherson, Hilke Plassmann, James J. Gross, and Antonio Rangel
J. Neurosci. 2012;32 13543-13554
http://www.jneurosci.org/cgi/content/abstract/32/39/13543?etoc

Cognitive regulation is often used to influence behavioral outcomes. However, the computational and neurobiological mechanisms by which it affects behavior remain unknown. We studied this issue using an fMRI task in which human participants used cognitive regulation to upregulate and downregulate their cravings for foods at the time of choice. We found that activity in both ventromedial prefrontal cortex (vmPFC) and dorsolateral prefrontal cortex (dlPFC) correlated with value. We also found evidence that two distinct regulatory mechanisms were at work: value modulation, which operates by changing the values assigned to foods in vmPFC and dlPFC at the time of choice, and behavioral control modulation, which operates by changing the relative influence of the vmPFC and dlPFC value signals on the action selection process used to make choices. In particular, during downregulation, activation decreased in the value-sensitive region of dlPFC (indicating value modulation) but not in vmPFC, and the relative contribution of the two value signals to behavior shifted toward the dlPFC (indicating behavioral control modulation). The opposite pattern was observed during upregulation: activation increased in vmPFC but not dlPFC, and the relative contribution to behavior shifted toward the vmPFC. Finally, ventrolateral PFC and posterior parietal cortex were more active during both upregulation and downregulation, and were functionally connected with vmPFC and dlPFC during cognitive regulation, which suggests that they help to implement the changes to the decision-making circuitry generated by cognitive regulation.

Tuesday, September 25, 2012

Differential Representations of Prior and Likelihood Uncertainty in the Human Brain


Iris Vilares, James D. Howard, Hugo L. Fernandes, Jay A. Gottfried, Konrad P. Kording
Current Biology, Volume 22, Issue 18, 1641-1648, 26 July 2012

Background
Uncertainty shapes our perception of the world and the decisions we make. Two aspects of uncertainty are commonly distinguished: uncertainty in previously acquired knowledge (prior) and uncertainty in current sensory information (likelihood). Previous studies have established that humans can take both types of uncertainty into account, often in a way predicted by Bayesian statistics. However, the neural representations underlying these parameters remain poorly understood.

Results
By varying prior and likelihood uncertainty in a decision-making task while performing neuroimaging in humans, we found that prior and likelihood uncertainty had quite distinct representations. Whereas likelihood uncertainty activated brain regions along the early stages of the visuomotor pathway, representations of prior uncertainty were identified in specialized brain areas outside this pathway, including putamen, amygdala, insula, and orbitofrontal cortex. Furthermore, the magnitude of brain activity in the putamen predicted individuals' personal tendencies to rely more on either prior or current information.

Conclusions
Our results suggest different pathways by which prior and likelihood uncertainty map onto the human brain and provide a potential neural correlate for higher reliance on current or prior knowledge. Overall, these findings offer insights into the neural pathways that may allow humans to make decisions close to the optimal defined by a Bayesian statistical framework.
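For reference, the Bayesian benchmark against which this kind of behavior is compared combines prior and likelihood in proportion to their precisions (inverse variances). A minimal numerical sketch with made-up values, not those used in the task:

```python
# Precision-weighted (Bayesian) combination of a Gaussian prior and likelihood.
prior_mean, prior_var = 0.0, 4.0      # uncertain prior knowledge
sensory_mean, sensory_var = 2.0, 1.0  # more reliable current evidence

prior_precision = 1.0 / prior_var
sensory_precision = 1.0 / sensory_var

posterior_var = 1.0 / (prior_precision + sensory_precision)
posterior_mean = posterior_var * (prior_precision * prior_mean +
                                  sensory_precision * sensory_mean)

# With the less reliable prior, the estimate is pulled toward the sensory
# evidence: posterior_mean = 1.6, posterior_var = 0.8.
print(posterior_mean, posterior_var)
```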

Wednesday, September 19, 2012

An Agent Independent Axis for Executed and Modeled Choice in Medial Prefrontal Cortex


Antoinette Nicolle, Miriam C. Klein-Flügge, Laurence T. Hunt, Ivo Vlaev, Raymond J. Dolan, Timothy E.J. Behrens
Neuron, Volume 75, Issue 6, 1114-1121, 20 September 2012

Adaptive success in social animals depends on an ability to infer the likely actions of others. Little is known about the neural computations that underlie this capacity. Here, we show that the brain models the values and choices of others even when these values are currently irrelevant. These modeled choices use the same computations that underlie our own choices, but are resolved in a distinct neighboring medial prefrontal brain region. Crucially, however, when subjects choose on behalf of a partner instead of themselves, these regions exchange their functional roles. Hence, regions that represented values of the subject’s executed choices now represent the values of choices executed on behalf of the partner, and those that previously modeled the partner now model the subject. These data tie together neural computations underlying self-referential and social inference, and in so doing establish a new functional axis characterizing the medial wall of prefrontal cortex.

Spontaneous giving and calculated greed


David G. Rand, Joshua D. Greene & Martin A. Nowak
Nature 489, 427–430 (20 September 2012)

Public-goods game. Cooperators have shorter reaction times. Forcing subjects to decide quickly (slowly) raises (lowers) the cooperation rate, and having them decide intuitively (after deliberation) likewise raises (lowers) it. → Cooperation is an automatic, default process (defection is a process that requires calculation and deliberation).

Cooperation is central to human social behaviour [1–9]. However, choosing to cooperate requires individuals to incur a personal cost to benefit others. Here we explore the cognitive basis of cooperative decision-making in humans using a dual-process framework [10–18]. We ask whether people are predisposed towards selfishness, behaving cooperatively only through active self-control; or whether they are intuitively cooperative, with reflection and prospective reasoning favouring ‘rational’ self-interest. To investigate this issue, we perform ten studies using economic games. We find that across a range of experimental designs, subjects who reach their decisions more quickly are more cooperative. Furthermore, forcing subjects to decide quickly increases contributions, whereas instructing them to reflect and forcing them to decide slowly decreases contributions. Finally, an induction that primes subjects to trust their intuitions increases contributions compared with an induction that promotes greater reflection. To explain these results, we propose that cooperation is intuitive because cooperative heuristics are developed in daily life where cooperation is typically advantageous. We then validate predictions generated by this proposed mechanism. Our results provide convergent evidence that intuition supports cooperation in social dilemmas, and that reflection can undermine these cooperative impulses.

Tuesday, September 18, 2012

Monkeys benefit from reciprocity without the cognitive burden

Reciprocal behavior in monkeys does not stem from tit-for-tat motives.

Malini Suchak and Frans B. M. de Waal
PNAS September 18, 2012 vol. 109 no. 38 15191-15196

The debate about the origins of human prosociality has focused on the presence or absence of similar tendencies in other species, and, recently, attention has turned to the underlying mechanisms. We investigated whether direct reciprocity could promote prosocial behavior in brown capuchin monkeys (Cebus apella). Twelve capuchins tested in pairs could choose between two tokens, with one being “prosocial” in that it rewarded both individuals (i.e., 1/1), and the other being “selfish” in that it rewarded the chooser only (i.e., 1/0). Each monkey’s choices with a familiar partner from their own group were compared with choices when paired with a partner from a different group. Capuchins were spontaneously prosocial, selecting the prosocial option at the same rate regardless of whether they were paired with an in-group or out-group partner. This indicates that interaction outside of the experimental setting played no role. When the paradigm was changed, such that both partners alternated making choices, prosocial preference significantly increased, leading to mutualistic payoffs. As no contingency could be detected between an individual’s choice and their partner’s previous choice, and choices occurred in rapid succession, reciprocity seemed of a relatively vague nature akin to mutualism. Having the partner receive a better reward than the chooser (i.e., 1/2) during the alternating condition increased the payoffs of mutual prosociality, and prosocial choice increased accordingly. The outcome of several controls made it hard to explain these results on the basis of reward distribution or learned preferences, and rather suggested that joint action promotes prosociality, resulting in so-called attitudinal reciprocity.

Lateralization of observational fear learning at the cortical but not thalamic level in mice


Sangwoo Kim, Ferenc Mátyás, Sukchan Lee, László Acsády, and Hee-Sup Shin
PNAS September 18, 2012 vol. 109 no. 38 15497-15501

Major cognitive and emotional faculties are dominantly lateralized in the human cerebral cortex. The mechanism of this lateralization has remained elusive owing to the inaccessibility of human brains to many experimental manipulations. In this study we demonstrate the hemispheric lateralization of observational fear learning in mice. Using unilateral inactivation as well as electrical stimulation of the anterior cingulate cortex (ACC), we show that observational fear learning is controlled by the right but not the left ACC. In contrast to the cortex, inactivation of either left or right thalamic nuclei, both of which are in reciprocal connection to ACC, induced similar impairment of this behavior. The data suggest that lateralization of negative emotions is an evolutionarily conserved trait and mainly involves cortical operations. Lateralization of the observational fear learning behavior in a rodent model will allow detailed analysis of cortical asymmetry in cognitive functions.

Saturday, September 15, 2012

J1 Visa Interview @ the US Embassy


I went to the US Embassy to apply for a J1 visa.

[Preparation]

For reference, I used:
http://d.hatena.ne.jp/oxon/20100127/1264571940
http://www.joten.info/misc/living_in_US.html
http://hontolab.org/dailylife/visa-applicaiton/
and the like.
Of course, I also read the US Embassy's pages from top to bottom and double-checked everything:
http://www.ustraveldocs.com/jp_jp/jp-niv-typej.asp

Documents I prepared:
・DS-2019
Issued by the host university.
To have it issued, I needed a certificate of my JSPS fellowship (in English, stating the stipend) and a CV.
With the back-and-forth of paperwork that followed, the whole thing took a little over a month.

・DS-160
Filled out on the Web based on the information in the DS-2019; the amount of information you have to enter is enormous.
The session times out after a while, so write down your application number and save frequently.
You also need a photo file and your history of past trips to the US.
Here I hit a snag: the uploaded photo did not show up on the final confirmation page, only the words "PHOTO ARCHIVED" (even though the previous page had said the photo upload was successful...).
There was nothing to be done, so I printed the confirmation page with "PHOTO ARCHIVED" as it was (← in the end, this was fine at the interview!).

・Proof of I-901 SEVIS fee payment
Requires information printed on the DS-2019.
Print the final page.

・Interview appointment confirmation
Obtained when booking the interview on the Web. Print it.

・Academic transcript
My doctoral program had no coursework, so I substituted the certificate of completion.

・Invitation letter from the host university
・Proof of visa application fee payment (paid via PayEasy at an ATM; receipt attached to the DS-160 confirmation page)
・ID photo (5 cm x 5 cm, attached to the DS-160 confirmation page)
・Marriage certificate (original + English translation)
・Proof of funding (JSPS fellowship certificate, in English, stating the stipend)
・Passports covering the past 10 years

Take all of these documents along in a clear folder.
Note that they must be arranged in a prescribed order:
http://japan2.usembassy.gov/pdfs/wwwf-visa-j-docs-arrangement.pdf
Note: a Letter Pack is no longer required these days; return postage is included in the visa application fee.

[Interview Day]

・Around 7:45
Arrived at the US Embassy in Tameike-Sanno and lined up in front of the gate (our appointment was for 8:15).
At that awkward hour of the morning I was dying for a restroom (and nearly had an accident, lol).

・8:00
Entry began after a thorough security check.
Cell phones and the like are left at this point.

・Around 8:10
Another security check at the entrance to the building.
After that, the documents we had brought were given a quick check and we received a queue number (my wife, who was applying for a J2 visa, and I shared a single number).
Here I finally got to a restroom. Saved...

・Around 8:30
Called up over a problem with our documents.
I panicked, thinking "Is this about the DS-160 photo upload!?", but in fact they just said, "Please attach the original Japanese marriage certificate."
I had brought it along, so I handed it to the clerk (good thing I had it just in case...).
Then back to waiting.

・Around 9:30
Called up for fingerprinting.
Maybe my fingers were dry; it took a few retries, but it was over in two or three minutes.
Back to waiting again.

・Around 10:00
Finally (at last) called for the interview.
My wife and I lined up together at a booth (like a bank teller window; you stand for the interview).
(As for this call-up: applicants expected to finish quickly seem to be called in batches of two or three parties. We were called as part of such a batch.)
The interview questions for me were:
- Where are you going?
- What will you be researching?
- Who pays your salary?
- Which university did you get your PhD from?
And for my wife:
- Why did you renew your passport after such a short interval?
(She had renewed it to change her surname after our marriage.)
All very simple questions.
The consul was a handsome, very nice guy, and the interview was over in about five minutes!

・Three days later
The visas arrived safely.
Hooray!
And all's well that ends well...

Wednesday, September 12, 2012

Distinct Information Representation and Processing for Goal-Directed Behavior in the Dorsolateral and Ventrolateral Prefrontal Cortex and the Dorsal Premotor Cortex


Tomoko Yamagata, Yoshihisa Nakayama, Jun Tanji, and Eiji Hoshi
J. Neurosci. 2012;32 12934-12949
http://www.jneurosci.org/cgi/content/abstract/32/37/12934?etoc

Although the lateral prefrontal cortex (lPFC) and dorsal premotor cortex (PMd) are thought to be involved in goal-directed behavior, the specific roles of each area still remain elusive. To characterize and compare neuronal activity in two sectors of the lPFC [dorsal (dlPFC) and ventral (vlPFC)] and the PMd, we designed a behavioral task for monkeys to explore the differences in their participation in four aspects of information processing: encoding of visual signals, behavioral goal retrieval, action specification, and maintenance of relevant information. We initially presented a visual object (an instruction cue) to instruct a behavioral goal (reaching to the right or left of potential targets). After a subsequent delay, a choice cue appeared at various locations on a screen, and the animals could specify an action to achieve the behavioral goal. We found that vlPFC neurons amply encoded object features of the instruction cues for behavioral goal retrieval and, subsequently, spatial locations of the choice cues for specifying the actions. By contrast, dlPFC and PMd neurons rarely encoded the object features, although they reflected the behavioral goals throughout the delay period. After the appearance of the choice cues, the PMd held information for action throughout the specification and preparation of reaching movements. Remarkably, lPFC neurons represented information for the behavioral goal continuously, even after the action specification as well as during its execution. These results indicate that area-specific representation and information processing at progressive stages of the perception–action transformation in these areas underlie goal-directed behavior.

Dynamic Estimation of Task-Relevant Variance in Movement under Risk


Michael S. Landy, Julia Trommershauser, and Nathaniel D. Daw
J. Neurosci. 2012;32 12702-12711
http://www.jneurosci.org/cgi/content/abstract/32/37/12702?etoc

Humans take into account their own movement variability as well as potential consequences of different movement outcomes in planning movement trajectories. When variability increases, planned movements are altered so as to optimize expected consequences of the movement. Past research has focused on the steady-state responses to changing conditions of movement under risk. Here, we study the dynamics of such strategy adjustment in a visuomotor decision task in which subjects reach toward a display with regions that lead to rewards and penalties, under conditions of changing uncertainty. In typical reinforcement learning tasks, subjects should base subsequent strategy by computing an estimate of the mean outcome (e.g., reward) in recent trials. In contrast, in our task, strategy should be based on a dynamic estimate of recent outcome uncertainty (i.e., squared error). We find that subjects respond to increased movement uncertainty by aiming movements more conservatively with respect to penalty regions, and that the estimate of uncertainty they use is well characterized by a weighted average of recent squared errors, with higher weights given to more recent trials.
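The uncertainty estimate described here can be sketched as an exponentially weighted running average of recent squared errors, with more recent trials weighted more heavily. The decay parameter below is an arbitrary illustration, not the authors' fitted value:

```python
def update_variance_estimate(current_estimate, error, learning_rate=0.3):
    """Exponentially weighted running average of squared movement error;
    weights on past trials decay by (1 - learning_rate) per trial."""
    return (1 - learning_rate) * current_estimate + learning_rate * error ** 2

estimate = 1.0  # arbitrary starting value, in squared-error units
for err in [0.2, -0.1, 1.5, -1.2, 0.3]:  # hypothetical trial-by-trial errors
    estimate = update_variance_estimate(estimate, err)
    print(round(estimate, 3))
```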

Neural Dynamics of Choice: Single-Trial Analysis of Decision-Related Activity in Parietal Cortex


Anil Bollimunta, Douglas Totten, and Jochen Ditterich
J. Neurosci. 2012;32 12684-12701
http://www.jneurosci.org/cgi/content/abstract/32/37/12684?etoc

Previous neurophysiological studies of perceptual decision-making have focused on single-unit activity, providing insufficient information about how individual decisions are accomplished. For the first time, we recorded simultaneously from multiple decision-related neurons in parietal cortex of monkeys performing a perceptual decision task and used these recordings to analyze the neural dynamics during single trials. We demonstrate that decision-related lateral intraparietal area neurons typically undergo gradual changes in firing rate during individual decisions, as predicted by mechanisms based on continuous integration of sensory evidence. Furthermore, we identify individual decisions that can be described as a change of mind: the decision circuitry was transiently in a state associated with a different choice before transitioning into a state associated with the final choice. These changes of mind reflected in monkey neural activity share similarities with previously reported changes of mind reflected in human behavior.

Tuesday, September 11, 2012

No third-party punishment in chimpanzees


Katrin Riedl, Keith Jensen, Josep Call, and Michael Tomasello
PNAS September 11, 2012 vol. 109 no. 37 14824-14829

Punishment can help maintain cooperation by deterring free-riding and cheating. Of particular importance in large-scale human societies is third-party punishment in which individuals punish a transgressor or norm violator even when they themselves are not affected. Nonhuman primates and other animals aggress against conspecifics with some regularity, but it is unclear whether this is ever aimed at punishing others for noncooperation, and whether third-party punishment occurs at all. Here we report an experimental study in which one of humans' closest living relatives, chimpanzees (Pan troglodytes), could punish an individual who stole food. Dominants retaliated when their own food was stolen, but they did not punish when the food of third-parties was stolen, even when the victim was related to them. Third-party punishment as a means of enforcing cooperation, as humans do, might therefore be a derived trait in the human lineage.

Emergence of social complexity among coastal hunter-gatherers in the Atacama Desert of northern Chile


Pablo A. Marquet, Calogero M. Santoro, Claudio Latorre, Vivien G. Standen, Sebastián R. Abades, Marcelo M. Rivadeneira, Bernardo Arriaza, and Michael E. Hochberg
PNAS September 11, 2012 vol. 109 no. 37 14754-14760

The emergence of complex cultural practices in simple hunter-gatherer groups poses interesting questions on what drives social complexity and what causes the emergence and disappearance of cultural innovations. Here we analyze the conditions that underlie the emergence of artificial mummification in the Chinchorro culture in the coastal Atacama Desert in northern Chile and southern Peru. We provide empirical and theoretical evidence that artificial mummification appeared during a period of increased coastal freshwater availability and marine productivity, which caused an increase in human population size and accelerated the emergence of cultural innovations, as predicted by recent models of cultural and technological evolution. Under a scenario of increasing population size and extreme aridity (with little or no decomposition of corpses) a simple demographic model shows that dead individuals may have become a significant part of the landscape, creating the conditions for the manipulation of the dead that led to the emergence of complex mortuary practices.

Evolution of cooperation and skew under imperfect information


Erol Akçay, Adam Meirowitz, Kristopher W. Ramsay, and Simon A. Levin
PNAS September 11, 2012 vol. 109 no. 37 14936-14941

The evolution of cooperation in nature and human societies depends crucially on how the benefits from cooperation are divided and whether individuals have complete information about their payoffs. We tackle these questions by adopting a methodology from economics called mechanism design. Focusing on reproductive skew as a case study, we show that full cooperation may not be achievable due to private information over individuals’ outside options, regardless of the details of the specific biological or social interaction. Further, we consider how the structure of the interaction can evolve to promote the maximum amount of cooperation in the face of the informational constraints. Our results point to a distinct avenue for investigating how cooperation can evolve when the division of benefits is flexible and individuals have private information.

Wednesday, September 5, 2012

Temporal Integration of Olfactory Perceptual Evidence in Human Orbitofrontal Cortex


Nicholas E. Bowman, Konrad P. Kording, Jay A. Gottfried
Neuron, Volume 75, Issue 5, 916-927, 6 September 2012

Given a noisy sensory world, the nervous system integrates perceptual evidence over time to optimize decision-making. Neurophysiological accumulation of sensory information is well-documented in the animal visual system, but how such mechanisms are instantiated in the human brain remains poorly understood. Here we combined psychophysical techniques, drift-diffusion modeling, and functional magnetic resonance imaging (fMRI) to establish that odor evidence integration in the human olfactory system enhances discrimination on a two-alternative forced-choice task. Model-based measures of fMRI brain activity highlighted a ramp-like increase in orbitofrontal cortex (OFC) that peaked at the time of decision, conforming to predictions derived from an integrator model. Combined behavioral and fMRI data further suggest that decision bounds are not fixed but collapse over time, facilitating choice behavior in the presence of low-quality evidence. These data highlight a key role for the orbitofrontal cortex in resolving sensory uncertainty and provide substantiation for accumulator models of human perceptual decision-making.
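A collapsing decision bound of the kind inferred here can be sketched by letting the threshold shrink with elapsed time, so that a choice is eventually triggered even when the accumulated evidence remains weak. The exponential form and all parameters below are assumptions for illustration, not the fitted model:

```python
import numpy as np

def collapsing_bound(t, initial_bound=1.5, half_life=1.0):
    """Decision bound that decays toward zero as elapsed time grows."""
    return initial_bound * np.exp(-t / half_life)

def decide(drift=0.05, noise=1.0, dt=0.01, max_t=5.0, seed=1):
    """Accumulate noisy evidence until it reaches the shrinking bound."""
    rng = np.random.default_rng(seed)
    evidence, t = 0.0, 0.0
    while t < max_t:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(evidence) >= collapsing_bound(t):
            return ("A" if evidence > 0 else "B"), t
    return ("A" if evidence > 0 else "B"), max_t  # forced choice at timeout

print(decide())
```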

Reward Cues in Space: Commonalities and Differences in Neural Coding by Hippocampal and Ventral Striatal Ensembles


Carien S. Lansink, Jadin C. Jackson, Jan V. Lankelma, Rutsuko Ito, Trevor W. Robbins, Barry J. Everitt, and Cyriel M.A. Pennartz
J. Neurosci. 2012;32 12444-12459

Forming place-reward associations critically depends on the integrity of the hippocampal–ventral striatal system. The ventral striatum (VS) receives a strong hippocampal input conveying spatial-contextual information, but it is unclear how this structure integrates this information to invigorate reward-directed behavior. Neuronal ensembles in rat hippocampus (HC) and VS were simultaneously recorded during a conditioning task in which navigation depended on path integration. In contrast to HC, ventral striatal neurons showed low spatial selectivity, but rather coded behavioral task phases toward reaching goal sites. Outcome-predicting cues induced a remapping of firing patterns in the HC, consistent with its role in episodic memory. VS remapped in conjunction with the HC, indicating that remapping can take place in multiple brain regions engaged in the same task. Subsets of ventral striatal neurons showed a “flip” from high activity when cue lights were illuminated to low activity in intertrial intervals, or vice versa. The cues induced an increase in spatial information transmission and sparsity in both structures. These effects were paralleled by an enhanced temporal specificity of ensemble coding and a more accurate reconstruction of the animal's position from population firing patterns. Altogether, the results reveal strong differences in spatial processing between hippocampal area CA1 and VS, but indicate similarities in how discrete cues impact on this processing.

Separate, Causal Roles of the Caudate in Saccadic Choice and Execution in a Perceptual Decision Task


Long Ding, Joshua I. Gold
Neuron, Volume 75, Issue 5, 865-874, 6 September 2012

In contrast to the well-established roles of the striatum in movement generation and value-based decisions, its contributions to perceptual decisions lack direct experimental support. Here, we show that electrical microstimulation in the monkey caudate nucleus influences both choice and saccade response time on a visual motion discrimination task. Within a drift-diffusion framework, these effects consist of two components. The perceptual component biases choices toward ipsilateral targets, away from the neurons’ predominantly contralateral response fields. The choice bias is consistent with a nonzero starting value of the diffusion process, which increases and decreases decision times for contralateral and ipsilateral choices, respectively. The nonperceptual component decreases and increases nondecision times toward contralateral and ipsilateral targets, respectively, consistent with the caudate’s role in saccade generation. The results imply a causal role for the caudate in perceptual decisions used to select saccades that may be distinct from its role in executing those saccades.

Predicting Perceptual Decision Biases from Early Brain Activity


Stefan Bode, David K. Sewell, Simon Lilburn, Jason D. Forte, Philip L. Smith, and Jutta Stahl
J. Neurosci. 2012;32 12488-12498
http://www.jneurosci.org/cgi/content/abstract/32/36/12488?etoc

Perceptual decision making is believed to be driven by the accumulation of sensory evidence following stimulus encoding. More controversially, some studies report that neural activity preceding the stimulus also affects the decision process. We used a multivariate pattern classification approach for the analysis of the human electroencephalogram (EEG) to decode choice outcomes in a perceptual decision task from spatially and temporally distributed patterns of brain signals. When stimuli provided discriminative information, choice outcomes were predicted by neural activity following stimulus encoding; when stimuli provided no discriminative information, choice outcomes were predicted by neural activity preceding the stimulus. Moreover, in the absence of discriminative information, the recent choice history primed the choices on subsequent trials. A diffusion model fitted to the choice probabilities and response time distributions showed that the starting point of the evidence accumulation process was shifted toward the previous choice, consistent with the hypothesis that choice priming biases the accumulation process toward a decision boundary. This bias is reflected in prestimulus brain activity, which, in turn, becomes predictive of future decisions. Our results provide a model of how non-stimulus-driven decision making in humans could be accomplished on a neural level.
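In diffusion-model terms, the choice-priming effect reported here amounts to shifting the starting point of the next trial's accumulator toward the bound of the previous choice. A minimal sketch of that single parameter update; the shift size is a made-up illustration, not the fitted value:

```python
def primed_start_point(previous_choice, shift=0.2):
    """Starting point of the next trial's accumulator, with bounds at -1/+1.
    0.0 is the unbiased midpoint; the start is nudged toward the bound
    associated with the previous choice."""
    if previous_choice == "upper":
        return +shift
    if previous_choice == "lower":
        return -shift
    return 0.0  # no previous choice: unbiased start

print(primed_start_point("upper"))  # 0.2 -> biased toward repeating "upper"
```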

Tuesday, September 4, 2012

Lesion mapping of cognitive control and value-based decision making in the prefrontal cortex


Jan Gläscher, Ralph Adolphs, Hanna Damasio, Antoine Bechara, David Rudrauf, Matthew Calamia, Lynn K. Paul, and Daniel Tranel
PNAS September 4, 2012 vol. 109 no. 36 14681-14686

A considerable body of previous research on the prefrontal cortex (PFC) has helped characterize the regional specificity of various cognitive functions, such as cognitive control and decision making. Here we provide definitive findings on this topic, using a neuropsychological approach that takes advantage of a unique dataset accrued over several decades. We applied voxel-based lesion-symptom mapping in 344 individuals with focal lesions (165 involving the PFC) who had been tested on a comprehensive battery of neuropsychological tasks. Two distinct functional-anatomical networks were revealed within the PFC: one associated with cognitive control (response inhibition, conflict monitoring, and switching), which included the dorsolateral prefrontal cortex and anterior cingulate cortex; and a second associated with value-based decision-making, which included the orbitofrontal, ventromedial, and frontopolar cortex. Furthermore, cognitive control tasks shared a common performance factor related to set shifting that was linked to the rostral anterior cingulate cortex. By contrast, regions in the ventral PFC were required for decision-making. These findings provide detailed causal evidence for a remarkable functional-anatomical specificity in the human PFC.