Wednesday, October 31, 2012

Temporal Production Signals in Parietal Cortex


Blaine A. Schneider, Geoffrey M. Ghose
PLoS Biol 10(10): e1001413. doi:10.1371/journal.pbio.1001413

We often perform movements and actions on the basis of internal motivations and without any explicit instructions or cues. One common example of such behaviors is our ability to initiate movements solely on the basis of an internally generated sense of the passage of time. In order to isolate the neuronal signals responsible for such timed behaviors, we devised a task that requires nonhuman primates to move their eyes consistently at regular time intervals in the absence of any external stimulus events and without an immediate expectation of reward. Despite the lack of sensory information, we found that animals were remarkably precise and consistent in timed behaviors, with standard deviations on the order of 100 ms. To examine the potential neural basis of this precision, we recorded from single neurons in the lateral intraparietal area (LIP), which has been implicated in the planning and execution of eye movements. In contrast to previous studies that observed a build-up of activity associated with the passage of time, we found that LIP activity decreased at a constant rate between timed movements. Moreover, the magnitude of activity was predictive of the timing of the impending movement. Interestingly, this relationship depended on eye movement direction: activity was negatively correlated with timing when the upcoming saccade was toward the neuron's response field and positively correlated when the upcoming saccade was directed away from the response field. This suggests that LIP activity encodes timed movements in a push-pull manner, signaling both saccade initiation toward one target and prolonged fixation on the other. Thus, timed movements in this task appear to reflect competition between local populations of task-relevant neurons rather than a global timing signal.

The Impact of the Posterior Parietal and Dorsolateral Prefrontal Cortices on the Optimization of Long-Term versus Immediate Value


Brian G. Essex, Sarah A. Clinton, Lucas R. Wonderley, and David H. Zald
J. Neurosci. 2012;32 15403-15413

Intertemporal decision making: do you take the immediate reward or the larger, delayed reward? Suppressing activity in either the right posterior parietal cortex or the right dorsolateral prefrontal cortex with TMS makes people more likely to choose the immediate option (the same tendency appears when losses are used instead of gains). http://www.jneurosci.org/cgi/content/abstract/32/44/15403?etoc

fMRI research suggests that both the posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC) help individuals select better long-term monetary gains during intertemporal choice. Previous neuromodulation research has demonstrated that disruption of the DLPFC interferes with this ability. However, it is unclear whether the PPC performs a similarly important function during intertemporal choice, and whether the functions performed by either region impact choices involving losses. In the current study, we used low-frequency repetitive transcranial magnetic stimulation to examine whether the PPC and DLPFC both normally facilitate selection of gains and losses with better long-term value than alternatives during intertemporal choice. We found that disruption of either region in the right hemisphere led to greater selection of both gains and losses that had better immediate, but worse long-term value than alternatives. This indicates that activity in both regions helps individuals optimize long-term value relative to immediate value in general, rather than being specific to choices involving gains. However, there were slightly different patterns of effects following disruption of the right PPC and right DLPFC, suggesting that each region may perform somewhat different functions that help optimize choice.
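For readers unfamiliar with how "better long-term value" is formalized, here is a minimal sketch (Python; the amounts, delays, and discount rates are assumptions, not the model fitted in the paper) of hyperbolic discounting, the standard way intertemporal choices like these are described: the option with the higher discounted value V = A / (1 + kD) is chosen, and a steeper discount rate k shifts choices toward the immediate option, analogous to the behavioral shift seen after right PPC/DLPFC disruption.

def discounted_value(amount, delay, k):
    """Hyperbolic discounted value V = A / (1 + k * D); works for gains and losses."""
    return amount / (1.0 + k * delay)

def choose(immediate, delayed, k):
    """Pick the option (amount, delay in days) with the higher discounted value."""
    v_now = discounted_value(*immediate, k=k)
    v_later = discounted_value(*delayed, k=k)
    return "immediate" if v_now >= v_later else "delayed"

# A shallow discounter (small k) takes the larger delayed gain; a steep
# discounter (large k) switches to the immediate option -- the direction
# of the behavioral shift reported after right PPC/DLPFC disruption.
print(choose(immediate=(20, 0), delayed=(50, 30), k=0.01))  # -> delayed
print(choose(immediate=(20, 0), delayed=(50, 30), k=0.20))  # -> immediate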

Tuesday, October 30, 2012

Hippocampus and value-based decision making


Research showing that the hippocampus, a structure associated with memory, is also important for reward processing. This seems natural, given that memories of events are influenced by the emotions (reward/punishment) experienced when they occurred, but apparently it had not been clearly demonstrated until now.

Stimulus A (or B) is paired with a (or b), then a alone is paired with reward; afterwards, subjects choose between A and B. People tend to choose A, and the strength of this bias can be predicted from hippocampal fMRI activity. The hippocampus is important for linking stimulus–stimulus associations with stimulus–reward associations.
http://www.sciencemag.org/content/338/6104/270

In rats, hippocampal CA1 neurons encode the reward information needed for value-based decision making, including action values, action outcomes, and chosen value (the value of the action that was actually selected).
http://www.jneurosci.org/content/32/43/15053

Wednesday, October 24, 2012

Hippocampal Neural Correlates for Values of Experienced Events


Hyunjung Lee, Jeong-Wook Ghim, Hoseok Kim, Daeyeol Lee, and Min Whan Jung
J. Neurosci. 2012;32 15053-15065
http://www.jneurosci.org/cgi/content/abstract/32/43/15053?etoc

Newly experienced events are often remembered together with how personally rewarding the experiences are. Although the hippocampus is a candidate structure where subjective values are integrated with other elements of episodic memory, it is uncertain whether and how the hippocampus processes value-related information. We examined how the activity of dorsal CA1 and dorsal subicular neurons in rats performing a dynamic foraging task was related to reward values that were estimated using a reinforcement learning model. CA1 neurons carried significant signals related to action values before the animal revealed its choice behaviorally, indicating that information on the expected values of potential choice outcomes was available in CA1. Moreover, after the outcome of the animal's goal choice was revealed, CA1 neurons carried robust signals for the value of the chosen action that temporally overlapped with the signals related to the animal's goal choice and its outcome, indicating that all the signals necessary to evaluate the outcome of an experienced event converged in CA1. On the other hand, value-related signals were substantially weaker in the subiculum. These results suggest a major role of CA1 in adding values to experienced events during episodic memory encoding. Given that CA1 neuronal activity is modulated by diverse attributes of an experienced event, CA1 might be a place where all the elements of episodic memory are integrated.
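As a rough illustration of how such trial-by-trial values are obtained, here is a minimal sketch of a simple Q-learning update of the kind commonly used for dynamic foraging data (the learning rate, initial values, and example choice/reward sequence are assumptions, not the authors' fitted model); the resulting action values and chosen values are what would be regressed against CA1 firing.

def estimate_values(choices, rewards, alpha=0.1, q_init=0.5):
    """Trial-by-trial action values and chosen values from a choice/reward history.

    choices : list of 0/1 (which of the two goals was chosen on each trial)
    rewards : list of 0/1 (whether that choice was rewarded)
    alpha   : learning rate (a free parameter, normally fit to the behavior)
    """
    q = [q_init, q_init]
    action_values, chosen_values = [], []
    for a, r in zip(choices, rewards):
        action_values.append(tuple(q))      # values of both goals before the choice
        chosen_values.append(q[a])          # value of the action actually taken
        q[a] += alpha * (r - q[a])          # reward-prediction-error update
    return action_values, chosen_values

# These model-derived values are the regressors that would be compared
# against CA1 and subicular spike counts.
av, cv = estimate_values(choices=[0, 0, 1, 0, 1], rewards=[1, 0, 1, 1, 0])
print(av)
print(cv)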

Changes in Neural Connectivity Underlie Decision Threshold Modulation for Reward Maximization


Nikos Green, Guido P. Biele, and Hauke R. Heekeren
J. Neurosci. 2012;32 14942-14950
http://www.jneurosci.org/cgi/content/abstract/32/43/14942?etoc

Using neuroimaging in combination with computational modeling, this study shows that decision threshold modulation for reward maximization is accompanied by a change in effective connectivity within corticostriatal and cerebellar–striatal brain systems. Research on perceptual decision making suggests that people make decisions by accumulating sensory evidence until a decision threshold is crossed. This threshold can be adjusted to changing circumstances, to maximize rewards. Decision making thus requires effectively managing the amount of accumulated evidence versus the amount of available time. Importantly, the neural substrate of this decision threshold modulation is unknown. Participants performed a perceptual decision-making task in blocks with identical duration but different reward schedules. Behavioral and modeling results indicate that human subjects modulated their decision threshold to maximize net reward. Neuroimaging results indicate that decision threshold modulation was achieved by adjusting effective connectivity within corticostriatal and cerebellar–striatal brain systems, the former being responsible for processing of accumulated sensory evidence and the latter being responsible for automatic, subsecond temporal processing. Participants who adjusted their threshold to a greater extent (and gained more net reward) also showed a greater modulation of effective connectivity. These results reveal a neural mechanism that underlies decision makers' abilities to adjust to changing circumstances to maximize reward.
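A small simulation helps make the threshold argument concrete. The sketch below (Python; drift, noise, rewards, and inter-trial interval are assumed values, not the authors' fitted parameters) implements a basic drift-diffusion process and shows that net reward earned per unit time, and hence per fixed-duration block, peaks at an intermediate decision threshold.

import random

def simulate_trial(threshold, drift=1.0, noise=1.0, dt=0.01):
    """Accumulate noisy evidence until +/- threshold; return (correct, RT in s)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    return x > 0, t

def reward_rate(threshold, n=2000, win=1.0, loss=-1.0, iti=0.5):
    """Mean net reward per second; reward per block = rate x block duration."""
    total_reward = total_time = 0.0
    for _ in range(n):
        correct, rt = simulate_trial(threshold)
        total_reward += win if correct else loss
        total_time += rt + iti
    return total_reward / total_time

# Too low a threshold wastes trials on errors, too high a threshold wastes
# time; an intermediate threshold maximizes reward in a fixed-duration block.
for a in (0.3, 0.8, 1.5, 3.0):
    print(f"threshold {a:.1f}: net reward rate {reward_rate(a):+.3f} per s")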

Friday, October 19, 2012

Translating upwards: linking the neural and social sciences via neuroeconomics


Clement Levallois, John A. Clithero, Paul Wouters, Ale Smidts and Scott A. Huettel
Nature Reviews Neuroscience 13, 789-797 (November 2012) | doi:10.1038/nrn3354

The social and neural sciences share a common interest in understanding the mechanisms that underlie human behaviour. However, interactions between neuroscience and social science disciplines remain strikingly narrow and tenuous. We illustrate the scope and challenges for such interactions using the paradigmatic example of neuroeconomics. Using quantitative analyses of both its scientific literature and the social networks in its intellectual community, we show that neuroeconomics now reflects a true disciplinary integration, such that research topics and scientific communities with interdisciplinary span exert greater influence on the field. However, our analyses also reveal key structural and intellectual challenges in balancing the goals of neuroscience with those of the social sciences. To address these challenges, we offer a set of prescriptive recommendations for directing future research in neuroeconomics.

Preference by Association: How Memory Mechanisms in the Hippocampus Bias Decisions


G. Elliott Wimmer, Daphna Shohamy
Science 12 October 2012: Vol. 338 no. 6104 pp. 270-273
DOI: 10.1126/science.1223252

Every day people make new choices between alternatives that they have never directly experienced. Yet, such decisions are often made rapidly and confidently. Here, we show that the hippocampus, traditionally known for its role in building long-term declarative memories, enables the spread of value across memories, thereby guiding decisions between new choice options. Using functional brain imaging in humans, we discovered that giving people monetary rewards led to activation of a preestablished network of memories, spreading the positive value of reward to nonrewarded items stored in memory. Later, people were biased to choose these nonrewarded items. This decision bias was predicted by activity in the hippocampus, reactivation of associated memories, and connectivity between memory and reward regions in the brain. These findings explain how choices among new alternatives emerge automatically from the associative mechanisms by which the brain builds memories. Further, our findings demonstrate a previously unknown role for the hippocampus in value-based decisions.
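A toy associative account of the paradigm (my own illustration, not the authors' computational model) makes the "spread of value" idea concrete: reward delivered to item a propagates to its never-rewarded associate A through the learned stimulus–stimulus association, so A is later preferred over B.

# Phase 1: incidental stimulus-stimulus pairing; Phase 2: only "a" is rewarded.
association = {("A", "a"): 0.8, ("B", "b"): 0.8}   # assumed association strengths
reward_value = {"a": 1.0, "b": 0.0}

def spread_value(item):
    """Direct value plus value inherited from associated items."""
    direct = reward_value.get(item, 0.0)
    indirect = sum(strength * reward_value.get(partner, 0.0)
                   for (first, partner), strength in association.items()
                   if first == item)
    return direct + indirect

# Phase 3: a choice between A and B, neither of which was ever rewarded itself.
print(spread_value("A"), spread_value("B"))   # 0.8 vs 0.0 -> bias toward A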

Wednesday, October 17, 2012

Selectively altering belief formation in the human brain


Tali Sharot, Ryota Kanai, David Marston, Christoph W. Korn, Geraint Rees, and Raymond J. Dolan
PNAS October 16, 2012 vol. 109 no. 42 17058-17062

Humans form beliefs asymmetrically; we tend to discount bad news but embrace good news. This reduced impact of unfavorable information on belief updating may have important societal implications, including the generation of financial market bubbles, ill preparedness in the face of natural disasters, and overly aggressive medical decisions. Here, we selectively improved people’s tendency to incorporate bad news into their beliefs by disrupting the function of the left (but not right) inferior frontal gyrus using transcranial magnetic stimulation, thereby eliminating the engrained “good news/bad news effect.” Our results provide an instance of how selective disruption of regional human brain function paradoxically enhances the ability to incorporate unfavorable information into beliefs of vulnerability.
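The "good news/bad news effect" is often described as asymmetric belief updating. The toy sketch below (assumed learning rates, not the paper's analysis) shows the idea: estimation errors that imply good news (the true risk is lower than believed) produce larger updates than errors that imply bad news, and the TMS result amounts to removing this asymmetry.

def update_belief(estimate, evidence, lr_good=0.6, lr_bad=0.2):
    """Shift a risk estimate toward new evidence, more readily for good news."""
    error = evidence - estimate
    lr = lr_good if error < 0 else lr_bad   # a lower true risk counts as good news
    return estimate + lr * error

belief = 40.0   # subjective probability (%) of experiencing some adverse event
print(update_belief(belief, evidence=20.0))  # good news: large update -> 28.0
print(update_belief(belief, evidence=60.0))  # bad news: small update  -> 44.0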

Thursday, October 11, 2012

Theory and Simulation in Neuroscience


Wulfram Gerstner, Henning Sprekeler, Gustavo Deco
Science 5 October 2012: Vol. 338 no. 6103 pp. 60-65
DOI: 10.1126/science.1227356

Modeling work in neuroscience can be classified using two different criteria. The first one is the complexity of the model, ranging from simplified conceptual models that are amenable to mathematical analysis to detailed models that require simulations in order to understand their properties. The second criterion is that of direction of workflow, which can be from microscopic to macroscopic scales (bottom-up) or from behavioral target functions to properties of components (top-down). We review the interaction of theory and simulation using examples of top-down and bottom-up studies and point to some current developments in the fields of computational and theoretical neuroscience.

Tuesday, October 9, 2012

Sensitivity to Temporal Reward Structure in Amygdala Neurons


Maria A. Bermudez, Carl Göbel, Wolfram Schultz
Current Biology, Volume 22, Issue 19, 1839-1844, 06 September 2012

The time of reward and the temporal structure of reward occurrence fundamentally influence behavioral reinforcement and decision processes [1,2,3,4,5,6,7,8,9,10,11]. However, despite knowledge about timing in sensory and motor systems [12,13,14,15,16,17], we know little about temporal mechanisms of neuronal reward processing. In this experiment, visual stimuli predicted different instantaneous probabilities of reward occurrence that resulted in specific temporal reward structures. Licking behavior demonstrated that the animals had developed expectations for the time of reward that reflected the instantaneous reward probabilities. Neurons in the amygdala, a major component of the brain's reward system [18,19,20,21,22,23,24,25,26,27,28,29], showed two types of reward signal, both of which were sensitive to the expected time of reward. First, the time courses of anticipatory activity preceding reward delivery followed the specific instantaneous reward probabilities and thus paralleled the temporal reward structures. Second, the magnitudes of responses following reward delivery covaried with the instantaneous reward probabilities, reflecting the influence of temporal reward structures at the moment of reward delivery. In being sensitive to temporal reward structure, the reward signals of amygdala neurons reflected the temporally specific expectations of reward. The data demonstrate an active involvement of amygdala neurons in timing processes that are crucial for reward function.
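The "instantaneous probability of reward" the authors manipulate is essentially a hazard rate: the probability that reward arrives now given that it has not arrived yet. A minimal sketch (with hypothetical reward-time distributions, not the stimuli used in the paper):

def hazard(prob_by_bin):
    """Per-bin hazard rate from a discrete distribution of reward times."""
    rates, survival = [], 1.0
    for p in prob_by_bin:
        rates.append(p / survival if survival > 0 else 0.0)
        survival -= p
    return [round(r, 2) for r in rates]

# Hypothetical stimulus A: reward time uniform over four bins -> rising hazard.
# Hypothetical stimulus B: reward usually early -> hazard is high at the start.
print(hazard([0.25, 0.25, 0.25, 0.25]))   # [0.25, 0.33, 0.5, 1.0]
print(hazard([0.70, 0.15, 0.10, 0.05]))   # [0.7, 0.5, 0.67, 1.0]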

Generalized Perceptual Learning in the Absence of Sensory Adaptation


Hila Harris, Michael Gliksberg, Dov Sagi
Current Biology, Volume 22, Issue 19, 1813-1817, 23 August 2012

Repeated performance of visual tasks leads to long-lasting increased sensitivity to the trained stimulus, a phenomenon termed perceptual learning. A ubiquitous property of visual learning is specificity: performance improvement obtained during training applies only for the trained stimulus features, which are thought to be encoded in sensory brain regions [1,2,3]. However, recent results show performance decrements with an increasing number of trials within a training session [4,5]. This selective sensitivity reduction is thought to arise due to sensory adaptation [5,6]. Here we show, using the standard texture discrimination task [7], that location specificity is a consequence of sensory adaptation; that is, it results from selective reduced sensitivity due to repeated stimulation. Observers practiced the texture task with the target presented at a fixed location within a background texture. To remove adaptation, we added task-irrelevant (“dummy”) trials with the texture oriented 45° relative to the target’s orientation, known to counteract adaptation [8]. The results indicate location specificity with the standard paradigm, but complete generalization to a new location when adaptation is removed. We suggest that adaptation interferes with invariant pattern-discrimination learning by inducing network-dependent changes in local visual representations.

Thursday, October 4, 2012

Network Resets in Medial Prefrontal Cortex Mark the Onset of Behavioral Uncertainty


Mattias P. Karlsson, Dougal G. R. Tervo, Alla Y. Karpova
Science 5 October 2012: Vol. 338 no. 6103 pp. 135-139

Regions within the prefrontal cortex are thought to process beliefs about the world, but little is known about the circuit dynamics underlying the formation and modification of these beliefs. Using a task that permits dissociation between the activity encoding an animal’s internal state and that encoding aspects of behavior, we found that transient increases in the volatility of activity in the rat medial prefrontal cortex accompany periods when an animal’s belief is modified after an environmental change. Activity across the majority of sampled neurons underwent marked, abrupt, and coordinated changes when prior belief was abandoned in favor of exploration of alternative strategies. These dynamics reflect network switches to a state of instability, which diminishes over the period of exploration as new stable representations are formed.

In Monkeys Making Value-Based Decisions, LIP Neurons Encode Cue Salience and Not Action Value


Marvin L. Leathers, Carl R. Olson
Science 5 October 2012: Vol. 338 no. 6103 pp. 132-135

LIP neuron activity correlates positively with both the size of the reward and the size of the penalty. In other words, it encodes salience rather than value (if it encoded value, activity should correlate negatively with penalty size). So how does salience feed into decision making? Perhaps through the learning rate?

In monkeys deciding between alternative saccadic eye movements, lateral intraparietal (LIP) neurons representing each saccade fire at a rate proportional to the value of the reward expected upon its completion. This observation has been interpreted as indicating that LIP neurons encode saccadic value and that they mediate value-based decisions between saccades. Here, we show that LIP neurons representing a given saccade fire strongly not only if it will yield a large reward but also if it will incur a large penalty. This finding indicates that LIP neurons are sensitive to the motivational salience of cues. It is compatible neither with the idea that LIP neurons represent action value nor with the idea that value-based decisions take place in LIP neurons.

Wednesday, October 3, 2012

Hard to “tune in”: neural mechanisms of live face-to-face interaction with high-functioning autistic spectrum disorder


Hiroki C. Tanabe, Hirotaka Kosaka, Daisuke N. Saito, Takahiko Koike, Masamichi J. Hayashi, Keise Izuma, Hidetsugu Komeda, Makoto Ishitobi, Masao Omori, Toshio Munesue, Hidehiko Okazawa, Yuji Wada, and Norihiro Sadato
Front. Hum. Neurosci. 6:268. doi: 10.3389/fnhum.2012.00268

Persons with autism spectrum disorders (ASD) are known to have difficulty with eye contact (EC), which may make face-to-face communication difficult for their partners. To elucidate the neural substrates of live inter-subject interaction between ASD patients and normal subjects, we conducted hyper-scanning functional MRI with 21 subjects with ASD paired with typically developed (normal) subjects, and with 19 pairs of normal subjects as a control. Baseline EC was maintained while subjects performed a real-time joint-attention task. The task-related effects were modeled out, and inter-individual correlation analysis was performed on the residual time-course data. ASD–Normal pairs were less accurate at detecting gaze direction than Normal–Normal pairs, and performance was impaired both in ASD subjects and in their normal partners. Gaze-related activation of the left occipital pole (OP) was reduced in ASD subjects, suggesting that the deterioration of eye-cue detection in ASD is related to impaired early visual processing of gaze. Their normal partners, on the other hand, showed greater activity in the bilateral occipital cortex and the right prefrontal area, indicating a compensatory workload. The inter-brain coherence in the right inferior frontal gyrus (IFG) observed during EC in Normal–Normal pairs (Saito et al., 2010) was diminished in ASD–Normal pairs. Intra-brain functional connectivity between the right IFG and the right superior temporal sulcus (STS) in normal subjects paired with ASD subjects was also reduced compared with that in Normal–Normal pairs, and this functional connectivity was positively correlated with the normal partners' performance on eye-cue detection. Considering the integrative role of the right STS in gaze processing, inter-subject synchronization during EC may be a prerequisite for eye-cue detection by the normal partner.

Impaired Learning of Social Compared to Monetary Rewards in Autism


Alice Lin, Antonio Rangel, and Ralph Adolphs
Front. Neurosci. 6:143. doi: 10.3389/fnins.2012.00143

A leading hypothesis to explain the social dysfunction in people with autism spectrum disorders (ASD) is that they exhibit a deficit in reward processing and motivation specific to social stimuli. However, there have been few direct tests of this hypothesis to date. Here we used an instrumental reward learning task that contrasted learning with social rewards (pictures of positive and negative faces) against learning with monetary rewards (winning and losing money). The two tasks were structurally identical except for the type of reward, permitting direct comparisons. We tested 10 high-functioning people with ASD (7M, 3F) and 10 healthy controls who were matched on gender, age, and education. We found no significant differences between the two groups in overall behavioral ability to discriminate positive from negative slot machines, reaction times, or valence ratings. However, there was a specific impairment in the ASD group in learning to choose social rewards compared to monetary rewards: they made a significantly lower cumulative number of choices of the most rewarding social slot machine and showed a significantly slower initial learning rate for the socially rewarding slot machine than the controls. The findings show a deficit in reward learning in ASD that is greater for social rewards than for monetary rewards, and they support the hypothesis of a disproportionate impairment in social reward processing in ASD.

A computational approach to “free will” constrained by the games we play


Kenneth T. Kishida
Front. Integr. Neurosci. 6:85. doi: 10.3389/fnint.2012.00085

Human choice is not free—we are bounded by a multitude of biological constraints. Yet, within the various landscapes we face, we do express choice, preference, and varying degrees of so-called willful behavior. Moreover, it appears that the capacity for choice in humans is variable. Empirical studies aimed at investigating the experience of “free will” will benefit from theoretical disciplines that constrain the language used to frame the relevant issues. The combination of game theory and computational reinforcement learning theory with empirical methods is already beginning to provide valuable insight into the biological variables underlying capacity for choice in humans and how things may go awry in individuals with brain disorders. These disciplines operate within abstract quantitative landscapes, but have successfully been applied to investigate strategic and adaptive human choice guided by formal notions of optimal behavior. Psychiatric illness is an extreme, but interesting arena for studying human capacity for choice. The experiences and behaviors of patients suggest these individuals fundamentally suffer from a diminished capacity of willful choice. Herein, I will briefly discuss recent applications of computationally guided approaches to human choice behavior and the underlying neurobiology. These approaches can be integrated into empirical investigation at multiple temporal scales of analysis including the growing body of experiments in human functional magnetic resonance imaging (fMRI), and newly emerging sub-second electrochemical and electrophysiological measurements in the human brain. These cross-disciplinary approaches hold promise for revealing the underlying neurobiological mechanisms for the variety of choice capacity in humans.

Twenty-Five Lessons from Computational Neuromodulation


Peter Dayan
Neuron, Volume 76, Issue 1, 240-256, 4 October 2012

Neural processing faces three rather different, and perniciously tied, communication problems. First, computation is radically distributed, yet point-to-point interconnections are limited. Second, the bulk of these connections are semantically uniform, lacking differentiation at their targets that could tag particular sorts of information. Third, the brain's structure is relatively fixed, and yet different sorts of input, forms of processing, and rules for determining the output are appropriate under different, and possibly rapidly changing, conditions. Neuromodulators address these problems by their multifarious and broad distribution, by enjoying specialized receptor types in partially specific anatomical arrangements, and by their ability to mold the activity and sensitivity of neurons and the strength and plasticity of their synapses. Here, I offer a computationally focused review of algorithmic and implementational motifs associated with neuromodulators, using decision making in the face of uncertainty as a running example.

Effects of Decision Variables and Intraparietal Stimulation on Sensorimotor Oscillatory Activity in the Human Brain


Ian C. Gould, Anna C. Nobre, Valentin Wyart, and Matthew F. S. Rushworth
J. Neurosci. 2012;32 13805-13818
http://www.jneurosci.org/cgi/content/abstract/32/40/13805?etoc

To decide effectively, information must not only be integrated from multiple sources, but it must be distributed across the brain if it is to influence structures such as motor cortex that execute choices. Human participants integrated information from multiple, but only partially informative, cues in a probabilistic reasoning task in an optimal manner. We tested whether lateralization of alpha- and beta-band oscillatory brain activity over sensorimotor cortex reflected decision variables such as the sum of the evidence provided by observed cues, a key quantity for decision making, and whether this could be dissociated from an update signal reflecting processing of the most recent cue stimulus. Alpha- and beta-band activity in the electroencephalogram reflected the logarithm of the likelihood ratio associated with the each piece of information witnessed, and the same quantity associated with the previous cues. Only the beta-band, however, reflected the most recent cue in a manner that suggested it reflected updating processes associated with cue processing. In a second experiment, transcranial magnetic stimulation-induced disruption was used to demonstrate that the intraparietal sulcus played a causal role both in decision making and in the appearance of sensorimotor beta-band activity.