Temporal Characteristics of the Influence of Punishment on Perceptual Decision Making in the Human Brain
Helen Blank, Guido Biele, Hauke R. Heekeren, and Marios G. Philiastides
J. Neurosci. 2013;33 3939-3952
Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
Justin M. Moscarello and Joseph E. LeDoux
J. Neurosci. 2013;33 3815-3823
Signaled active avoidance (AA) paradigms train subjects to prevent an aversive outcome by performing a learned behavior during the presentation of a conditioned cue. This complex form of conditioning involves Pavlovian and instrumental components, which produce competing behavioral responses that must be reconciled for the subject to successfully avoid an aversive stimulus. In a signaled AA paradigm for rats, we tested the hypothesis that the instrumental component of AA training recruits infralimbic prefrontal cortex (ilPFC) to inhibit central amygdala (CeA)-mediated Pavlovian reactions. Pretraining lesions of ilPFC increased conditioned freezing while causing a corresponding decrease in avoidance; lesions of CeA produced opposite effects, reducing freezing and facilitating avoidance behavior. Pharmacological inactivation experiments demonstrated that ilPFC is relevant to both the acquisition and expression phases of AA learning. Inactivation experiments also revealed that AA produces an ilPFC-mediated diminution of Pavlovian reactions that extends beyond the training context, even when the conditioned stimulus is presented in an environment that does not allow the avoidance response. Finally, injection of a protein synthesis inhibitor into either ilPFC or CeA impaired or facilitated AA, respectively, showing that avoidance training produces two opposing memory traces in these regions. These data support a model in which AA learning recruits ilPFC to inhibit CeA-mediated defense behaviors, leading to a robust suppression of freezing that generalizes across environments. Thus, ilPFC functions as an inhibitory interface, allowing instrumental control over an aversive outcome to attenuate the expression of freezing and other reactions to conditioned threat.
Susan Sangha, James Z. Chadick, and Patricia H. Janak
J. Neurosci. 2013;33 3744-3751
Learning to fear and avoid life-threatening stimuli is a critical survival skill, but one that is maladaptive when it persists in the absence of a direct threat. Thus, it is important to detect when a situation is safe and to increase behaviors leading to naturally rewarding actions, such as feeding and mating. It is unclear how the brain distinguishes between dangerous and safe situations. Here, we present a novel protocol designed to investigate the processing of cues that predict danger, safety, or reward (sucrose). In vivo single-unit recordings were obtained in the basal amygdala of freely behaving rats undergoing simultaneous reward, fear, and safety conditioning. We observed a population of neurons that did not respond to a Fear Cue but did change their firing rate during the combined presentation of the Fear Cue simultaneous with a second, Safety Cue; this combination of Fear + Safety Cues signified “no shock.” This neural population consisted of two subpopulations: neurons that responded to the Fear + Safety Cue but not the Fear or Reward Cue (“safety” neurons), and neurons that responded to the Fear + Safety and Reward Cues but not the Fear Cue (“safety + reward” neurons). These data demonstrate the presence of neurons in the basal amygdala that are selectively responsive to Safety Cues. Furthermore, these data suggest that safety and reward learning use overlapping mechanisms in the basal amygdala.
Understanding Others' Feelings: The Role of the Right Primary Somatosensory Cortex in Encoding the Affective Valence of Others' Touch
Nadia Bolognini, Angela Rossetti, Silvia Convento, and Giuseppe Vallar
J. Neurosci. 2013;33 4201-4205
Brain imaging studies in humans have shown the existence of a shared somatosensory representation in the primary somatosensory cortex (S1), putatively involved in understanding others' sensations (Keysers et al., 2010); however, the role of S1 in such a high-level process is still unknown. To ascertain the causal involvement of S1, and its possible hemispheric lateralization, in encoding the affective valence of emotional scenes depicting, or not, a tactile event, we administered a picture-based affective go/no-go task to healthy participants together with low-frequency repetitive transcranial magnetic stimulation (rTMS). The dorsolateral prefrontal cortex (DLPFC) was chosen as the control site. rTMS over the right, but not the left, S1 selectively increased the participants' latencies in the affective go/no-go task, but only when the affective state was conveyed by touch; intriguingly, this interfering effect was associated with the empathic ability to adopt the subjective perspective of others. The left, but not the right, DLPFC was also involved in affective go/no-go performance, but regardless of the sight of touch and independently of empathic abilities. This novel evidence demonstrates the crossmodal role of right S1 in encoding the pleasant and aversive consequences of others' sensations evoked by touch.
Alicia Izquierdo, Chelsi Darling, Nic Manos, Hilda Pozos, Charissa Kim, Serena Ostrander, Victor Cazares, Haley Stepp, and Peter H. Rudebeck
J. Neurosci. 2013;33 4105-4109
The orbitofrontal cortex (OFC) and basolateral amygdala (BLA) constitute part of a neural circuit important for adaptive, goal-directed learning. One task measuring flexibility of response to changes in reward is discrimination reversal learning. Damage to OFC produces well-documented impairments on various forms of reversal learning in rodents, monkeys, and humans. Recent reports show that BLA, though highly interconnected with OFC, may be differentially involved in reversal learning. In the present experiment, we compared the effects of bilateral ibotenic acid lesions of OFC or BLA (or sham surgery, SHAM) on visual discrimination and reversal learning. Specifically, we used pairwise visual discrimination methods, as commonly administered in nonhuman primate studies, and analyzed how animals use positive and negative trial-by-trial feedback, domains not previously explored in a rat study. As expected, rats with OFC lesions displayed significantly slower reversal learning than SHAM and BLA rats across sessions. Rats with BLA lesions, conversely, showed facilitated reversal learning relative to the SHAM and OFC groups. Furthermore, a trial-by-trial analysis of the errors committed showed that the BLA group benefited more from incorrectly performed trials (or negative feedback) on future choices than either SHAM or OFC rats. This provides evidence that BLA and OFC are both involved in updating responses to changes in reward contingency and that their roles are distinct. Our results are discussed in relation to a competitive framework model for OFC and BLA in reward processing.
Dissociable dopaminergic control of saccadic target selection and its implications for reward modulation
Alireza Soltani, Behrad Noudoost, and Tirin Moore
PNAS February 26, 2013 vol. 110 no. 9 3579-3584
To investigate mechanisms by which reward modulates target selection, we studied the behavioral effects of perturbing dopaminergic activity within the frontal eye field (FEF) of monkeys performing a saccadic choice task and simulated the effects using a plausible cortical network. We found that manipulation of FEF activity either by blocking D1 receptors (D1Rs) or by stimulating D2 receptors (D2Rs) increased the tendency to choose targets in the response field of the affected site. However, the D1R manipulation decreased the tendency to repeat choices on subsequent trials, whereas the D2R manipulation increased that tendency. Moreover, the amount of shift in target selection resulting from the two manipulations correlated in opposite ways with the baseline stochasticity of choice behavior. Our network simulation results suggest that D1Rs influence target selection mainly through their effects on the strength of inputs to the FEF and on recurrent connectivity, whereas D2Rs influence the excitability of FEF output neurons. Altogether, these results reveal dissociable dopaminergic mechanisms influencing target selection and suggest how reward can influence adaptive choice behavior via prefrontal dopamine.
Elsa Fouragnan, Gabriele Chierchia, Susanne Greiner, Remi Neveu, Paolo Avesani, and Giorgio Coricelli
The Journal of Neuroscience, 20 February 2013, 33(8):3602-3611; doi:10.1523/JNEUROSCI.3086-12.2013
Humans learn to trust each other by evaluating the outcomes of repeated interpersonal interactions. However, available prior information on the reputation of traders may alter the way outcomes affect learning. Our functional magnetic resonance imaging study is the first to allow the direct comparison of interaction-based and prior-based learning. Twenty participants played repeated trust games with anonymous counterparts. We manipulated two experimental conditions: whether or not reputational priors were provided, and whether counterparts were generally trustworthy or untrustworthy. When no prior information was available, our results are consistent with previous studies in showing that striatal activation patterns correlated with behaviorally estimated reinforcement learning measures. However, our study additionally shows that this correlation was disrupted when reputational priors on counterparts were provided. Indeed, participants continued to rely on priors even when experience shed doubt on their accuracy. Notably, violations of trust from a cooperative counterpart elicited stronger caudate deactivations when priors were available than when they were not. However, tolerance to such violations appeared to be mediated by prior-enhanced connectivity between the caudate nucleus and ventrolateral prefrontal cortex, which anticorrelated with retaliation rates. Moreover, on top of affecting learning mechanisms, priors also clearly oriented initial decisions to trust, reflected in medial prefrontal cortex activity.
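The interaction-based learning that the striatal signal is said to track can be sketched as a simple reinforcement-learning update of a counterpart's estimated trustworthiness, seeded by a reputational prior. This is an illustrative sketch only: the function names, learning rate, and outcome sequence below are hypothetical and not taken from the paper.

```python
# Illustrative Rescorla-Wagner-style update of a counterpart's estimated
# trustworthiness across repeated trust games (all parameters hypothetical).

def update_trust(value, outcome, alpha=0.2):
    """One learning step: the estimate moves toward the observed outcome.

    value   -- current estimate that the counterpart reciprocates (0..1)
    outcome -- 1 if the counterpart reciprocated on this trial, else 0
    alpha   -- learning rate
    """
    prediction_error = outcome - value
    return value + alpha * prediction_error

def play_rounds(outcomes, prior=0.5, alpha=0.2):
    """Track the trust estimate across a sequence of outcomes,
    starting from a (possibly reputation-based) prior."""
    value = prior
    history = []
    for outcome in outcomes:
        value = update_trust(value, outcome, alpha)
        history.append(value)
    return history

# A mostly cooperative counterpart who defects once mid-sequence:
estimates = play_rounds([1, 1, 1, 0, 1, 1], prior=0.5)
```

The paper's key observation maps onto the `prior` argument: when a strong reputational prior is supplied, participants behave as if `alpha` were effectively reduced, updating little from experience.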
Molly J. Crockett, Annemieke Apergis-Schoute, Benedikt Herrmann, Matt Lieberman, Ulrich Müller, Trevor W. Robbins, and Luke Clark
The Journal of Neuroscience, 20 February 2013, 33(8):3505-3513; doi:10.1523/JNEUROSCI.2761-12.2013
Humans are willing to incur personal costs to punish others who violate social norms. Such “costly punishment” is an important force for sustaining human cooperation, but the causal neurobiological determinants of punishment decisions remain unclear. Using a combination of behavioral, pharmacological, and neuroimaging techniques, we show that manipulating the serotonin system in humans alters costly punishment decisions by modulating responses to fairness and retaliation in the striatum. Following dietary depletion of the serotonin precursor tryptophan, participants were more likely to punish those who treated them unfairly, and were slower to accept fair exchanges. Neuroimaging data revealed activations in the ventral and dorsal striatum that were associated with fairness and punishment, respectively. Depletion simultaneously reduced ventral striatal responses to fairness and increased dorsal striatal responses during punishment, an effect that predicted its influence on punishment behavior. Finally, we provide behavioral evidence that serotonin modulates specific retaliation, rather than general norm enforcement: depleted participants were more likely to punish unfair behavior directed toward themselves, but not unfair behavior directed toward others. Our findings demonstrate that serotonin modulates social value processing in the striatum, producing context-dependent effects on social behavior.
Jeremy J. Clark, Anne L. Collins, Christina Akers Sanford, and Paul E. M. Phillips
The Journal of Neuroscience, 20 February 2013, 33(8):3526-3532; doi:10.1523/JNEUROSCI.5119-12.2013
Dopamine is highly implicated both as a teaching signal in reinforcement learning and in motivating actions to obtain rewards. However, theoretical disconnects remain between the temporal encoding properties of dopamine neurons and the behavioral consequences of its release. Here, we demonstrate in rats that dopamine evoked by pavlovian cues increases during acquisition, but dissociates from stable conditioned appetitive behavior as this signal returns to preconditioning levels with extended training. Experimental manipulation of the statistical parameters of the behavioral paradigm revealed that this attenuation of cue-evoked dopamine release during the postasymptotic period was attributable to acquired knowledge of the temporal structure of the task. In parallel, conditioned behavior became less dopamine dependent after extended training. Thus, the current work demonstrates that as the presentation of reward-predictive stimuli becomes anticipated through the acquisition of task information, there is a shift in the neurobiological substrates that mediate the motivational properties of these incentive stimuli.
Neurons in Monkey Dorsal Raphe Nucleus Code Beginning and Progress of Step-by-Step Schedule, Reward Expectation, and Amount of Reward Outcome in the Reward Schedule Task
Kiyonori Inaba, Takashi Mizuhiki, Tsuyoshi Setogawa, Koji Toda, Barry J. Richmond, and Munetaka Shidara
The Journal of Neuroscience, 20 February 2013, 33(8):3477-3491; doi:10.1523/JNEUROSCI.4388-12.2013
The dorsal raphe nucleus is the major source of serotonin in the brain. It is connected to brain regions related to reward processing, and its neurons show activity related to predicted reward outcome. Clinical observations also suggest that it is important in maintaining alertness, and its apparent role in addiction seems related to reward processing. Here, we examined whether neurons in dorsal raphe carry signals about reward outcome and task progress during multitrial schedules. We recorded from 98 single neurons in dorsal raphe of two monkeys. The monkeys performed one, two, or three visual discrimination trials (the schedule), obtaining one, two, or three drops of liquid. In the valid cue condition, the length and brightness of a visual cue indicated schedule progress and reward amount, respectively. In the random cue condition, the visual cue was randomly presented with respect to schedule length and reward amount. We found information encoded about (1) schedule onset, (2) reward expectation, (3) reward outcome, and (4) reward amount in the mean firing rates. Information theoretic analysis showed that the temporal variation of the neuronal responses contained additional information related to the progress of the schedule toward the reward, rather than only discriminating schedule onset or reward/no reward. When considered in light of all that is known about the raphe in anatomy, physiology, and behavior, the rich encoding of both task progress and predicted reward outcome makes the raphe a strong candidate for providing signals throughout the brain to coordinate persistent goal-seeking behavior.
Differential Effects of Amygdala, Orbital Prefrontal Cortex, and Prelimbic Cortex Lesions on Goal-Directed Behavior in Rhesus Macaques
Sarah E. V. Rhodes and Elisabeth A. Murray
The Journal of Neuroscience, 20 February 2013, 33(8):3380-3389; doi:10.1523/JNEUROSCI.4374-12.2013
We assessed the involvement of the orbital prefrontal cortex (PFo), the prelimbic region of the medial prefrontal cortex (PL), and the amygdala in goal-directed behavior. Rhesus monkeys were trained on a task in which two different instrumental responses were linked to two different outcomes. One response, called “tap,” required the monkeys to repeatedly touch a colored square on a video monitor to produce one kind of food reward. The other response, called “hold,” required persistent contact of an identical stimulus, and it produced a different kind of food reward. After training, we assessed the effects of sensory-specific reinforcer devaluation as a way to probe each monkey's use of goal-directed behavior. In this procedure, monkeys were allowed to consume one of the two foods to satiety and were then tested for tap/hold preference under extinction. Unoperated control monkeys showed a reduction in the response associated with obtaining the devalued food, called the “devaluation effect,” a hallmark of goal-directed behavior. Monkeys with bilateral lesions of PFo or the amygdala exhibited significantly reduced devaluation effects. Results from monkeys with PL lesions were equivocal. We conclude that both PFo and the amygdala play a significant role in goal-directed behavior in monkeys. Notably, the findings for PFo challenge the idea that orbital and medial prefrontal regions are exclusively dedicated to object- and action-based processes, respectively.
Category-dependent and category-independent goal-value codes in human ventromedial prefrontal cortex
Daniel McNamee, Antonio Rangel & John P O'Doherty
Nature Neuroscience (2013) doi:10.1038/nn.3337
Received 03 December 2012 Accepted 24 January 2013 Published online 17 February 2013
From the O'Doherty lab, in Nature Neuroscience. To choose among different kinds of goods, the brain needs a region that represents value (utility) independently of the type of good. Whether such a region actually exists was tested with fMRI multivoxel pattern analysis; it is found in the medial prefrontal cortex.
To choose between manifestly distinct options, it is suggested that the brain assigns values to goals using a common currency. Although previous studies have reported activity in ventromedial prefrontal cortex (vmPFC) correlating with the value of different goal stimuli, it remains unclear whether such goal-value representations are independent of the associated stimulus categorization, as required by a common currency. Using multivoxel pattern analyses on functional magnetic resonance imaging (fMRI) data, we found a region of medial prefrontal cortex to contain a distributed goal-value code that is independent of stimulus category. More ventrally in the vmPFC, we found spatially distinct areas of the medial orbitofrontal cortex to contain unique category-dependent distributed value codes for food and consumer items. These results implicate the medial prefrontal cortex in the implementation of a common currency and suggest a ventral versus dorsal topographical organization of value signals in the vmPFC.
Ganesh Vigneswaran, Roland Philipp, Roger N. Lemon, Alexander Kraskov
Current Biology, Volume 23, Issue 3, 236-243, 03 January 2013
Evidence is accumulating that neurons in primary motor cortex (M1) respond during action observation [1,2], a property first shown for mirror neurons in monkey premotor cortex. We now show for the first time that the discharge of a major class of M1 output neuron, the pyramidal tract neuron (PTN), is modulated during observation of precision grip by a human experimenter. We recorded 132 PTNs in the hand area of two adult macaques, of which 65 (49%) showed mirror-like activity. Many (38 of 65) increased their discharge during observation (facilitation-type mirror neurons), but a substantial number (27 of 65) exhibited reduced discharge or stopped firing (suppression-type). Simultaneous recordings from arm, hand, and digit muscles confirmed the complete absence of detectable muscle activity during observation. We compared the discharge of the same population of neurons during active grasp by the monkeys. We found that facilitation neurons were only half as active for action observation as for action execution, and that suppression neurons reversed their activity pattern and were actually facilitated during execution. Thus, although many M1 output neurons are active during action observation, M1 direct input to spinal circuitry is either reduced or abolished and may not be sufficient to produce overt muscle activity.
Segregated Encoding of Reward–Identity and Stimulus–Reward Associations in Human Orbitofrontal Cortex
Miriam Cornelia Klein-Flügge, Helen Catharine Barron, Kay Henning Brodersen, Raymond J. Dolan, and Timothy Edward John Behrens
The Journal of Neuroscience, 13 February 2013, 33(7):3202-3211
A dominant focus in studies of learning and decision-making is the neural coding of scalar reward value. This emphasis ignores the fact that choices are strongly shaped by a rich representation of potential rewards. Here, using fMRI adaptation, we demonstrate that responses in the human orbitofrontal cortex (OFC) encode a representation of the specific type of food reward predicted by a visual cue. By controlling for value across rewards and by linking each reward with two distinct stimuli, we could test for representations of reward–identity that were independent of associative information. Our results show reward–identity representations in a medial-caudal region of OFC, independent of the associated predictive stimulus. This contrasts with a more rostro-lateral OFC region encoding reward–identity representations tied to the predictive stimulus. This demonstration of adaptation in OFC to reward-specific representations opens an avenue for investigation of more complex decision mechanisms that are not immediately accessible in standard analyses, which focus on correlates of average activity.
Florent Meyniel, Claire Sergent, Lionel Rigoux, Jean Daunizeau, and Mathias Pessiglione
PNAS February 12, 2013 vol. 110 no. 7 2641-2646
No pain, no gain: the cost–benefit trade-off has been formalized in classical decision theory to account for how we choose whether to engage in effort. However, how the brain decides when to take breaks in the course of effort production remains poorly understood. We propose that decisions to cease and resume work are triggered by a cost evidence accumulation signal reaching upper and lower bounds, respectively. We developed a task in which participants are free to exert a physical effort knowing that their payoff would be proportional to their effort duration. Functional MRI and magnetoencephalography recordings conjointly revealed that the theoretical cost evidence accumulation signal was expressed in proprioceptive regions (bilateral posterior insula). Furthermore, the slopes and bounds of the accumulation process were adapted to the difficulty of the task and the money at stake. Cost evidence accumulation might therefore provide a dynamical mechanistic account of how the human brain maximizes benefits while preventing exhaustion.
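The proposed mechanism, a cost signal that rises during effort until an upper bound triggers a break and then dissipates until a lower bound triggers resumption, can be sketched as a two-threshold accumulator. All rates, bounds, and noise levels below are made-up illustrative values, not the parameters fitted in the study.

```python
import random

def work_rest_cycles(n_steps, upper=10.0, lower=2.0,
                     work_rate=1.0, rest_rate=-1.5, noise=0.3, seed=0):
    """Simulate noisy cost-evidence accumulation between two bounds.

    While working, cost drifts upward; crossing `upper` triggers a break.
    While resting, cost drifts downward; crossing `lower` resumes work.
    Returns the state ('work' or 'rest') at each time step.
    """
    rng = random.Random(seed)
    cost, working = lower, True
    states = []
    for _ in range(n_steps):
        drift = work_rate if working else rest_rate
        cost += drift + rng.gauss(0.0, noise)
        if working and cost >= upper:
            working = False          # upper bound reached: take a break
        elif not working and cost <= lower:
            working = True           # lower bound reached: resume effort
        states.append('work' if working else 'rest')
    return states

states = work_rest_cycles(60)
```

The paper's adaptation result corresponds to moving these parameters: raising `upper` (more money at stake) lengthens work bouts, while steepening `work_rate` (harder task) shortens them.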
David G. Rand, Corina E. Tarnita, Hisashi Ohtsuki, and Martin A. Nowak
PNAS February 12, 2013 vol. 110 no. 7 2581-2586
Classical economic models assume that people are fully rational and selfish, whereas experiments often point to different conclusions. A canonical example is the Ultimatum Game: one player proposes a division of a sum of money between herself and a second player, who either accepts or rejects. Based on rational self-interest, responders should accept any nonzero offer and proposers should offer the smallest possible amount. Traditional, deterministic models of evolutionary game theory agree: in the one-shot anonymous Ultimatum Game, natural selection favors low offers and demands. Experiments instead show a preference for fairness: responders often reject low offers, and proposers make higher offers than needed to avoid rejection. Here we show that in stochastic evolutionary game theory, where agents make mistakes when judging the payoffs and strategies of others, natural selection favors fairness. Across a range of parameters, the average strategy matches the observed behavior: proposers offer between 30% and 50%, and responders demand between 25% and 40%. Rejecting low offers increases relative payoff in pairwise competition between two strategies and is favored when selection is sufficiently weak. Offering more than you demand increases payoff when many strategies are present simultaneously and is favored when mutation is sufficiently high. We also perform a behavioral experiment and find empirical support for these theoretical findings: uncertainty about the success of others is associated with higher demands and offers; and inconsistency in the behavior of others is associated with higher offers but not predictive of demands. In an uncertain world, fairness finishes first.
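The strategy space analyzed here reduces to two numbers per player: the fraction p a proposer offers and the minimum fraction q a responder demands. A minimal sketch of the one-shot game's payoff rule follows; note that the 30-50% and 25-40% figures in the abstract are the model's outputs, not inputs to this code, and the example strategies are hypothetical.

```python
def ultimatum_payoffs(p, q, pot=1.0):
    """Payoffs in the one-shot Ultimatum Game.

    p -- fraction of the pot the proposer offers
    q -- minimum fraction the responder demands
    Returns (proposer_payoff, responder_payoff); both get zero on rejection.
    """
    if p >= q:                       # offer meets the demand: accepted
        return (pot * (1 - p), pot * p)
    return (0.0, 0.0)                # offer rejected: neither player earns

# A purely self-interested pairing (tiny offer, zero demand) versus a
# "fair" pairing in the range the stochastic model ends up favoring:
selfish = ultimatum_payoffs(p=0.01, q=0.0)   # proposer keeps almost everything
fair = ultimatum_payoffs(p=0.4, q=0.3)       # a 60/40 split
```

Deterministic selection rewards the selfish pairing; the paper's point is that once payoff and strategy judgments are noisy, strategies like the fair pairing, which never risk the (0, 0) rejection outcome, spread instead.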
Jack van Honk, Christoph Eisenegger, David Terburg, Dan J. Stein, and Barak Morgan
PNAS February 12, 2013 vol. 110 no. 7 2506-2510
Contemporary economic models hold that instrumental and impulsive behaviors underlie human social decision making. The amygdala is assumed to be involved in social-economic behavior, but its role in human behavior is poorly understood. Rodent research suggests that the basolateral amygdala (BLA) subserves instrumental behaviors and regulates the central-medial amygdala, which subserves impulsive behaviors. The human amygdala, however, typically is investigated as a single unit. If these rodent data could be translated to humans, selective dysfunction of the human BLA might constrain instrumental social-economic decisions and result in more impulsive social-economic choice behavior. Here we show that humans with selective BLA damage and a functional central-medial amygdala invest nearly 100% more money in unfamiliar others in a trust game than do healthy controls. We furthermore show that this generosity is not caused by risk-taking deviations in nonsocial contexts. Moreover, these BLA-damaged subjects do not expect higher returns or perceive people as more trustworthy, implying that their generous investments are not instrumental in nature. These findings suggest that the human BLA is essential for instrumental behaviors in social-economic interactions.
Ventromedial Prefrontal and Anterior Cingulate Cortex Adopt Choice and Default Reference Frames during Sequential Multi-Alternative Choice
Erie D. Boorman, Matthew F. Rushworth, and Tim E. Behrens
The Journal of Neuroscience, 6 February 2013, 33(6):2242-2253; doi:10.1523/JNEUROSCI.3022-12.2013
Although damage to the medial frontal cortex causes profound decision-making impairments, it has been difficult to pinpoint the relative contributions of key anatomical subdivisions. Here we use functional magnetic resonance imaging to examine the contributions of human ventromedial prefrontal cortex (vmPFC) and dorsal anterior cingulate cortex (dACC) during sequential choices between multiple alternatives—two key features of choices made in ecological settings. By carefully constructing options whose current value at any given decision was dissociable from their longer-term value, we were able to examine choices in current and long-term frames of reference. We present evidence showing that activity at choice and feedback in vmPFC and dACC was tied to the current choice and the best long-term option, respectively. vmPFC, mid-cingulate, and posterior cingulate cortex encoded the relative value between the chosen and next best option at each sequential decision, whereas dACC encoded the relative value of adapting choices away from the option with the highest value in the longer term. Furthermore, at feedback we identify temporally dissociable effects that predict repetition of the current choice and adaptation away from the long-term best option in vmPFC and dACC, respectively. These functional dissociations at choice and feedback suggest that sequential choices are subject to competing cortical mechanisms.
Signal Multiplexing and Single-Neuron Computations in Lateral Intraparietal Area During Decision-Making
Miriam L. R. Meister, Jay A. Hennig, and Alexander C. Huk
The Journal of Neuroscience, 6 February 2013, 33(6):2254-2267; doi:10.1523/JNEUROSCI.2984-12.2013
Previous work has revealed a remarkably direct neural correlate of decisions in the lateral intraparietal area (LIP). Specifically, firing rate has been observed to ramp up or down in a manner resembling the accumulation of evidence for a perceptual decision reported by making a saccade into (or away from) the neuron's response field (RF). However, this link between LIP response and decision formation emerged from studies where a saccadic target was always stimulating the RF during decisions, and where the neural correlate was the averaged activity of a restricted sample of neurons. Because LIP cells are (1) highly responsive to the presence of a visual stimulus in the RF, (2) heterogeneous, and (3) not clearly anatomically segregated from large numbers of neurons that fail selection criteria, the underlying neuronal computations are potentially obscured. To address this, we recorded single neuron spiking activity in LIP during a well-studied moving-dot direction–discrimination task and manipulated whether a saccade target was present in the RF during decision-making. We also recorded from a broad sample of LIP neurons, including ones conventionally excluded in prior studies. Our results show that cells multiplex decision signals with decision-irrelevant visual signals. We also observed disparate, repeating response “motifs” across neurons that, when averaged together, resemble traditional ramping decision signals. In sum, neural responses in LIP simultaneously carry decision signals and decision-irrelevant sensory signals while exhibiting diverse dynamics that reveal a broader range of neural computations than previously entertained.
Martin Rolfs, Michael Dambacher, Patrick Cavanagh
Current Biology, Volume 23, Issue 3, 250-254, 10 January 2013
We easily recover the causal properties of visual events, enabling us to understand and predict changes in the physical world. We see a tennis racket hitting a ball and sense that it caused the ball to fly over the net; we may also have an eerie but equally compelling experience of causality if the streetlights turn on just as we slam our car’s door. Both perceptual and cognitive processes have been proposed to explain these spontaneous inferences, but without decisive evidence one way or the other, the question remains wide open [3,4,5,6,7,8]. Here, we address this long-standing debate using visual adaptation—a powerful tool to uncover neural populations that specialize in the analysis of specific visual features [9,10,11,12]. After prolonged viewing of causal collision events called “launches”, subsequently viewed events were judged more often as noncausal. These negative aftereffects of exposure to collisions are spatially localized in retinotopic coordinates, the reference frame shared by the retina and visual cortex. They are not explained by adaptation to other stimulus features and reveal visual routines in retinotopic cortex that detect and adapt to cause and effect in simple collision stimuli.
Christopher J Peck, Brian Lau & C Daniel Salzman
Nature Neuroscience (2013) doi:10.1038/nn.3328
Received 21 September 2012; Accepted 08 January 2013; Published online 03 February 2013.
A stimulus predicting reinforcement can trigger emotional responses, such as arousal, and cognitive ones, such as increased attention toward the stimulus. Neuroscientists have long appreciated that the amygdala mediates spatially nonspecific emotional responses, but it remains unclear whether the amygdala links motivational and spatial representations. To test whether amygdala neurons encode spatial and motivational information, we presented reward-predictive cues in different spatial configurations to monkeys and assessed how these cues influenced spatial attention. Cue configuration and predicted reward magnitude modulated amygdala neural activity in a coordinated fashion. Moreover, fluctuations in activity were correlated with trial-to-trial variability in spatial attention. Thus, the amygdala integrates spatial and motivational information, which may influence the spatial allocation of cognitive resources. These results suggest that amygdala dysfunction may contribute to deficits in cognitive processes normally coordinated with emotional responses, such as the directing of attention toward the location of emotionally relevant stimuli.