Professional Education


  • PhD, University of California, Berkeley and UCSF, Bioengineering (2002)
  • BS, Yale University, Computer Science (1993)

Current Research and Scholarly Interests


How does neural activity in the human cortex create our sense of visual perception? We use a combination of functional magnetic resonance imaging, computational modeling and analysis, and psychophysical measurements to link human perception to cortical brain activity.

All Publications


  • Inverted Encoding Models of Human Population Response Conflate Noise and Neural Tuning Width. The Journal of Neuroscience. Liu, T., Cable, D., Gardner, J. L. 2018; 38 (2): 398–408

    Abstract

    Channel-encoding models offer the ability to bridge different scales of neuronal measurement by interpreting population responses, typically measured with BOLD imaging in humans, as linear sums of groups of neurons (channels) tuned for visual stimulus properties. Inverting these models to form predicted channel responses from population measurements in humans seemingly offers the potential to infer neuronal tuning properties. Here, we test the ability to make inferences about neural tuning width from inverted encoding models. We examined contrast invariance of orientation selectivity in human V1 (both sexes) and found that inverting the encoding model resulted in channel response functions that became broader with lower contrast, thus apparently violating contrast invariance. Simulations showed that this broadening could be explained by contrast-invariant single-unit tuning with the measured decrease in response amplitude at lower contrast. The decrease in response lowers the signal-to-noise ratio of population responses that results in poorer population representation of orientation. Simulations further showed that increasing signal to noise makes channel response functions less sensitive to underlying neural tuning width, and in the limit of zero noise will reconstruct the channel function assumed by the model regardless of the bandwidth of single units. We conclude that our data are consistent with contrast-invariant orientation tuning in human V1. More generally, our results demonstrate that population selectivity measures obtained by encoding models can deviate substantially from the behavior of single units because they conflate neural tuning width and noise and are therefore better used to estimate the uncertainty of decoded stimulus properties.

    Significance statement: It is widely recognized that perceptual experience arises from large populations of neurons, rather than a few single units. Yet, much theory and experiment have examined links between single units and perception. Encoding models offer a way to bridge this gap by explicitly interpreting population activity as the aggregate response of many single neurons with known tuning properties. Here we use this approach to examine contrast-invariant orientation tuning of human V1. We show with experiment and modeling that due to lower signal to noise, contrast-invariant orientation tuning of single units manifests in population response functions that broaden at lower contrast, rather than remain contrast-invariant. These results highlight the need for explicit quantitative modeling when making a reverse inference from population response profiles to single-unit responses.

    View details for DOI 10.1523/JNEUROSCI.2453-17.2017

    View details for PubMedID 29167406
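    As a rough illustration of the inverted encoding analysis examined in this paper, the forward model and its inversion can be sketched in a few lines of NumPy. The rectified-cosine channel basis, channel count, and noise level below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_vox, n_train, n_test = 8, 50, 200, 100
centers = np.arange(n_chan) * 180.0 / n_chan          # channel preferred orientations (deg)

def channel_responses(oris):
    """Rectified-cosine channel basis with 180-deg periodicity (illustrative)."""
    d = np.deg2rad(oris[:, None] - centers[None, :])
    return np.clip(np.cos(2 * d), 0, None) ** 5

oris = rng.uniform(0, 180, n_train + n_test)
C = channel_responses(oris)                           # trials x channels
W = rng.normal(size=(n_chan, n_vox))                  # simulated channel-to-voxel weights
B = C @ W + 0.5 * rng.normal(size=(len(oris), n_vox)) # noisy "BOLD" responses

# Forward model fit: estimate weights from the training trials ...
W_hat = np.linalg.lstsq(C[:n_train], B[:n_train], rcond=None)[0]
# ... then invert the model to get channel responses on held-out trials
C_hat = B[n_train:] @ np.linalg.pinv(W_hat)
```

    Because C_hat depends on both the assumed channel basis and the noise level, its width cannot be read directly as single-unit tuning width, which is the paper's central caution.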

  • A Switching Observer for Human Perceptual Estimation. Neuron. Laquitaine, S., Gardner, J. L. 2017

    Abstract

    Human perceptual inference has been fruitfully characterized as a normative Bayesian process in which sensory evidence and priors are multiplicatively combined to form posteriors from which sensory estimates can be optimally read out. We tested whether this basic Bayesian framework could explain human subjects' behavior in two estimation tasks in which we varied the strength of sensory evidence (motion coherence or contrast) and priors (set of directions or orientations). We found that despite excellent agreement of the estimates' mean and variability with a Basic Bayesian observer model, the estimate distributions were bimodal with unpredicted modes near the prior and the likelihood. We developed a model that switched between prior and sensory evidence rather than integrating the two, which better explained the data than the Basic and several other Bayesian observers. Our data suggest that humans can approximate Bayesian optimality with a switching heuristic that forgoes multiplicative combination of priors and likelihoods.

    View details for DOI 10.1016/j.neuron.2017.12.011

    View details for PubMedID 29290551
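    The distinction the paper draws between multiplicative integration and switching can be illustrated with a toy observer. The Gaussian widths and the switching probability below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-90, 90, 361)

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

prior = gauss(grid, 0.0, 10.0)   # prior over direction, centered at 0 deg
like = gauss(grid, 30.0, 20.0)   # sensory likelihood on one trial

# Basic Bayesian observer: multiply prior and likelihood, read out the posterior mean
post = prior * like
post /= post.sum()
bayes_estimate = (grid * post).sum()     # falls between prior (0) and likelihood (30)

# Switching observer: on each trial, commit to the prior OR the sensory evidence
p_prior = 0.4                            # illustrative switching probability
use_prior = rng.random(10_000) < p_prior
switch_estimates = np.where(use_prior, 0.0, 30.0)   # bimodal across trials
```

    The Bayesian estimate is unimodal and falls between prior and likelihood, whereas the switching observer yields a bimodal estimate distribution of the kind reported here.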

  • Adaptable history biases in human perceptual decisions. Proceedings of the National Academy of Sciences of the United States of America. Abrahamyan, A., Silva, L. L., Dakin, S. C., Carandini, M., Gardner, J. L. 2016; 113 (25): E3548–E3557

    Abstract

    When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject's default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.

    View details for DOI 10.1073/pnas.1518786113

    View details for Web of Science ID 000378272400014

    View details for PubMedID 27330086
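    The logistic-regression approach used to capture these biases can be sketched as follows. The regressor coding (previous choice split by success and failure), the weight values, and the Newton fitting loop are illustrative assumptions, not the published model's exact specification:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Regressors: current stimulus, previous choice after a success, previous
# choice after a failure. A negative failure weight means "switch after failure".
w_true = np.array([1.5, 0.2, -0.8])
n = 5000
stim = rng.choice([-1.0, 1.0], n)

X = np.zeros((n, 3))
choice = np.zeros(n)
prev_choice, prev_correct = 0.0, True
for t in range(n):
    X[t] = [stim[t],
            prev_choice if prev_correct else 0.0,
            prev_choice if not prev_correct else 0.0]
    p_right = sigmoid(X[t] @ w_true)
    choice[t] = 1.0 if rng.random() < p_right else -1.0
    prev_choice, prev_correct = choice[t], choice[t] == stim[t]

# Fit by Newton's method (iteratively reweighted least squares)
y = (choice + 1) / 2
w = np.zeros(3)
for _ in range(25):
    p = sigmoid(X @ w)
    H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-9 * np.eye(3)
    w += np.linalg.solve(H, X.T @ (y - p))
```

    With enough trials the recovered weights approximate the generating ones; a negative failure weight reproduces the switch-after-failure bias.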

  • Cortical Correlates of Human Motion Perception Biases. Journal of Neuroscience. Vintch, B., Gardner, J. L. 2014; 34 (7): 2592–2604

    Abstract

    Human sensory perception is not a faithful reproduction of the sensory environment. For example, at low contrast, objects appear to move slower and flicker faster than veridical. Although these biases have been observed robustly, their neural underpinning is unknown, thus suggesting a possible disconnect of the well established link between motion perception and cortical responses. We used functional imaging to examine the encoding of speed in the human cortex at the scale of neuronal populations and asked where and how these biases are encoded. Decoding, voxel population, and forward-encoding analyses revealed biases toward slow speeds and high temporal frequencies at low contrast in the earliest visual cortical regions, matching perception. These findings thus offer a resolution to the disconnect between cortical responses and motion perception in humans. Moreover, biases in speed perception are considered a leading example of Bayesian inference because they can be interpreted as a prior for slow speeds. Therefore, our data suggest that perceptual priors of this sort can be encoded by neural populations in the same early cortical areas that provide sensory evidence.

    View details for DOI 10.1523/JNEUROSCI.2809-13.2014

    View details for Web of Science ID 000331614700021

    View details for PubMedID 24523549

  • Attentional Enhancement via Selection and Pooling of Early Sensory Responses in Human Visual Cortex. Neuron. Pestilli, F., Carrasco, M., Heeger, D. J., Gardner, J. L. 2011; 72 (5): 832–846

    Abstract

    The computational processes by which attention improves behavioral performance were characterized by measuring visual cortical activity with functional magnetic resonance imaging as humans performed a contrast-discrimination task with focal and distributed attention. Focal attention yielded robust improvements in behavioral performance accompanied by increases in cortical responses. Quantitative analysis revealed that if performance were limited only by the sensitivity of the measured sensory signals, the improvements in behavioral performance would have corresponded to an unrealistically large reduction in response variability. Instead, behavioral performance was well characterized by a pooling and selection process for which the largest sensory responses, those most strongly modulated by attention, dominated the perceptual decision. This characterization predicts that high-contrast distracters that evoke large responses should negatively impact behavioral performance. We tested and confirmed this prediction. We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.

    View details for DOI 10.1016/j.neuron.2011.09.025

    View details for Web of Science ID 000297971100016

    View details for PubMedID 22153378

  • A quantitative framework for motion visibility in human cortex. Journal of Neurophysiology. Birman, D., Gardner, J. L. 2018

    Abstract

    Despite the central use of motion visibility to reveal the neural basis of perception, perceptual decision making, and sensory inference, there exists no comprehensive quantitative framework establishing how motion visibility parameters modulate human cortical response. Random-dot motion stimuli can be made less visible by reducing image contrast or motion coherence, or by shortening the stimulus duration. Because each of these manipulations modulates the strength of sensory neural responses, they have all been extensively used to reveal cognitive and other non-sensory phenomena such as the influence of priors, attention, and choice-history biases. However, each of these manipulations is thought to influence response in different ways across different cortical regions, and a comprehensive study is required to interpret this literature. Here, human participants observed random-dot stimuli varying across a large range of contrast, coherence, and stimulus durations as we measured blood-oxygen-level dependent responses. We developed a framework for modeling these responses which quantifies their functional form and sensitivity across areas. Our framework demonstrates the sensitivity of all visual areas to each parameter, with early visual areas V1-V4 showing more parametric sensitivity to changes in contrast and V3A and MT to coherence. Our results suggest that while motion contrast, coherence, and duration share cortical representation, they are encoded with distinct functional forms and sensitivity. Thus, our quantitative framework serves as a reference for interpretation of the vast perceptual literature manipulating these parameters and shows that different manipulations of visibility will have different effects across human visual cortex and need to be interpreted accordingly.

    View details for DOI 10.1152/jn.00433.2018

    View details for PubMedID 29995608

  • Task-dependent enhancement of facial expression and identity representations in human cortex. NeuroImage. Dobs, K., Schultz, J., Buelthoff, I., Gardner, J. L. 2018; 172: 689–702

    Abstract

    What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement.

    View details for DOI 10.1016/j.neuroimage.2018.02.013

    View details for Web of Science ID 000430364100058

    View details for PubMedID 29432802

  • Parietal and prefrontal: categorical differences? Nature Neuroscience. Birman, D., Gardner, J. L. 2015; 19 (1): 5–7

    View details for DOI 10.1038/nn.4204

    View details for PubMedID 26713741

  • A Case for Human Systems Neuroscience. Neuroscience. Gardner, J. L. 2015; 296: 130–137

    Abstract

    Can the human brain itself serve as a model for a systems neuroscience approach to understanding the human brain? After all, how the brain is able to create the richness and complexity of human behavior is still largely mysterious. What better choice to study that complexity than to study it in humans? However, measurements of brain activity typically need to be made non-invasively which puts severe constraints on what can be learned about the internal workings of the brain. Our approach has been to use a combination of psychophysics in which we can use human behavioral flexibility to make quantitative measurements of behavior and link those through computational models to measurements of cortical activity through magnetic resonance imaging. In particular, we have tested various computational hypotheses about what neural mechanisms could account for behavioral enhancement with spatial attention (Pestilli et al., 2011). Resting both on quantitative measurements and considerations of what is known through animal models, we concluded that weighting of sensory signals by the magnitude of their response is a neural mechanism for efficient selection of sensory signals and consequent improvements in behavioral performance with attention. While animal models have many technical advantages over studying the brain in humans, we believe that human systems neuroscience should endeavor to validate, replicate and extend basic knowledge learned from animal model systems and thus form a bridge to understanding how the brain creates the complex and rich cognitive capacities of humans.

    View details for DOI 10.1016/j.neuroscience.2014.06.052

    View details for Web of Science ID 000353828300015

    View details for PubMedID 24997268

  • Encoding of graded changes in spatial specificity of prior cues in human visual cortex. Journal of Neurophysiology. Hara, Y., Gardner, J. L. 2014; 112 (11): 2834–2849

    Abstract

    Prior information about the relevance of spatial locations can vary in specificity; a single location, a subset of locations, or all locations may be of potential importance. Using a contrast-discrimination task with four possible targets, we asked whether performance benefits are graded with the spatial specificity of a prior cue and whether we could quantitatively account for behavioral performance with cortical activity changes measured by blood oxygenation level-dependent (BOLD) imaging. Thus we changed the prior probability that each location contained the target from 100 to 50 to 25% by cueing in advance 1, 2, or 4 of the possible locations. We found that behavioral performance (discrimination thresholds) improved in a graded fashion with spatial specificity. However, concurrently measured cortical responses from retinotopically defined visual areas were not strictly graded; response magnitude decreased when all 4 locations were cued (25% prior probability) relative to the 100 and 50% prior probability conditions, but no significant difference in response magnitude was found between the 100 and 50% prior probability conditions for either cued or uncued locations. Also, although cueing locations increased responses relative to noncueing, this cue sensitivity was not graded with prior probability. Furthermore, contrast sensitivity of cortical responses, which could improve contrast discrimination performance, was not graded. Instead, an efficient-selection model showed that even if sensory responses do not strictly scale with prior probability, selection of sensory responses by weighting larger responses more can result in graded behavioral performance benefits with increasing spatial specificity of prior information.

    View details for DOI 10.1152/jn.00729.2013

    View details for Web of Science ID 000346023000015

    View details for PubMedID 25185808

  • Functional Signalers of Changes in Visual Stimuli: Cortical Responses to Increments and Decrements in Motion Coherence. Cerebral Cortex. Costagli, M., Ueno, K., Sun, P., Gardner, J. L., Wan, X., Ricciardi, E., Pietrini, P., Tanaka, K., Cheng, K. 2014; 24 (1): 110–118

    Abstract

    How does our brain detect changes in a natural scene? While changes by increments of specific visual attributes, such as contrast or motion coherence, can be signaled by an increase in neuronal activity in early visual areas, like the primary visual cortex (V1) or the human middle temporal complex (hMT+), respectively, the mechanisms for signaling changes resulting from decrements in a stimulus attribute are largely unknown. We have discovered opposing patterns of cortical responses to changes in motion coherence: unlike areas hMT+, V3A and parieto-occipital complex (V6+) that respond to changes in the level of motion coherence monotonically, human areas V4 (hV4), V3B, and ventral occipital always respond positively to both transient increments and decrements. This pattern of responding always positively to stimulus changes can emerge in the presence of either coherence-selective neuron populations, or neurons that are not tuned to particular coherences but adapt to a particular coherence level in a stimulus-selective manner. Our findings provide evidence that these areas possess physiological properties suited for signaling increments and decrements in a stimulus and may form a part of cortical vigilance system for detecting salient changes in the environment.

    View details for DOI 10.1093/cercor/bhs294

    View details for Web of Science ID 000328373300007

    View details for PubMedID 23010749

  • Demonstration of Tuning to Stimulus Orientation in the Human Visual Cortex: A High-Resolution fMRI Study with a Novel Continuous and Periodic Stimulation Paradigm. Cerebral Cortex. Sun, P., Gardner, J. L., Costagli, M., Ueno, K., Waggoner, R. A., Tanaka, K., Cheng, K. 2013; 23 (7): 1618–1629

    Abstract

    Cells in the animal early visual cortex are sensitive to contour orientations and form repeated structures known as orientation columns. At the behavioral level, there exist 2 well-known global biases in orientation perception (oblique effect and radial bias) in both animals and humans. However, their neural bases are still under debate. To unveil how these behavioral biases are achieved in the early visual cortex, we conducted high-resolution functional magnetic resonance imaging experiments with a novel continuous and periodic stimulation paradigm. By inserting resting recovery periods between successive stimulation periods and introducing a pair of orthogonal stimulation conditions that differed by 90° continuously, we focused on analyzing a blood oxygenation level-dependent response modulated by the change in stimulus orientation and reliably extracted orientation preferences of single voxels. We found that there are more voxels preferring horizontal and vertical orientations, a physiological substrate underlying the oblique effect, and that these over-representations of horizontal and vertical orientations are prevalent in the cortical regions near the horizontal- and vertical-meridian representations, a phenomenon related to the radial bias. Behaviorally, we also confirmed that there exists perceptual superiority for horizontal and vertical orientations around horizontal and vertical meridians, respectively. Our results, thus, refined the neural mechanisms of these 2 global biases in orientation perception.

    View details for DOI 10.1093/cercor/bhs149

    View details for Web of Science ID 000321163700012

    View details for PubMedID 22661413

  • Modulation of Visual Responses by Gaze Direction in Human Visual Cortex. Journal of Neuroscience. Merriam, E. P., Gardner, J. L., Movshon, J. A., Heeger, D. J. 2013; 33 (24): 9879–9889

    Abstract

    To locate visual objects, the brain combines information about retinal location and direction of gaze. Studies in monkeys have demonstrated that eye position modulates the gain of visual signals with "gain fields," so that single neurons represent both retinotopic location and eye position. We wished to know whether eye position and retinotopic stimulus location are both represented in human visual cortex. Using functional magnetic resonance imaging, we measured separately for each of several different gaze positions cortical responses to stimuli that varied periodically in retinal locus. Visually evoked responses were periodic following the periodic retinotopic stimulation. Only the response amplitudes depended on eye position; response phases were indistinguishable across eye positions. We used multivoxel pattern analysis to decode eye position from the spatial pattern of response amplitudes. The decoder reliably discriminated eye position in five of the early visual cortical areas by taking advantage of a spatially heterogeneous eye position-dependent modulation of cortical activity. We conclude that responses in retinotopically organized visual cortical areas are modulated by gain fields qualitatively similar to those previously observed neurophysiologically.

    View details for DOI 10.1523/JNEUROSCI.0500-12.2013

    View details for Web of Science ID 000320235300003

    View details for PubMedID 23761883

  • Learning to Simulate Others' Decisions. Neuron. Suzuki, S., Harasawa, N., Ueno, K., Gardner, J. L., Ichinohe, N., Haruno, M., Cheng, K., Nakahara, H. 2012; 74 (6): 1125–1137

    Abstract

    A fundamental challenge in social cognition is how humans learn another person's values to predict their decision-making behavior. This form of learning is often assumed to require simulation of the other by direct recruitment of one's own valuation process to model the other's process. However, the cognitive and neural mechanism of simulation learning is not known. Using behavior, modeling, and fMRI, we show that simulation involves two learning signals in a hierarchical arrangement. A simulated-other's reward prediction error processed in ventromedial prefrontal cortex mediated simulation by direct recruitment, being identical for valuation of the self and simulated-other. However, direct recruitment was insufficient for learning, and also required observation of the other's choices to generate a simulated-other's action prediction error encoded in dorsomedial/dorsolateral prefrontal cortex. These findings show that simulation uses a core prefrontal circuit for modeling the other's valuation to generate prediction and an adjunct circuit for tracking behavioral variation to refine prediction.

    View details for DOI 10.1016/j.neuron.2012.04.030

    View details for Web of Science ID 000305659700017

    View details for PubMedID 22726841

  • Feature-Specific Attentional Priority Signals in Human Cortex. Journal of Neuroscience. Liu, T., Hospadaruk, L., Zhu, D. C., Gardner, J. L. 2011; 31 (12): 4484–4495

    Abstract

    Humans can flexibly attend to a variety of stimulus dimensions, including spatial location and various features such as color and direction of motion. Although the locus of spatial attention has been hypothesized to be represented by priority maps encoded in several dorsal frontal and parietal areas, it is unknown how the brain represents attended features. Here we examined the distribution and organization of neural signals related to deployment of feature-based attention. Subjects viewed a compound stimulus containing two superimposed motion directions (or colors) and were instructed to perform an attention-demanding task on one of the directions (or colors). We found elevated and sustained functional magnetic resonance imaging response for the attention task compared with a neutral condition, without reliable differences in overall response amplitude between attending to different features. However, using multivoxel pattern analysis, we were able to decode the attended feature in both early visual areas (primary visual cortex to human motion complex hMT+) and frontal and parietal areas (e.g., intraparietal sulcus areas IPS1-IPS4 and frontal eye fields) that are commonly associated with spatial attention. Furthermore, analysis of the classifier weight maps showed that attending to motion and color evoked different patterns of activity, suggesting that different neuronal subpopulations in these regions are recruited for attending to different feature dimensions. Thus, our finding suggests that, rather than a purely spatial representation of priority, frontal and parietal cortical areas also contain multiplexed signals related to the priority of different nonspatial features.

    View details for DOI 10.1523/JNEUROSCI.5745-10.2011

    View details for Web of Science ID 000288750700015

    View details for PubMedID 21430149

  • Is cortical vasculature functionally organized? NeuroImage. Gardner, J. L. 2010; 49 (3): 1953–1956

    Abstract

    The cortical vasculature is a well-structured and organized system, but the extent to which it is organized with respect to the neuronal functional architecture is unknown. In particular, does vasculature follow the same functional organization as cortical columns? In principle, cortical columns that share tuning for stimulus features like orientation may often be active together and thus require oxygen and metabolic nutrients together. If the cortical vasculature is built to serve these needs, it may also tend to aggregate and amplify orientation specific signals and explain why they are available in fMRI data at very low resolution.

    View details for DOI 10.1016/j.neuroimage.2009.07.004

    View details for Web of Science ID 000273626400003

    View details for PubMedID 19596071

  • Differential roles for frontal eye fields (FEFs) and intraparietal sulcus (IPS) in visual working memory and visual attention. Journal of Vision. Offen, S., Gardner, J. L., Schluppeck, D., Heeger, D. J. 2010; 10 (11)

    Abstract

    Cortical activity was measured with functional magnetic resonance imaging to probe the involvement of the superior precentral sulcus (including putative human frontal eye fields, FEFs) and the intraparietal sulcus (IPS) in visual short-term memory and visual attention. In two experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. An earlier study (S. Offen, D. Schluppeck, & D. J. Heeger, 2009) had found a dissociation in early visual cortex that suggested different computational mechanisms underlying the two processes. In contrast, the results reported here show that the patterns of activation in prefrontal and parietal cortex were different from one another but were similar for the two tasks. In particular, the FEF showed evidence for sustained delay period activity for both the working memory and the attention task, while the IPS did not show evidence for sustained delay period activity for either task. The results imply differential roles for the FEF and IPS in these tasks; the results also suggest that feedback of sustained activity from frontal cortex to visual cortex might be gated by task demands.

    View details for DOI 10.1167/10.11.28

    View details for Web of Science ID 000283783500028

    View details for PubMedID 20884523

  • Executed and Observed Movements Have Different Distributed Representations in Human aIPS. Journal of Neuroscience. Dinstein, I., Gardner, J. L., Jazayeri, M., Heeger, D. J. 2008; 28 (44): 11231–11239

    Abstract

    How similar are the representations of executed and observed hand movements in the human brain? We used functional magnetic resonance imaging (fMRI) and multivariate pattern classification analysis to compare spatial distributions of cortical activity in response to several observed and executed movements. Subjects played the rock-paper-scissors game against a videotaped opponent, freely choosing their movement on each trial and observing the opponent's hand movement after a short delay. The identities of executed movements were correctly classified from fMRI responses in several areas of motor cortex, observed movements were classified from responses in visual cortex, and both observed and executed movements were classified from responses in either left or right anterior intraparietal sulcus (aIPS). We interpret above chance classification as evidence for reproducible, distributed patterns of cortical activity that were unique for execution and/or observation of each movement. Responses in aIPS enabled accurate classification of movement identity within each modality (visual or motor), but did not enable accurate classification across modalities (i.e., decoding observed movements from a classifier trained on executed movements and vice versa). These results support theories regarding the central role of aIPS in the perception and execution of movements. However, the spatial pattern of activity for a particular observed movement was distinctly different from that for the same movement when executed, suggesting that observed and executed movements are mostly represented by distinctly different subpopulations of neurons in aIPS.

    View details for DOI 10.1523/JNEUROSCI.3585-08.2008

    View details for Web of Science ID 000260502400014

    View details for PubMedID 18971465
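    The within- versus cross-modality decoding logic can be illustrated with simulated patterns and a nearest-centroid classifier (a simplified stand-in for the classifiers used in the study). The voxel counts, noise level, and the split of voxels into executed- and observed-driven subpopulations are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_rep, moves = 100, 40, np.arange(3)   # three movements: rock, paper, scissors

# Simulate distinct subpopulations: executed movements drive one half of the
# voxels, observed movements the other half.
pat_exec = np.zeros((3, n_vox)); pat_exec[:, :50] = rng.normal(size=(3, 50))
pat_obs = np.zeros((3, n_vox)); pat_obs[:, 50:] = rng.normal(size=(3, 50))

def make_trials(pats):
    """Repeat each movement pattern n_rep times and add trial noise."""
    X = np.repeat(pats, n_rep, axis=0) + 0.5 * rng.normal(size=(3 * n_rep, n_vox))
    return X, np.repeat(moves, n_rep)

def nearest_centroid(X_train, y_train, X_test):
    cents = np.stack([X_train[y_train == m].mean(axis=0) for m in moves])
    return ((X_test[:, None, :] - cents[None]) ** 2).sum(-1).argmin(1)

X_tr, y_tr = make_trials(pat_exec)            # train on executed movements
X_within, y_within = make_trials(pat_exec)    # test within modality
X_cross, y_cross = make_trials(pat_obs)       # test across modality

within_acc = (nearest_centroid(X_tr, y_tr, X_within) == y_within).mean()
cross_acc = (nearest_centroid(X_tr, y_tr, X_cross) == y_cross).mean()
```

    Within-modality accuracy should be high while cross-modality accuracy stays near chance (1/3), mirroring the dissociation reported for aIPS.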

  • Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. Journal of Neuroscience. Gardner, J. L., Merriam, E. P., Movshon, J. A., Heeger, D. J. 2008; 28 (15): 3988–3999

    Abstract

    We experience the visual world as phenomenally invariant to eye position, but almost all cortical maps of visual space in monkeys use a retinotopic reference frame, that is, the cortical representation of a point in the visual world is different across eye positions. It was recently reported that human cortical area MT (unlike monkey MT) represents stimuli in a reference frame linked to the position of stimuli in space, a "spatiotopic" reference frame. We used visuotopic mapping with blood oxygen level-dependent functional magnetic resonance imaging signals to define 12 human visual cortical areas, and then determined whether the reference frame in each area was spatiotopic or retinotopic. We found that all 12 areas, including MT, represented stimuli in a retinotopic reference frame. Although there were patches of cortex in and around these visual areas that were ostensibly spatiotopic, none of these patches exhibited reliable stimulus-evoked responses. We conclude that the early, visuotopically organized visual cortical areas in the human brain (like their counterparts in the monkey brain) represent stimuli in a retinotopic reference frame.

    View details for DOI 10.1523/JNEUROSCI.5476-07.2008

    View details for Web of Science ID 000255012400018

    View details for PubMedID 18400898

    View details for PubMedCentralID PMC2515359
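
The retinotopic-versus-spatiotopic distinction comes down to a change of coordinates: a retinotopic response depends on stimulus position relative to the eye, not on the screen. A minimal sketch with a hypothetical Gaussian-tuned voxel (all parameter values illustrative):

```python
import numpy as np

def voxel_response(stim_screen_deg, eye_pos_deg, pref_retinal_deg=5.0, sigma=2.0):
    """Hypothetical retinotopic voxel: tuned to position on the *retina*,
    i.e. screen position minus eye position."""
    retinal = stim_screen_deg - eye_pos_deg
    return np.exp(-(retinal - pref_retinal_deg) ** 2 / (2 * sigma ** 2))

# Same screen position, two eye positions: a retinotopic voxel's response changes...
r_left  = voxel_response(stim_screen_deg=5.0, eye_pos_deg=0.0)
r_right = voxel_response(stim_screen_deg=5.0, eye_pos_deg=5.0)

# ...but shifting the stimulus along with the eyes (fixed retinal position)
# leaves the response unchanged.
r_shifted = voxel_response(stim_screen_deg=10.0, eye_pos_deg=5.0)
```

A spatiotopic voxel would show the opposite pattern: invariance to eye position at a fixed screen location. This is the signature the study tested for in each of the 12 visual areas.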

  • A temporal frequency-dependent functional architecture in human V1 revealed by high-resolution fMRI NATURE NEUROSCIENCE Sun, P., Ueno, K., Waggoner, R. A., Gardner, J. L., Tanaka, K., Cheng, K. 2007; 10 (11): 1404-1406

    Abstract

    Although cortical neurons with similar functional properties often cluster together in a columnar organization, only ocular dominance columns, the columnar structure representing segregated anatomical input (from one of the two eyes), have been found in human primary visual cortex (V1). It has yet to be shown whether other columnar organizations that arise only from differential responses to stimulus properties also exist in human V1. Using high-resolution functional magnetic resonance imaging, we have found such a functional architecture containing domains that respond preferentially to either low or high temporal frequency.

    View details for DOI 10.1038/nn1983

    View details for Web of Science ID 000250508400017

    View details for PubMedID 17934459

  • Contrast adaptation and representation in human early visual cortex NEURON Gardner, J. L., Sun, P., Waggoner, R. A., Ueno, K., Tanaka, K., Cheng, K. 2005; 47 (4): 607-620

    Abstract

    The human visual system can distinguish variations in image contrast over a much larger range than measurements of the static relationship between contrast and response in visual cortex would suggest. This discrepancy may be explained if adaptation serves to re-center contrast response functions around the ambient contrast, yet experiments on humans have yet to report such an effect. By using event-related fMRI and a data-driven analysis approach, we found that contrast response functions in V1, V2, and V3 shift to approximately center on the adapting contrast. Furthermore, we discovered that, unlike earlier areas, human V4 (hV4) responds positively to contrast changes, whether increments or decrements, suggesting that hV4 does not faithfully represent contrast, but instead responds to salient changes. These findings suggest that the visual system discounts slow uninformative changes in contrast with adaptation, yet remains exquisitely sensitive to changes that may signal important events in the environment.

    View details for DOI 10.1016/j.neuron.2005.07.016

    View details for Web of Science ID 000231411100016

    View details for PubMedID 16102542

    View details for PubMedCentralID PMC1475737
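
The re-centering idea can be made concrete with a standard descriptive contrast response function. The sketch below uses a Naka-Rushton form, a common choice for such curves; the parameter values are illustrative, not fits to the fMRI data. Modeling adaptation as shifting the semisaturation contrast to the ambient contrast places the response at half-maximum, on the steepest part of the curve, at every adapting level.

```python
def contrast_response(c, c50, r_max=1.0, n=2.0):
    """Naka-Rushton contrast response function (a standard descriptive form;
    parameter values here are illustrative)."""
    return r_max * c**n / (c**n + c50**n)

# Adaptation modeled as re-centering: the semisaturation contrast c50 shifts
# to the ambient (adapting) contrast, as found for V1, V2, and V3.
half_points = [contrast_response(a, c50=a) for a in (0.05, 0.2, 0.8)]
```

At each adapting contrast the re-centered function returns 0.5, i.e. half its maximum, so sensitivity to small contrast changes stays high around the ambient level even though absolute contrast is discounted.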

  • A population decoding framework for motion aftereffects on smooth pursuit eye movements JOURNAL OF NEUROSCIENCE Gardner, J. L., Tokiyama, S. N., Lisberger, S. G. 2004; 24 (41): 9035-9048

    Abstract

    Both perceptual and motor systems must decode visual information from the distributed activity of large populations of cortical neurons. We have sought a common framework for understanding decoding strategies for visually guided movement and perception by asking whether the strong motion aftereffects seen in the perceptual domain lead to similar expressions in motor output. We found that motion adaptation indeed has strong sequelae in the direction and speed of smooth pursuit eye movements. After adaptation with a stimulus that moves in a given direction for 7 sec, the direction of pursuit is repelled from the direction of pursuit targets that move within 90 degrees of the adapting direction. The speed of pursuit decreases for targets that move at the direction and speed of the adapting stimulus, and is repelled from the adapting speed: the decrease becomes larger when tracking targets move slower than the adapting speed and smaller (eventually turning into an increase) when they move faster. The effects of adaptation are spatially specific and fixed to the retinal location of the adapting stimulus. The magnitudes of adaptation of pursuit speed and direction are uncorrelated, suggesting that the two parameters are decoded independently. Computer simulation of motion adaptation in the middle temporal visual area (MT) shows that vector-averaging decoding of the population response in MT can account for the effects of adaptation on the direction of pursuit. Our results suggest a unified framework for thinking, in terms of population decoding, about motion adaptation for both perception and action.

    View details for DOI 10.1523/JNEUROSCI.0337-04.2004

    View details for Web of Science ID 000224461800015

    View details for PubMedID 15483122
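
The vector-averaging account can be sketched in a few lines. The model below is a hypothetical simplification of the paper's MT simulation: units with von Mises direction tuning, with adaptation modeled as a gain reduction for units tuned near the adapting direction, and the decoded direction taken as the response-weighted circular mean of preferred directions.

```python
import numpy as np

prefs = np.deg2rad(np.arange(0, 360, 5))   # preferred directions of 72 model MT units

def population(direction_deg, adapt_deg=None, kappa=4.0, strength=0.5):
    """Von Mises-tuned responses; adaptation is modeled (hypothetically) as a
    gain reduction for units tuned near the adapting direction."""
    resp = np.exp(kappa * (np.cos(np.deg2rad(direction_deg) - prefs) - 1))
    if adapt_deg is not None:
        resp *= 1 - strength * np.exp(kappa * (np.cos(np.deg2rad(adapt_deg) - prefs) - 1))
    return resp

def vector_average(resp):
    """Decode direction as the response-weighted circular mean of preferences."""
    return np.rad2deg(np.arctan2(resp @ np.sin(prefs), resp @ np.cos(prefs))) % 360

baseline = vector_average(population(30))               # unadapted: decodes ~30 deg
adapted  = vector_average(population(30, adapt_deg=0))  # after adapting at 0 deg
```

Because adaptation suppresses the flank of the population response nearest the adapter, the decoded direction is pushed away from the adapting direction, the same repulsion seen in pursuit.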

  • Directional anisotropies reveal a functional segregation of visual motion processing for perception and action NEURON Churchland, A. K., Gardner, J. L., Chou, I. H., Priebe, N. J., Lisberger, S. G. 2003; 37 (6): 1001-1011

    Abstract

    Humans exhibit an anisotropy in direction perception: discrimination is superior when motion is around horizontal or vertical rather than diagonal axes. In contrast to the consistent directional anisotropy in perception, we found only small idiosyncratic anisotropies in smooth pursuit eye movements, a motor action requiring accurate discrimination of visual motion direction. Both pursuit and perceptual direction discrimination rely on signals from the middle temporal visual area (MT), yet analysis of multiple measures of MT neuronal responses in the macaque failed to provide evidence of a directional anisotropy. We conclude that MT represents different motion directions uniformly, and subsequent processing creates a directional anisotropy in pathways unique to perception. Our data support the hypothesis that, at least for visual motion, perception and action are guided by inputs from separate sensory streams. The directional anisotropy of perception appears to originate after the two streams have segregated and downstream from area MT.

    View details for Web of Science ID 000181899600013

    View details for PubMedID 12670428

  • Serial linkage of target selection for orienting and tracking eye movements NATURE NEUROSCIENCE Gardner, J. L., Lisberger, S. G. 2002; 5 (9): 892-899

    Abstract

    Many natural actions require the coordination of two different kinds of movements. How are targets chosen under these circumstances: do central commands instruct different movement systems in parallel, or does the execution of one movement activate a serial chain that automatically chooses targets for the other movement? We examined a natural eye tracking action that consists of orienting saccades and tracking smooth pursuit eye movements, and found strong physiological evidence for a serial strategy. Monkeys chose freely between two identical spots that appeared at different sites in the visual field and moved in orthogonal directions. If a saccade was evoked to one of the moving targets by microstimulation in either the frontal eye field (FEF) or the superior colliculus (SC), then the same target was automatically chosen for pursuit. Our results imply that the neural signals responsible for saccade execution can also act as an internal command of target choice for other movement systems.

    View details for DOI 10.1038/nn897

    View details for Web of Science ID 000177656300020

    View details for PubMedID 12145637

    View details for PubMedCentralID PMC2548313

  • Linked target selection for saccadic and smooth pursuit eye movements JOURNAL OF NEUROSCIENCE Gardner, J. L., Lisberger, S. G. 2001; 21 (6): 2075-2084

    Abstract

    In natural situations, the motor system must often choose a single target when multiple distractors are present. The present paper asks how primate smooth pursuit eye movements choose targets, through analysis of a natural target-selection task. Monkeys tracked two targets that started 1.5 degrees eccentric and moved in different directions (up, right, down, and left) toward the position of fixation. As expected from previous results, the smooth pursuit before the first saccade reflected a vector average of the responses to the two target motions individually. However, post-saccadic smooth eye velocity showed enhancement that was spatially selective for the motion at the endpoint of the saccade. If the saccade endpoint was close to one of the two targets, creating a targeting saccade, then pursuit was selectively enhanced for the visual motion of that target and suppressed for the other target. If the endpoint landed between the two targets, creating an averaging saccade, then post-saccadic smooth eye velocity also reflected a vector average of the two target motions. Saccades with latencies >200 msec were almost always targeting saccades. However, pursuit did not transition from vector-averaging to target-selecting until the occurrence of a saccade, even when saccade latencies were >300 msec. Thus, our data demonstrate that post-saccadic enhancement of pursuit is spatially selective and that noncued target selection for pursuit is time-locked to the occurrence of a saccade. This raises the possibility that the motor commands for saccades play a causal role, not only in enhancing visuomotor transmission for pursuit but also in choosing a target for pursuit.

    View details for Web of Science ID 000167422200029

    View details for PubMedID 11245691
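
The shift from vector averaging to target selection can be summarized as a change in weights on the two target motions. The sketch below is a hypothetical illustration, not a model fitted to the data: equal weights before the saccade, and a strongly biased weight after a targeting saccade.

```python
import numpy as np

def pursuit_velocity(v1, v2, w1=0.5, w2=0.5):
    """Smooth eye velocity as a weighted vector average of the two target motions
    (the selection weights here are illustrative, not fitted values)."""
    return (w1 * np.asarray(v1) + w2 * np.asarray(v2)) / (w1 + w2)

up, right = [0.0, 10.0], [10.0, 0.0]          # deg/s velocity vectors

pre  = pursuit_velocity(up, right)                  # pre-saccadic: unbiased average
post = pursuit_velocity(up, right, w1=0.9, w2=0.1)  # after a targeting saccade to "up"
```

Pre-saccadic velocity splits the difference between the two motions, while the post-saccadic weighting pulls the eye velocity toward the selected target's motion, capturing the selective enhancement described above.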

  • Linear and nonlinear contributions to orientation tuning of simple cells in the cat's striate cortex VISUAL NEUROSCIENCE Gardner, J. L., Anzai, A., Ohzawa, I., Freeman, R. D. 1999; 16 (6): 1115-1121

    Abstract

    Orientation selectivity is one of the most conspicuous receptive-field (RF) properties that distinguishes neurons in the striate cortex from those in the lateral geniculate nucleus (LGN). It has been suggested that orientation selectivity arises from an elongated array of feedforward LGN inputs (Hubel & Wiesel, 1962). Others have argued that cortical mechanisms underlie orientation selectivity (e.g. Sillito, 1975; Somers et al., 1995). However, isolation of each mechanism is experimentally difficult and no single study has analyzed both processes simultaneously to address their relative roles. An alternative approach, which we have employed in this study, is to examine the relative contributions of linear and nonlinear mechanisms in sharpening orientation tuning. Since the input stage of simple cells is remarkably linear, the nonlinear contribution can be attributed solely to cortical factors. Therefore, if the nonlinear component is substantial compared to the linear contribution, it can be concluded that cortical factors play a prominent role in sharpening orientation tuning. To obtain the linear contribution, we first measure RF profiles of simple cells in the cat's striate cortex using a binary m-sequence noise stimulus. Then, based on linear spatial summation of the RF profile, we obtain a predicted orientation-tuning curve, which represents the linear contribution. The nonlinear contribution is estimated as the difference between the predicted tuning curve and that measured with drifting sinusoidal gratings. We find that measured tuning curves are generally more sharply tuned for orientation than predicted curves, which indicates that the linear mechanism is not enough to account for the sharpness of orientation tuning. Therefore, cortical factors must play an important role in sharpening orientation tuning of simple cells. We also examine the relationship of RF shape (subregion aspect ratio) and size (subregion length and width) to orientation-tuning halfwidth. As expected, predicted tuning halfwidths are found to depend strongly on both subregion length and subregion aspect ratio. However, we find that measured tuning halfwidths show only a weak correlation with subregion aspect ratio, and no significant correlation with RF length and width. These results suggest that cortical mechanisms not only serve to sharpen orientation tuning, but also serve to make orientation tuning less dependent on the size and shape of the RF. This ensures that orientation is represented equally well regardless of RF size and shape.

    View details for Web of Science ID 000084409800011

    View details for PubMedID 10614591
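
The linear prediction step, computing an orientation-tuning curve from spatial summation over the RF profile, can be sketched directly. The code below is a hypothetical illustration: a Gabor stands in for a measured simple-cell RF, the linear response to each grating orientation is the RF's dot product with the grating (maximized over spatial phase), and squaring stands in as one candidate expansive nonlinearity. It shows the qualitative point that a nonlinearity sharpens tuning beyond the linear prediction; it is not the specific cortical mechanism the study identifies.

```python
import numpy as np

# A simple-cell RF modeled as a vertically elongated Gabor
# (all parameters hypothetical; units of degrees of visual angle).
x = y = np.linspace(-3, 3, 61)
X, Y = np.meshgrid(x, y)
sf = 1.0                                              # spatial frequency, cycles/deg
rf = np.exp(-(X**2 / 2.0 + Y**2 / 8.0)) * np.cos(2 * np.pi * sf * X)

def linear_response(theta_deg):
    """Predicted linear response to a grating: RF dot product, max over phase."""
    th = np.deg2rad(theta_deg)
    u = X * np.cos(th) + Y * np.sin(th)
    return np.hypot((rf * np.cos(2 * np.pi * sf * u)).sum(),
                    (rf * np.sin(2 * np.pi * sf * u)).sum())

thetas = np.arange(-90, 90)
linear = np.array([linear_response(t) for t in thetas])
linear /= linear.max()
nonlinear = linear**2    # one candidate cortical nonlinearity: an expansive output

def halfwidth(tuning):
    """Half-width at half-height (deg) of a tuning curve peaking at 0 deg."""
    return thetas[(thetas >= 0) & (tuning >= 0.5)].max()

hw_linear, hw_nonlinear = halfwidth(linear), halfwidth(nonlinear)
```

The squared curve has a narrower halfwidth than the linear prediction, mirroring the study's finding that measured tuning is sharper than linear spatial summation alone can explain.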