Honors & Awards


  • Investigator Award in Mathematical Modeling of Living Systems, Simons Foundation (2016)
  • McKnight Scholar Award, McKnight Endowment Fund for Neuroscience (2015)
  • Scholar Award in Human Cognition, James S. McDonnell Foundation (2014)
  • Outstanding Paper Award, Neural Information Processing Systems Foundation (2014)
  • Sloan Research Fellowship, Alfred P. Sloan Foundation (2013)
  • Terman Award, Stanford University (2012)
  • Career Award at the Scientific Interface, Burroughs Wellcome Fund (2009)
  • Swartz Fellow in Computational Neuroscience, Swartz Foundation (2004)

Professional Education


  • Ph.D., UC Berkeley, Theoretical Physics (2004)
  • M.A., UC Berkeley, Mathematics (2004)
  • M.Eng., MIT, Electrical Engineering and Computer Science (1998)
  • B.S., MIT, Mathematics (1998)
  • B.S., MIT, Physics (1998)
  • B.S., MIT, Electrical Engineering and Computer Science (1998)

Current Research and Scholarly Interests


Theoretical / computational neuroscience

All Publications


  • Inferring hidden structure in multilayered neural circuits. PLoS Computational Biology Maheswaranathan, N., Kastner, D. B., Baccus, S. A., Ganguli, S. 2018; 14 (8): e1006291

    Abstract

    A central challenge in sensory neuroscience involves understanding how neural circuits shape computations across cascaded cell layers. Here we attempt to reconstruct the response properties of experimentally unobserved neurons in the interior of a multilayered neural circuit, using cascaded linear-nonlinear (LN-LN) models. We combine non-smooth regularization with proximal consensus algorithms to overcome difficulties in fitting such models that arise from the high dimensionality of their parameter space. We apply this framework to retinal ganglion cell processing, learning LN-LN models of retinal circuitry consisting of thousands of parameters, using 40 minutes of responses to white noise. Our models demonstrate a 53% improvement in predicting ganglion cell spikes over classical linear-nonlinear (LN) models. Internal nonlinear subunits of the model match properties of retinal bipolar cells in both receptive field structure and number. Subunits have consistently high thresholds, suppressing all but a small fraction of inputs, leading to sparse activity patterns in which only one subunit drives ganglion cell spiking at any time. From the model's parameters, we predict that the removal of visual redundancies through stimulus decorrelation across space, a central tenet of efficient coding theory, originates primarily from bipolar cell synapses. Furthermore, the composite nonlinear computation performed by retinal circuitry corresponds to a Boolean OR function applied to bipolar cell feature detectors. Our methods are statistically and computationally efficient, enabling us to rapidly learn hierarchical nonlinear models as well as efficiently compute widely used descriptive statistics such as the spike-triggered average (STA) and covariance (STC) for high dimensional stimuli. This general computational framework may aid in extracting principles of nonlinear hierarchical sensory processing across diverse modalities from limited data.

    View details for DOI 10.1371/journal.pcbi.1006291

    View details for PubMedID 30138312
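
    The model class itself is compact enough to state in code. Below is a minimal sketch of a two-layer LN-LN cascade with rectifying, high-threshold subunits; the sizes, the softplus output nonlinearity, and all parameter values are illustrative assumptions, and the paper's proximal-consensus fitting procedure is not shown.

        import numpy as np

        def ln_ln_response(stimulus, subunit_filters, thresholds, w_out):
            # stimulus:        (T, D) stimulus frames
            # subunit_filters: (K, D) linear filters, one per model subunit
            # thresholds:      (K,) high thresholds yield sparse subunit activity
            # w_out:           (K,) pooling weights onto the ganglion cell
            drive = stimulus @ subunit_filters.T              # first linear stage
            subunits = np.maximum(drive - thresholds, 0.0)    # rectifying subunit nonlinearity
            pooled = subunits @ w_out                         # convergence onto the output cell
            return np.log1p(np.exp(pooled))                   # soft output nonlinearity -> firing rate

        # Hypothetical example: 3 subunits driven by a 40-dimensional white-noise stimulus
        rng = np.random.default_rng(0)
        rates = ln_ln_response(rng.standard_normal((1000, 40)),
                               rng.standard_normal((3, 40)),
                               np.full(3, 2.0), np.ones(3))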

  • Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nature Neuroscience Campbell, M. G., Ocko, S. A., Mallory, C. S., Low, I. I., Ganguli, S., Giocomo, L. M. 2018

    Abstract

    To guide navigation, the nervous system integrates multisensory self-motion and landmark information. We dissected how these inputs generate spatial representations by recording entorhinal grid, border and speed cells in mice navigating virtual environments. Manipulating the gain between the animal's locomotion and the visual scene revealed that border cells responded to landmark cues while grid and speed cells responded to combinations of locomotion, optic flow and landmark cues in a context-dependent manner, with optic flow becoming more influential when it was faster than expected. A network model explained these results by revealing a phase transition between two regimes in which grid cells remain coherent with or break away from the landmark reference frame. Moreover, during path-integration-based navigation, mice estimated their position following principles predicted by our recordings. Together, these results provide a theoretical framework for understanding how landmark and self-motion cues combine during navigation to generate spatial representations and guide behavior.

    View details for DOI 10.1038/s41593-018-0189-y

    View details for PubMedID 30038279

  • Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis. Neuron Williams, A. H., Kim, T. H., Wang, F., Vyas, S., Ryu, S. I., Shenoy, K. V., Schnitzer, M., Kolda, T. G., Ganguli, S. 2018

    Abstract

    Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate that a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain-machine interface learning.

    View details for DOI 10.1016/j.neuron.2018.05.015

    View details for PubMedID 29887338
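
    As a concrete sketch, the neuron, temporal, and trial factors described above can be recovered by a plain alternating-least-squares CP decomposition of a neurons × time × trials data tensor. This is a generic implementation, not the authors' code; the rank and iteration count are arbitrary choices.

        import numpy as np

        def unfold(X, mode):
            # matricize X along `mode` (C-ordering of the remaining modes)
            return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

        def khatri_rao(U, V):
            # column-wise Kronecker product of factor matrices U (J, R) and V (K, R)
            return np.einsum('jr,kr->jkr', U, V).reshape(-1, U.shape[1])

        def tca(X, rank, n_iter=100, seed=0):
            # alternating least squares for X ~ sum_r a_r (x) b_r (x) c_r
            rng = np.random.default_rng(seed)
            factors = [rng.standard_normal((dim, rank)) for dim in X.shape]
            for _ in range(n_iter):
                for mode in range(3):
                    others = [factors[m] for m in range(3) if m != mode]
                    kr = khatri_rao(others[0], others[1])
                    gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
                    factors[mode] = unfold(X, mode) @ kr @ np.linalg.pinv(gram)
            return factors  # [neuron_factors, temporal_factors, trial_factors]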

  • Statistical mechanics of low-rank tensor decomposition Neural Information Processing Systems (NIPS) Kadmon, J., Ganguli, S. 2018
  • Task-Driven Convolutional Recurrent Models of the Visual System Neural Information Processing Systems (NIPS) Nayebi, A., Bear, D., Kubilius, J., Kar, K., Ganguli, S., DiCarlo, J., Sussillo, D., Yamins, D. 2018
  • The emergence of spectral universality in deep networks Artificial Intelligence and Statistics (AISTATS) Pennington, J., Schoenholz, S., Ganguli, S. 2018
  • SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural Computation Zenke, F., Ganguli, S. 2018: 1–28

    Abstract

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.

    View details for DOI 10.1162/neco_a_01086

    View details for PubMedID 29652587
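
    The core idea, a smooth surrogate standing in for the derivative of the hard spike nonlinearity, fits in a few lines. The sketch below combines a fast-sigmoid surrogate of the kind used in the paper with a toy three-factor update for a single integrate-and-fire unit; the constants, traces, and error signal are illustrative assumptions, not the published implementation.

        import numpy as np

        def surrogate_grad(u, theta=1.0, beta=10.0):
            # fast-sigmoid surrogate for d(spike)/du at membrane potential u
            return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

        rng = np.random.default_rng(1)
        T, n_in = 500, 20
        spikes_in = (rng.random((T, n_in)) < 0.02).astype(float)  # input spike trains
        target = (rng.random(T) < 0.01).astype(float)             # desired output spikes
        w = np.zeros(n_in)
        u, pre_trace, elig = 0.0, np.zeros(n_in), np.zeros(n_in)
        lr, tau_u, tau_pre, tau_e = 1e-3, 10.0, 5.0, 10.0
        for t in range(T):
            u += (-u + spikes_in[t] @ w) / tau_u                   # leaky integration
            out = float(u >= 1.0)
            if out:
                u = 0.0                                            # reset after a spike
            pre_trace += -pre_trace / tau_pre + spikes_in[t]       # presynaptic factor
            elig += -elig / tau_e + surrogate_grad(u) * pre_trace  # eligibility trace
            w += lr * (target[t] - out) * elig                     # output error = third factor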

  • The emergence of multiple retinal cell types through efficient coding of natural movies Neural Information Processing Systems (NIPS) Deny, S., Lindsey, J., Ganguli, S., Ocko, S. 2018
  • An International Laboratory for Systems and Computational Neuroscience Neuron Abbott, L. F., Angelaki, D. E., Carandini, M., Churchland, A. K., Dan, Y., Dayan, P., Deneve, S., Fiete, I., Ganguli, S., Harris, K. D., Hausser, M., Hofer, S., Latham, P. E., Mainen, Z. F., Mrsic-Flogel, T., Paninski, L., Pillow, J. W., Pouget, A., Svoboda, K., Witten, I. B., Zador, A. M., Intl Brain Lab 2017; 96 (6): 1213–18

    Abstract

    The neural basis of decision-making has been elusive and involves the coordinated activity of multiple brain structures. This NeuroView, by the International Brain Laboratory (IBL), discusses their efforts to develop a standardized mouse decision-making behavior, to make coordinated measurements of neural activity across the mouse brain, and to use theory and analyses to uncover the neural computations that support decision-making.

    View details for DOI 10.1016/j.neuron.2017.12.013

    View details for Web of Science ID 000418900200005

    View details for PubMedID 29268092

    View details for PubMedCentralID PMC5752703

  • Cell types for our sense of location: where we are and where we are going Nature Neuroscience Hardcastle, K., Ganguli, S., Giocomo, L. M. 2017; 20 (11): 1474–82

    Abstract

    Technological advances in profiling cells along genetic, anatomical and physiological axes have fomented interest in identifying all neuronal cell types. This goal nears completion in specialized circuits such as the retina, while remaining more elusive in higher order cortical regions. We propose that this differential success of cell type identification may not simply reflect technological gaps in co-registering genetic, anatomical and physiological features in the cortex. Rather, we hypothesize it reflects evolutionarily driven differences in the computational principles governing specialized circuits versus more general-purpose learning machines. In this framework, we consider the question of cell types in medial entorhinal cortex (MEC), a region likely to be involved in memory and navigation. While MEC contains subsets of identifiable functionally defined cell types, recent work employing unbiased statistical methods and more diverse tasks reveals unsuspected heterogeneity and adaptivity in MEC firing patterns. This suggests MEC may operate more as a generalist circuit, obeying computational design principles resembling those governing other higher cortical regions.

    View details for DOI 10.1038/nn.4654

    View details for Web of Science ID 000413916800006

    View details for PubMedID 29073649

  • A Multiplexed, Heterogeneous, and Adaptive Code for Navigation in Medial Entorhinal Cortex Neuron Hardcastle, K., Maheswaranathan, N., Ganguli, S., Giocomo, L. M. 2017; 94 (2): 375–?

    Abstract

    Medial entorhinal grid cells display strikingly symmetric spatial firing patterns. The clarity of these patterns motivated the use of specific activity pattern shapes to classify entorhinal cell types. While this approach successfully revealed cells that encode boundaries, head direction, and running speed, it left a majority of cells unclassified, and its pre-defined nature may have missed unconventional, yet important coding properties. Here, we apply an unbiased statistical approach to search for cells that encode navigationally relevant variables. This approach successfully classifies the majority of entorhinal cells and reveals unsuspected entorhinal coding principles. First, we find a high degree of mixed selectivity and heterogeneity in superficial entorhinal neurons. Second, we discover a dynamic and remarkably adaptive code for space that enables entorhinal cells to rapidly encode navigational information accurately at high running speeds. Combined, these observations advance our current understanding of the mechanistic origins and functional implications of the entorhinal code for navigation.

    View details for DOI 10.1016/j.neuron.2017.03.025

    View details for Web of Science ID 000399451400020

    View details for PubMedID 28392071
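
    The "unbiased statistical approach" here is a model-selection procedure over linear-nonlinear Poisson encoding models. As a minimal sketch (a generic fit, not the paper's code), one such model can be estimated by gradient ascent on the Poisson log-likelihood given a hypothetical design matrix of navigational variables:

        import numpy as np

        def fit_ln_poisson(X, spikes, n_iter=2000, lr=1e-2):
            # LN Poisson encoding model: rate = exp(X @ w), fit by gradient ascent.
            # X:      (T, D) design matrix (e.g. binned position, head direction,
            #         and speed regressors, assumed standardized)
            # spikes: (T,) spike counts per time bin
            w = np.zeros(X.shape[1])
            for _ in range(n_iter):
                rate = np.exp(X @ w)
                w += lr * X.T @ (spikes - rate) / len(spikes)  # d(log-likelihood)/dw
            return w

    Candidate models built from different subsets of variables can then be ranked by cross-validated log-likelihood, which is the spirit of the classification described above.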

  • The temporal paradox of Hebbian learning and homeostatic plasticity. Current Opinion in Neurobiology Zenke, F., Gerstner, W., Ganguli, S. 2017; 43: 166–176

    Abstract

    Hebbian plasticity, a synaptic mechanism which detects and amplifies co-activity between neurons, is considered a key ingredient underlying learning and memory in the brain. However, Hebbian plasticity alone is unstable, leading to runaway neuronal activity, and therefore requires stabilization by additional compensatory processes. Traditionally, a diversity of homeostatic plasticity phenomena found in neural circuits is thought to play this role. However, recent modelling work suggests that the slow evolution of homeostatic plasticity, as observed in experiments, is insufficient to prevent instabilities originating from Hebbian plasticity. To remedy this situation, we suggest that homeostatic plasticity is complemented by additional rapid compensatory processes, which rapidly stabilize neuronal activity on short timescales.

    View details for DOI 10.1016/j.conb.2017.03.015

    View details for PubMedID 28431369
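
    The instability argument can be made concrete in a toy simulation: a pure Hebbian rule grows without bound, while adding a fast multiplicative constraint (an Oja-style term here, standing in for a generic rapid compensatory process) keeps the weights bounded. The sizes and learning rate are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal((5000, 10))   # presynaptic activity samples
        w_hebb = np.full(10, 0.1)
        w_stab = np.full(10, 0.1)
        eta = 0.01
        for xt in x:
            y = w_hebb @ xt
            w_hebb += eta * y * xt                 # pure Hebb: runaway positive feedback
            y = w_stab @ xt
            w_stab += eta * y * (xt - y * w_stab)  # rapid multiplicative stabilization
        print(np.linalg.norm(w_hebb), np.linalg.norm(w_stab))  # enormous vs. ~1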

  • A saturation hypothesis to explain both enhanced and impaired learning with enhanced plasticity. eLife Nguyen-Vu, T. B., Zhao, G. Q., Lahiri, S., Kimpo, R. R., Lee, H., Ganguli, S., Shatz, C. J., Raymond, J. L. 2017; 6

    Abstract

    Across many studies, animals with enhanced synaptic plasticity exhibit either enhanced or impaired learning, raising a conceptual puzzle: how can enhanced plasticity yield opposite learning outcomes? Here we show that recent history of experience can determine whether mice with enhanced plasticity exhibit enhanced or impaired learning in response to the same training. Mice with enhanced cerebellar LTD, due to double knockout (DKO) of MHCI H2-K(b)/H2-D(b) (K(b)D(b-/-)), exhibited oculomotor learning deficits. However, the same mice exhibited enhanced learning after appropriate pre-training. Theoretical analysis revealed that synapses with history-dependent learning rules could recapitulate the data, and suggested that saturation may be a key factor limiting the ability of enhanced plasticity to enhance learning. Moreover, optogenetic stimulation designed to saturate LTD produced the same impairment in WT as observed in DKO mice. Overall, our results suggest that recent history of activity and the threshold for synaptic plasticity conspire to effect divergent learning outcomes.

    View details for DOI 10.7554/eLife.20147

    View details for PubMedID 28234229

  • On the expressive power of deep neural networks International Conference on Machine Learning (ICML) Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., Sohl-Dickstein, J. 2017
  • Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice Neural Information Processing Systems (NIPS) Pennington, J., Schoenholz, S., Ganguli, S. 2017
  • Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net Neural Information Processing Systems (NIPS) Ke, R., Goyal, A., Ganguli, S., Bengio, Y. 2017
  • Continual Learning with Intelligent Synapses International Conference on Machine Learning (ICML) Zenke, F., Poole, B., Ganguli, S. 2017
  • Deep information propagation International Conference on Learning Representations (ICLR) Schoenholz, S., Gilmer, J., Ganguli, S., Sohl-Dickstein, J. 2017
  • Social Control of Hypothalamus-Mediated Male Aggression. Neuron Yang, T., Yang, C. F., Chizari, M. D., Maheswaranathan, N., Burke, K. J., Borius, M., Inoue, S., Chiang, M. C., Bender, K. J., Ganguli, S., Shah, N. M. 2017; 95 (4): 955–70.e4

    Abstract

    How environmental and physiological signals interact to influence neural circuits underlying developmentally programmed social interactions such as male territorial aggression is poorly understood. We have tested the influence of sensory cues, social context, and sex hormones on progesterone receptor (PR)-expressing neurons in the ventromedial hypothalamus (VMH) that are critical for male territorial aggression. We find that these neurons can drive aggressive displays in solitary males independent of pheromonal input, gonadal hormones, opponents, or social context. By contrast, these neurons cannot elicit aggression in socially housed males that intrude in another male's territory unless their pheromone-sensing is disabled. This modulation of aggression cannot be accounted for by linear integration of environmental and physiological signals. Together, our studies suggest that fundamentally non-linear computations enable social context to exert a dominant influence on developmentally hard-wired hypothalamus-mediated male territorial aggression.

    View details for DOI 10.1016/j.neuron.2017.06.046

    View details for PubMedID 28757304

    View details for PubMedCentralID PMC5648542

  • Statistical Mechanics of Optimal Convex Inference in High Dimensions Physical Review X Advani, M., Ganguli, S. 2016; 6 (3)
  • Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression. Journal of Neuroscience Leong, J. C., Esch, J. J., Poole, B., Ganguli, S., Clandinin, T. R. 2016; 36 (31): 8078–8092

    Abstract

    Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila, direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains unknown. We find that the receptive fields of both T4 and T5 exhibit spatiotemporally offset light-preferring and dark-preferring subfields, each obliquely oriented in spacetime. In a linear-nonlinear modeling framework, the spatiotemporal organization of the T5 receptive field predicts the activity of T5 in response to motion stimuli. These findings demonstrate that direction selectivity emerges from the enhancement of responses to motion in the preferred direction, as well as the suppression of responses to motion in the null direction. Thus, remarkably, T5 incorporates the essential algorithmic strategies used by the Hassenstein-Reichardt correlator and the Barlow-Levick detector. Our model for T5 also provides an algorithmic explanation for the selectivity of T5 for moving dark edges: our model captures all two- and three-point spacetime correlations relevant to motion in this stimulus class. More broadly, our findings reveal the contribution of input pathway visual processing, specifically center-surround, temporally biphasic receptive fields, to the generation of direction selectivity in T5. As the spatiotemporal receptive field of T5 in Drosophila is common to the simple cell in vertebrate visual cortex, our stimulus-response model of T5 will inform efforts in an experimentally tractable context to identify more detailed, mechanistic models of a prevalent computation.

    Feature selective neurons respond preferentially to astonishingly specific stimuli, providing the neurobiological basis for perception. Direction selectivity serves as a paradigmatic model of feature selectivity that has been examined in many species. While insect elementary motion detectors have served as premier experimental models of direction selectivity for 60 years, the central question of their underlying algorithm remains unanswered. Using in vivo two-photon imaging of intracellular calcium signals, we measure the receptive fields of the first direction-selective cells in the Drosophila visual system, and define the algorithm used to compute the direction of motion. Computational modeling of these receptive fields predicts responses to motion and reveals how this circuit efficiently captures many useful correlations intrinsic to moving dark edges.

    View details for DOI 10.1523/JNEUROSCI.1272-16.2016

    View details for PubMedID 27488629

    View details for PubMedCentralID PMC4971360
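
    For readers unfamiliar with the first of the two classic models named above, a minimal Hassenstein-Reichardt correlator is sketched below: each of two neighboring inputs is delayed (low-pass filtered) and multiplied against the undelayed signal from the other point, and the opponent difference flips sign with motion direction. The stimulus and time constant are illustrative.

        import numpy as np

        def lowpass(s, tau=5.0):
            # first-order low-pass filter, acting as the delay arm
            out, y = np.empty_like(s), 0.0
            for i, v in enumerate(s):
                y += (v - y) / tau
                out[i] = y
            return out

        def hrc(left, right):
            # opponent correlator: delayed-left x right minus left x delayed-right
            return lowpass(left) * right - left * lowpass(right)

        t = np.arange(200)
        left = (t > 80).astype(float)    # a rightward-moving edge reaches the
        right = (t > 90).astype(float)   # left point before the right point
        print(hrc(left, right).sum())    # positive: preferred direction
        print(hrc(right, left).sum())    # negative: null direction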

  • An equivalence between high dimensional Bayes optimal inference and M-estimation Neural Information Processing Systems (NIPS) Advani, M., Ganguli, S. 2016
  • Deep Learning Models of the Retinal Response to Natural Scenes. Advances in Neural Information Processing Systems McIntosh, L. T., Maheswaranathan, N., Nayebi, A., Ganguli, S., Baccus, S. A. 2016; 29: 1369–77

    Abstract

    A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). An examination of the learned CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.

    View details for PubMedID 28729779

    View details for PubMedCentralID PMC5515384
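
    A minimal sketch of the model family, a few convolutional layers mapping a short spatiotemporal stimulus clip to nonnegative firing rates, might look as follows in PyTorch; the layer sizes are illustrative guesses rather than the paper's configuration, and training against a Poisson log-likelihood is omitted.

        import torch
        import torch.nn as nn

        class RetinaCNN(nn.Module):
            def __init__(self, n_cells=5, history=40):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(history, 8, kernel_size=15), nn.Softplus(),
                    nn.Conv2d(8, 8, kernel_size=9), nn.Softplus(),
                )
                self.readout = nn.Sequential(
                    nn.Flatten(), nn.LazyLinear(n_cells), nn.Softplus(),
                )

            def forward(self, movie):  # movie: (batch, history, H, W) stimulus clips
                return self.readout(self.features(movie))  # nonnegative firing rates

        model = RetinaCNN()
        rates = model(torch.randn(2, 40, 50, 50))  # (2, 5) predicted firing rates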

  • Exponential expressivity in deep neural networks through transient chaos Neural Information Processing Systems (NIPS) Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J., Ganguli, S. 2016: 3360–3368
  • Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences Frontiers in Computational Neuroscience Bouchard, K. E., Ganguli, S., Brainard, M. S. 2015; 9

    View details for DOI 10.3389/fncom.2015.00092

    View details for Web of Science ID 000360179700001

    View details for PubMedID 26257637

  • On simplicity and complexity in the brave new world of large-scale neuroscience Current Opinion in Neurobiology Gao, P., Ganguli, S. 2015; 32: 148–155
  • Environmental Boundaries as an Error Correction Mechanism for Grid Cells Neuron Hardcastle, K., Ganguli, S., Giocomo, L. M. 2015; 86 (3): 827–839

    Abstract

    Medial entorhinal grid cells fire in periodic, hexagonally patterned locations and are proposed to support path-integration-based navigation. The recursive nature of path integration results in accumulating error and, without a corrective mechanism, a breakdown in the calculation of location. The observed long-term stability of grid patterns necessitates that the system either performs highly precise internal path integration or implements an external landmark-based error correction mechanism. To distinguish these possibilities, we examined grid cells in behaving rodents as they made long trajectories across an open arena. We found that error accumulates relative to time and distance traveled since the animal last encountered a boundary. This error reflects coherent drift in the grid pattern. Further, interactions with boundaries yield direction-dependent error correction, suggesting that border cells serve as a neural substrate for error correction. These observations, combined with simulations of an attractor network grid cell model, demonstrate that landmarks are crucial to grid stability.

    View details for DOI 10.1016/j.neuron.2015.03.039

    View details for Web of Science ID 000354069800021

    View details for PubMedID 25892299

  • Evidence for a causal inverse model in an avian cortico-basal ganglia circuit Proceedings of the National Academy of Sciences of the United States of America Giret, N., Kornfeld, J., Ganguli, S., Hahnloser, R. H. 2014; 111 (16): 6063–6068

    Abstract

    Learning by imitation is fundamental to both communication and social behavior and requires the conversion of complex, nonlinear sensory codes for perception into similarly complex motor codes for generating action. To understand the neural substrates underlying this conversion, we study sensorimotor transformations in songbird cortical output neurons of a basal-ganglia pathway involved in song learning. Despite the complexity of sensory and motor codes, we find a simple, temporally specific, causal correspondence between them. Sensory neural responses to song playback mirror motor-related activity recorded during singing, with a temporal offset of roughly 40 ms, in agreement with short feedback loop delays estimated using electrical and auditory stimulation. Such matching of mirroring offsets and loop delays is consistent with a recent Hebbian theory of motor learning and suggests that cortico-basal ganglia pathways could support motor control via causal inverse models that can invert the rich correspondence between motor exploration and sensory feedback.

    View details for DOI 10.1073/pnas.1317087111

    View details for Web of Science ID 000334694000074

    View details for PubMedID 24711417

  • Identifying and attacking the saddle point problem in high-dimensional non-convex optimization Neural Information Processing Systems (NIPS) Dauphin, Y., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S., Bengio, Y. 2014
  • Exact solutions to the nonlinear dynamics of learning in deep neural networks International Conference on Learning Representations (ICLR) Saxe, A., McClelland, J., Ganguli, S. 2014
  • Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods International Conference on Machine Learning (ICML) Sohl-Dickstein, J., Poole, B., Ganguli, S. 2014
  • Investigating the role of firing-rate normalization and dimensionality reduction in brain-machine interface robustness. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) Kao, J. C., Nuyujukian, P., Stavisky, S., Ryu, S. I., Ganguli, S., Shenoy, K. V. 2013; 2013: 293–298

    Abstract

    The intraday robustness of brain-machine interfaces (BMIs) is important to their clinical viability. In particular, BMIs must be robust to intraday perturbations in neuron firing rates, which may arise from several factors including recording loss and external noise. Using a state-of-the-art decode algorithm, the Recalibrated Feedback Intention Trained Kalman filter (ReFIT-KF) [1], we introduce two novel modifications: (1) a normalization of the firing rates, and (2) a reduction of the dimensionality of the data via principal component analysis (PCA). We demonstrate in online studies that a ReFIT-KF equipped with normalization and PCA (NPC-ReFIT-KF) (1) achieves comparable performance to a standard ReFIT-KF when at least 60% of the neural variance is captured, and (2) is more robust to the undetected loss of channels. We present intuition as to how both modifications may increase the robustness of BMIs, and investigate the contribution of each modification to robustness. These advances, which lead to a decoder achieving state-of-the-art performance with improved robustness, are important for the clinical viability of BMI systems.

    View details for DOI 10.1109/EMBC.2013.6609495

    View details for PubMedID 24109682
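
    A minimal sketch of the two preprocessing modifications, per-channel firing-rate normalization followed by projection onto enough principal components to capture a target fraction of the neural variance (the abstract's 60% figure), is below; the Kalman filter decoder itself is not shown.

        import numpy as np

        def normalize_and_project(rates, var_frac=0.60):
            # rates: (T, n_channels) binned firing rates
            z = (rates - rates.mean(0)) / (rates.std(0) + 1e-6)  # per-channel normalization
            _, s, Vt = np.linalg.svd(z, full_matrices=False)
            explained = np.cumsum(s**2) / np.sum(s**2)
            k = int(np.searchsorted(explained, var_frac)) + 1    # smallest k reaching var_frac
            return z @ Vt[:k].T                                  # low-d input to the decoder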

  • A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models Frontiers in Neural Circuits Hanuschkin, A., Ganguli, S., Hahnloser, R. H. 2013; 7

    Abstract

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to the bird's own song (BOS) stimuli.

    View details for DOI 10.3389/fncir.2013.00106

    View details for Web of Science ID 000320922000001

    View details for PubMedID 23801941

  • Statistical mechanics of complex neural systems and high dimensional data Journal of Statistical Mechanics: Theory and Experiment Advani, M., Lahiri, S., Ganguli, S. 2013
  • A memory frontier for complex synapses Neural Information Processing Systems (NIPS) Lahiri, S., Ganguli, S. 2013
  • Learning hierarchical category structure in deep neural networks Proceedings of the Cognitive Science Society Saxe, A., McClelland, J., Ganguli, S. 2013: 1271–1276
  • Vocal learning with inverse models Principles of Neural Coding Hahnloser, R., Ganguli, S. CRC Press. 2013
  • Spatial Information Outflow from the Hippocampal Circuit: Distributed Spatial Coding and Phase Precession in the Subiculum Journal of Neuroscience Kim, S. M., Ganguli, S., Frank, L. M. 2012; 32 (34): 11539–11558

    Abstract

    Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.

    View details for DOI 10.1523/JNEUROSCI.5942-11.2012

    View details for Web of Science ID 000308140500004

    View details for PubMedID 22915100

  • Compressed Sensing, Sparsity, and Dimensionality in Neuronal Information Processing and Data Analysis Annual Review of Neuroscience Ganguli, S., Sompolinsky, H. 2012; 35: 485–508

    Abstract

    The curse of dimensionality poses severe challenges to both technical and conceptual progress in neuroscience. In particular, it plagues our ability to acquire, process, and model high-dimensional data sets. Moreover, neural systems must cope with the challenge of processing data in high dimensions to learn and operate successfully within a complex world. We review recent mathematical advances that provide ways to combat dimensionality in specific situations. These advances shed light on two dual questions in neuroscience. First, how can we as neuroscientists rapidly acquire high-dimensional data from the brain and subsequently extract meaningful models from limited amounts of these data? And second, how do brains themselves process information in their intrinsically high-dimensional patterns of neural activity as well as learn meaningful, generalizable models of the external world from limited experience?

    View details for DOI 10.1146/annurev-neuro-062111-150410

    View details for Web of Science ID 000307960400024

    View details for PubMedID 22483042
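
    The mathematical core of the compressed sensing results reviewed here is l1-regularized sparse recovery. A minimal sketch using iterative soft-thresholding (ISTA), with arbitrary problem sizes, recovers a sparse vector from far fewer random measurements than its ambient dimension:

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=500):
            # minimize 0.5*||y - A x||^2 + lam*||x||_1
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x + A.T @ (y - A @ x) / L      # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
            return x

        # 30 random measurements of a 5-sparse signal in 100 dimensions
        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 100)) / np.sqrt(30)
        x_true = np.zeros(100)
        x_true[rng.choice(100, 5, replace=False)] = 1.0
        x_hat = ista(A, A @ x_true)                # close to x_true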

  • Short-term memory in neuronal networks through dynamical compressed sensing Neural Information Processing Systems (NIPS) Ganguli, S., Sompolinsky, H. 2010
  • Feedforward to the Past: The Relation between Neuronal Connectivity, Amplification, and Short-Term Memory Neuron Ganguli, S., Latham, P. 2009; 61 (4): 499–501

    Abstract

    Two studies in this issue of Neuron challenge widely held assumptions about the role of positive feedback in recurrent neuronal networks. Goldman shows that such feedback is not necessary for memory maintenance in a neural integrator, and Murphy and Miller show that it is not necessary for amplification of orientation patterns in V1. Both suggest that seemingly recurrent networks can be feedforward in disguise.

    View details for DOI 10.1016/j.neuron.2009.02.006

    View details for Web of Science ID 000263816300004

    View details for PubMedID 19249270

  • Memory traces in dynamical systems Proceedings of the National Academy of Sciences of the United States of America Ganguli, S., Huh, D., Sompolinsky, H. 2008; 105 (48): 18970–18975

    Abstract

    To perform nontrivial, real-time computations on a sensory input stream, biological systems must retain a short-term memory trace of their recent inputs. It has been proposed that generic high-dimensional dynamical systems could retain a memory trace for past inputs in their current state. This raises important questions about the fundamental limits of such memory traces and the properties required of dynamical systems to achieve these limits. We address these issues by applying Fisher information theory to dynamical systems driven by time-dependent signals corrupted by noise. We introduce the Fisher Memory Curve (FMC) as a measure of the signal-to-noise ratio (SNR) embedded in the dynamical state relative to the input SNR. The integrated FMC indicates the total memory capacity. We apply this theory to linear neuronal networks and show that the capacity of networks with normal connectivity matrices is exactly 1 and that of any network of N neurons is, at most, N. A nonnormal network achieving this bound is subject to stringent design constraints: It must have a hidden feedforward architecture that superlinearly amplifies its input for a time of order N, and the input connectivity must optimally match this architecture. The memory capacity of networks subject to saturating nonlinearities is further limited, and cannot exceed √N. This limit can be realized by feedforward structures with divergent fan out that distributes the signal across neurons, thereby avoiding saturation. We illustrate the generality of the theory by showing that memory in fluid systems can be sustained by transient nonnormal amplification due to convective instability or the onset of turbulence.

    View details for DOI 10.1073/pnas.0804451105

    View details for Web of Science ID 000261489100065

    View details for PubMedID 19020074
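
    The quantities defined in the abstract are straightforward to evaluate numerically for any linear network x(t+1) = W x(t) + v s(t) + noise. The sketch below computes the Fisher Memory Curve for a hypothetical feedforward delay line, a nonnormal network of the kind shown above to approach the capacity bound; the size and decay factor are arbitrary.

        import numpy as np

        def fisher_memory_curve(W, v, k_max=50, horizon=500):
            # C_n = sum_k W^k (W^k)^T for unit isotropic injected noise;
            # J(k) = (W^k v)^T C_n^{-1} (W^k v)
            n = W.shape[0]
            C, P = np.zeros((n, n)), np.eye(n)
            for _ in range(horizon):               # accumulate the noise covariance
                C += P @ P.T
                P = W @ P
            Cinv = np.linalg.inv(C)
            J, u = [], v.copy()
            for _ in range(k_max):
                J.append(u @ Cinv @ u)
                u = W @ u
            return np.array(J)

        n = 20
        W = 0.95 * np.eye(n, k=-1)                 # hidden feedforward chain (delay line)
        v = np.zeros(n)
        v[0] = 1.0                                 # input feeds the head of the chain
        print(fisher_memory_curve(W, v).sum())     # total capacity, at most n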

  • One-dimensional dynamics of attention and decision making in LIP Neuron Ganguli, S., Bisley, J. W., Roitman, J. D., Shadlen, M. N., Goldberg, M. E., Miller, K. D. 2008; 58 (1): 15–25

    Abstract

    Where we allocate our visual spatial attention depends upon a continual competition between internally generated goals and external distractions. Recently it was shown that single neurons in the macaque lateral intraparietal area (LIP) can predict the amount of time a distractor can shift the locus of spatial attention away from a goal. We propose that this remarkable dynamical correspondence between single neurons and attention can be explained by a network model in which generically high-dimensional firing-rate vectors rapidly decay to a single mode. We find direct experimental evidence for this model, not only in the original attentional task, but also in a very different task involving perceptual decision making. These results confirm a theoretical prediction that slowly varying activity patterns are proportional to spontaneous activity, pose constraints on models of persistent activity, and suggest a network mechanism for the emergence of robust behavioral timing from heterogeneous neuronal populations.

    View details for DOI 10.1016/j.neuron.2008.01.038

    View details for Web of Science ID 000254946200006

    View details for PubMedID 18400159

  • Function constrains network architecture and dynamics: A case study on the yeast cell cycle Boolean network Physical Review E Lau, K., Ganguli, S., Tang, C. 2007; 75 (5)

    Abstract

    We develop a general method to explore how the function performed by a biological network can constrain both its structural and dynamical network properties. This approach is orthogonal to prior studies which examine the functional consequences of a given structural feature, for example a scale free architecture. A key step is to construct an algorithm that allows us to efficiently sample from a maximum entropy distribution on the space of Boolean dynamical networks constrained to perform a specific function, or cascade of gene expression. Such a distribution can act as a "functional null model" to test the significance of any given network feature, and can aid in revealing underlying evolutionary selection pressures on various network properties. Although our methods are general, we illustrate them in an analysis of the yeast cell cycle cascade. This analysis uncovers strong constraints on the architecture of the cell cycle regulatory network as well as significant selection pressures on this network to maintain ordered and convergent dynamics, possibly at the expense of sacrificing robustness to structural perturbations.

    View details for DOI 10.1103/PhysRevE.75.051907

    View details for Web of Science ID 000246890100094

    View details for PubMedID 17677098
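
    As a sketch of the underlying dynamics, one synchronous update of a threshold Boolean network (the update scheme used in threshold models of the yeast cell cycle) is below; sampling the function-constrained maximum entropy ensemble would wrap such updates in an MCMC loop over network wirings, which is not shown. The example wiring is random and hypothetical.

        import numpy as np

        def boolean_step(state, W, theta=0.0):
            # gene i turns on if its summed regulatory input exceeds theta,
            # turns off if below theta, and holds its state at exactly theta
            drive = W @ state
            return np.where(drive > theta, 1, np.where(drive < theta, 0, state))

        def trajectory(state, W, n_steps=20):
            states = [np.array(state)]
            for _ in range(n_steps):
                states.append(boolean_step(states[-1], W))
            return np.array(states)

        rng = np.random.default_rng(0)
        W = rng.choice([-1, 0, 1], size=(4, 4))          # random signed wiring
        print(trajectory(np.array([1, 0, 0, 0]), W, 5))  # cascade of expression states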

  • E10 Orbifolds Journal of High Energy Physics Brown, J., Ganguli, S., Ganor, O., Helfgott, C. 2005; 06 (057)
  • Twisted six dimensional gauge theories on tori, matrix models, and integrable systems Journal of High Energy Physics Ganguli, S., Ganor, O. J., Gill, J. 2004
  • Holographic protection of chronology in universes of the Gödel type Physical Review D Boyda, E. K., Ganguli, S., Horava, P., Varadarajan, U. 2003; 67 (10)