
Events at CSLI

4th CSLI Workshop on Logic, Rationality & Intelligent Interaction

The purpose of this ongoing initiative is to bring together researchers interested in the contacts between logic, philosophy, mathematics, computer science, linguistics, cognitive science, and economics, and to discuss dimensions of those contacts that are emerging today, such as knowledge, information, computation, and interactive agency.

The workshop continues a tradition of discussion-oriented outreach meetings aimed at fostering community across disciplines and universities, including senior and junior participants.

For detailed information, please see: http://www-logic.stanford.edu/events/CSLI2015/CSLI2015.xhtml

5th CSLI Workshop on Logic, Rationality & Intelligent Interaction

This event continues a long-standing tradition at Stanford of annual workshops in logic, broadly conceived, aimed at fostering discussion across disciplines and universities, with the added goal of involving both junior and senior participants. The content of the workshop is drawn from the disciplines of logic, philosophy, mathematics, computer science, cognitive science, linguistics and economics, with an emphasis on exploring interdisciplinary contacts.

    Logic and Philosophy: Rachael Briggs (Stanford), Hanti Lin (UC Davis), John Perry (Stanford), Jennifer Wang (Stanford)

Cognition & Language Workshop | Janet Pierrehumbert

Janet Pierrehumbert - Northwestern; New Zealand Institute of Language, Brain and Behavior

Regularization in Language Learning and Change

Language systems are highly structured. Yet language learners still encounter inconsistent input. Variation is found both across speakers, and within the productions of individual speakers. If learners reproduced all the variation in the input they received, language systems would not be so highly structured. Instead, all variation across speakers in a community would eventually be picked up and reproduced by every individual in the community. Explaining the empirically observed level of regularity in languages requires a theory of regularization as a cognitive process.

This talk will present experimental and computational results on regularization. The experiments are artificial language learning experiments using a novel game-like computer interface. The model introduces a novel mathematical treatment of the nonlinear decision process linking input to output in language learning. Together, the results indicate that:

  • The nonlinearity involved in regularization is sufficiently weak that it can be detected at the micro level (the level of individual experiments) only with very good statistical power (a toy sketch of such a nonlinearity follows this list).
  • Individual differences in the degree and direction of regularization are considerable.
  • Individual differences, as they interact with social connections, play a major role in determining which patterns become entrenched as linguistic norms and which don't in the course of language change.
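
The abstract does not give the mathematical form of the nonlinearity. As a purely illustrative sketch (not the model from the talk), the following Python snippet uses an exponential-weighting rule familiar from the iterated-learning literature: an exponent a = 1 reproduces input frequencies (probability matching), a > 1 regularizes, and a < 1 anti-regularizes. Letting a vary across simulated learners gives a crude picture of how individual differences could steer which variant becomes entrenched; the distribution of a below is invented.

    import numpy as np

    def learner_output(p_in, a):
        """Map an input variant frequency p_in to an output frequency.
        a = 1: probability matching; a > 1: regularization;
        a < 1: anti-regularization. This rule is a common modeling
        choice, not necessarily the one used in the talk."""
        return p_in**a / (p_in**a + (1.0 - p_in)**a)

    rng = np.random.default_rng(0)

    # A chain of learners with individually varying exponents, as a
    # crude picture of individual differences in transmission.
    p = 0.6  # initial frequency of the majority variant
    for generation in range(10):
        a = max(rng.normal(loc=1.2, scale=0.3), 0.05)  # invented spread
        p = learner_output(p, a)
        print(f"generation {generation}: majority-variant frequency = {p:.3f}")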

The Cognition & Language Workshop is a Geballe Workshop sponsored by the Stanford Humanities Center. We gratefully acknowledge the Humanities Center's support, and additional support from the Center for the Study of Language and Information.

Cognition & Language Workshop | Kristen Syrett | Thurs Apr 9, 4pm

Challenges and Support for Verb Learning

Abstract:

Young children the world over appear to expect that a verb presented in a transitive frame surrounded by nouns maps onto a causative meaning and is best associated with an event involving an agent and a patient. At the same time, however, they struggle when a verb appears in an intransitive frame in which two conjoined nouns occupy the subject position. This contrast in performance between the two syntactic environments has been replicated time and again across labs, and has led some researchers to conclude that the fault lies in children’s underdeveloped syntactic representations or in the heuristics they deploy to assign semantic roles to a verb’s arguments. However, I will present the results of a set of word learning studies demonstrating that not only do adults flounder when presented with a novel verb in an intransitive frame, but when children are provided with semantic support for the form-meaning mapping, in the form of an additional informative lexical item or distributional evidence concerning the intended interpretation of the syntactic frame in the discourse, they fare much better with the intransitive frame. These findings suggest that the problem may not be an immature grammar, but rather a lack of sufficient information to narrow down the hypothesis space. Verb learning calls upon children’s syntactic, semantic, and pragmatic knowledge. When these aspects of the linguistic system work in concert, mapping form to meaning is facilitated.

Matthew Smith - The Rise of the Neural Subject

The Center for the Explanation of Consciousness at CSLI and the Stanford Humanities Center present a workshop on Interdisciplinary Approaches to Consciousness today at 5:00 PM. All are welcome to attend.

 

Matthew Smith, DLCL, Stanford 

"The Rise of the Neural Subject"

Abstract:

    How did we come to think of the self not as soul, psyche, or mind but as brain and nervous system? Though much talked about in recent popular-science books such as V. S. Ramachandran’s The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human, Patricia Churchland’s Touching a Nerve: The Self as Brain, and Joseph LeDoux’s Synaptic Self, the idea is hardly recent. An understanding of its history may help us to better understand its present and future.

    After sketching a broad history of the formation of this conception of the self, this talk will pay particular attention to cultural transformations in Western Europe and the United States in the mid-19th century. Concentrating on the period around 1870, we will find that works of art (such as Richard Wagner’s operas, Émile Zola’s novels, and the emergence of Victorian “sensation drama”) combined with neurological research (by Hermann von Helmholtz, Julius Bernstein, and George Miller Beard) to inspire a new conception of consciousness and personhood: a conception we may call the neural subject.

Refreshments will be served.

Sponsored by the Stanford Humanities Center Radway Workshops Program, and the Center for the Explanation of Consciousness, CSLI. 

Meaning in Context (MIC 3)

Bi-annual workshop bringing together linguists, computer scientists, psychologists, and philosophers to discuss problems at the intersection of semantics, pragmatics, and cognition.

Participants:

Workgroup 1: Tuesday/Wednesday

Embeddings 2.0: The Lexicon as Memory, animated by Hinrich Schütze and Jay McClelland

For more information about this workgroup, see: http://cis.lmu.de/schuetze/wgmemory.pdf

In NLP, we need knowledge about words. This knowledge today often comes from embeddings learned by word2vec, FastText, etc. But these embeddings are limited. We will explore models of memory and learning from cognitive science and machine learning to come up with a foundation for better word representations.
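
As a minimal illustration of the kind of embeddings the description refers to, here is a word2vec sketch using gensim (assuming gensim >= 4.0; the toy corpus and hyperparameters are invented):

    # Minimal word2vec training on a toy corpus (assumes gensim >= 4.0).
    from gensim.models import Word2Vec

    corpus = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "sat", "on", "the", "rug"],
        ["a", "cat", "chased", "a", "dog"],
    ]

    model = Word2Vec(
        sentences=corpus,
        vector_size=50,  # dimensionality of each word vector
        window=2,        # context window size
        min_count=1,     # keep rare words in this tiny corpus
        sg=1,            # skip-gram (sg=0 would be CBOW)
        epochs=50,
    )

    vec = model.wv["cat"]                # one fixed vector per word type
    print(model.wv.most_similar("cat"))  # nearest neighbors in vector space

The limitation the workgroup targets is visible here: the model assigns each word type a single static vector, with no mechanism for the context-sensitive storage and retrieval that memory models offer.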

Workgroup 2: Thursday/Friday

Exploring grounded and distributional learning of language, animated by Noah Goodman and Mike Frank

For more information, see: http://web.stanford.edu/~azaenen/MIC/MikeandNoah.pdf

Learning language from text corpora ('distributional') allows easy access to very large data sets; on the other hand, learning from situated, or 'grounded', utterances seems closer to what language is actually used for and closer to the human language acquisition problem. A lot of progress has been made recently on both approaches. Here we will try to understand the relationship between them and how they can be combined. For instance, we will ask: do (Bayesian) theories of language use prescribe the relationship between grounded and corpus data? Can representations (deep-)learned from corpora help to learn grounded meanings?
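
One hypothetical way to make the combination question concrete is to treat corpus-derived statistics as a prior over meanings and a grounded observation as a likelihood, combined by Bayes' rule. The Python sketch below only illustrates that framing; the words, meanings, and probabilities are all invented.

    import numpy as np

    meanings = ["animal", "vehicle"]

    # Prior over meanings for the (ambiguous) word "jaguar", e.g.
    # estimated from corpus co-occurrence statistics (distributional).
    prior = np.array([0.3, 0.7])

    # Likelihood of the observed scene (say, an image of a big cat)
    # under each meaning (grounded).
    likelihood = np.array([0.9, 0.05])

    posterior = prior * likelihood
    posterior /= posterior.sum()

    for m, p in zip(meanings, posterior):
        print(f"P({m} | word, scene) = {p:.3f}")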

Workgroup 3: Tuesday/Wednesday

Coping with polysemy, animated by Louise McNally

For more information about this workgroup, see: http://web.stanford.edu/~azaenen/MIC/wgpolysemy.pdf

Although distributional/distributed systems have proven fairly good at resolving polysemy in context, one might wonder whether they could be improved by incorporating a key premise of the Rational Speech Act (RSA) model of utterance meaning: namely, that speakers and hearers reason probabilistically about the meaning conveyed by a given expression in context, under specific assumptions about a generally limited set of alternative expressions that could have been chosen in that context. The goal of this working group is to consider how distributed representations might be dynamically modulated with information about the contextually salient alternatives that conversational agents are considering, or, alternatively, how distributed representations could be exploited specifically in interaction with the utterance-options component of an RSA model.
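
The RSA reasoning described above can be stated compactly. The sketch below runs the standard literal-listener / pragmatic-speaker / pragmatic-listener recursion over an invented two-utterance lexicon; it is a textbook RSA computation, not the working group's proposal.

    import numpy as np

    # Rows = utterances, columns = meanings; the lexicon is invented.
    utterances = ["bank", "riverbank"]
    meanings = ["financial", "river-edge"]

    # Boolean literal semantics: which meanings each utterance allows.
    lexicon = np.array([
        [1.0, 1.0],  # "bank" is polysemous
        [0.0, 1.0],  # "riverbank" has only the river-edge meaning
    ])

    prior = np.array([0.5, 0.5])  # prior over meanings
    cost = np.array([0.0, 1.0])   # the longer utterance is costlier
    alpha = 4.0                   # speaker rationality

    # Literal listener: L0(m | u) proportional to [[u]](m) * P(m)
    L0 = lexicon * prior
    L0 /= L0.sum(axis=1, keepdims=True)

    # Pragmatic speaker: S1(u | m) prop. to exp(alpha * (log L0 - cost))
    with np.errstate(divide="ignore"):
        util = alpha * (np.log(L0) - cost[:, None])
    S1 = np.exp(util)
    S1 /= S1.sum(axis=0, keepdims=True)

    # Pragmatic listener: L1(m | u) proportional to S1(u | m) * P(m)
    L1 = S1 * prior
    L1 /= L1.sum(axis=1, keepdims=True)

    print("L1('bank'):", dict(zip(meanings, L1[0].round(3))))

On this toy lexicon the pragmatic listener, hearing the polysemous "bank", shifts probability toward the financial sense precisely because the alternative "riverbank" was available for the other meaning; that is the alternatives-based modulation the paragraph describes.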

Workgroup 4: Thursday/Friday

Neural Networks and Textual Inference, animated by Lauri Karttunen and Ignacio Cases

For more information about this workgroup, see: http://web.stanford.edu//~azaenen/MIC/inference.pdf

Recent work by Bowman et al. (2015) and Rocktäschel et al. (2016) has shown that, given a sufficiently large data set such as the Stanford Natural Language Inference Corpus (SNLI), neural networks can match the performance of classical RTE systems (Dagan et al. 2006) that rely on NLP pipelines with many manually created components and features. The goal of this group is to explore whether neural networks can learn general properties of inference relations that are not represented in existing data sets such as SNLI and SICK: for example, that entailment is a reflexive and transitive relation and that contradiction is symmetric.
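
One concrete way to pose that question is to probe a trained classifier for the algebraic properties just named. The sketch below assumes a hypothetical predict(premise, hypothesis) function returning "entailment", "contradiction", or "neutral"; it is not part of any named system, and the toy predictor at the end exists only so the probe runs end-to-end.

    from itertools import permutations, product

    def check_properties(predict, sentences):
        # Reflexivity: every sentence should entail itself.
        for s in sentences:
            assert predict(s, s) == "entailment", f"not reflexive on {s!r}"

        # Symmetry of contradiction: if a contradicts b, b contradicts a.
        for a, b in permutations(sentences, 2):
            if predict(a, b) == "contradiction":
                assert predict(b, a) == "contradiction", (a, b)

        # Transitivity of entailment: a entails b and b entails c
        # should imply that a entails c.
        for a, b, c in product(sentences, repeat=3):
            if predict(a, b) == "entailment" and predict(b, c) == "entailment":
                assert predict(a, c) == "entailment", (a, b, c)

    # Toy stand-in: substring containment as entailment. A real probe
    # would wrap a trained NLI model here.
    def toy_predict(premise, hypothesis):
        return "entailment" if hypothesis in premise else "neutral"

    check_properties(toy_predict, ["a dog runs", "a dog", "a dog runs fast"])

Failures of such probes on a trained model would indicate inference properties that the training data did not convey.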

For questions and comments, contact Annie Zaenen: azaenen@stanford.edu

Language and Natural Reasoning