Sebastian Schuster (Stanford University) gave a computational linguistics talk on utterance interpretation to CU Linguistics.
Monday, February 17th
4:00-5:30pm
MUEN D430
Title: "Modeling Utterance Interpretation in Context"
Abstract:
In communication, listeners are usually able to rapidly infer a speaker's intended meaning, which is often more specific than the literal meaning of an utterance. Previous experimental and theoretical work has highlighted the importance of contextual cues for drawing inferences in comprehension. However, the processes involved in integrating contextual cues, and in learning associations between cues and interpretations, remain poorly understood. In my talk, I will present two computational models of utterance interpretation in context that allow us to gain insights into these processes.
I will first discuss how listeners integrate one contextual factor, the speaker's identity, into the interpretation of utterances with uncertainty expressions such as 'might' and 'probably.' These expressions can be used to communicate the likelihood of future events, but, crucially, the mapping between uncertainty expressions and event likelihoods varies considerably across speakers. I will show experimental evidence that listeners deal with this variability by adapting to specific speakers' language use. I will then present a Bayesian computational model of this adaptation process, couched within the Rational Speech Act framework, and discuss what model simulations can reveal about the nature of the representations that are updated during adaptation.
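To give a flavor of how such a model can be set up, here is a minimal sketch in the Rational Speech Act style, assuming simple threshold semantics for 'might' and 'probably' and a grid approximation to the Bayesian update over speaker-specific thresholds. The grids, the rationality parameter alpha, and the example observations are illustrative choices, not details from the talk.

```python
import itertools
import numpy as np

probs = np.linspace(0.01, 0.99, 50)    # grid of possible event likelihoods
thetas = np.linspace(0.0, 0.95, 20)    # candidate semantic thresholds
UTTS = ["might", "probably"]

def speaker_probs(theta_might, theta_prob, alpha=4.0):
    """S1(u | p): how a rational speaker with these thresholds chooses
    between 'might' and 'probably' for each event likelihood p."""
    truth = np.stack([probs > theta_might, probs > theta_prob])
    L0 = truth / truth.sum(axis=1, keepdims=True)   # literal listener
    util = L0 ** alpha                              # soft-max optimal speaker
    Z = util.sum(axis=0, keepdims=True)
    return np.where(Z > 0, util / np.maximum(Z, 1e-12), 0.0)

def adapt(observations):
    """Grid-based Bayesian update over speaker-specific thresholds,
    given observed (utterance, event-probability-index) pairs."""
    grid = list(itertools.product(thetas, thetas))
    log_post = np.zeros(len(grid))                  # uniform prior
    for i, (tm, tp) in enumerate(grid):
        S = speaker_probs(tm, tp)
        for utt, p_idx in observations:
            log_post[i] += np.log(S[UTTS.index(utt), p_idx] + 1e-12)
    post = np.exp(log_post - log_post.max())
    return grid, post / post.sum()

# Example: a speaker who uses "probably" even for middling likelihoods.
obs = [("probably", 20), ("probably", 25), ("might", 10)]
grid, post = adapt(obs)
tm, tp = grid[int(np.argmax(post))]
print(f"MAP thresholds: might > {tm:.2f}, probably > {tp:.2f}")
```

The key idea the sketch captures is that adaptation can be modeled as inference over the parameters of the speaker model itself: each observed utterance-event pair shifts the listener's posterior over that speaker's semantic thresholds.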
In the second part of my talk, I will present a neural network model that predicts the strength of scalar inferences from "some" to "some but not all." Recent experimental work has shown that the strength of this inference systematically depends on several linguistic and contextual factors. For example, the presence of a partitive construction increases its strength: humans perceive the inference that Sue did not eat all of the cookies to be stronger after hearing "Sue ate some of the cookies" than after hearing the same utterance without a partitive, "Sue ate some cookies." I will discuss to what extent a neural network model can infer associations between linguistic cues and scalar inference strength from statistical input, and what this can tell us about the learnability of such cue-inference strength associations.
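As a rough illustration of this kind of setup, the sketch below assumes an LSTM encoder with a sigmoid regression head that maps an utterance to a strength score in [0, 1], to be trained against human strength ratings. The architecture, toy vocabulary, and hyperparameters are hypothetical stand-ins, not the model presented in the talk.

```python
import torch
import torch.nn as nn

class StrengthPredictor(nn.Module):
    """Encode an utterance with an LSTM; regress to inference strength."""
    def __init__(self, vocab_size, emb_dim=50, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):                    # (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)  # (batch,)

# Toy vocabulary covering the partitive vs. bare contrast from the abstract.
vocab = {w: i for i, w in enumerate(
    ["<pad>", "sue", "ate", "some", "of", "the", "cookies"])}

def encode(sentence):
    return torch.tensor([[vocab[w] for w in sentence.split()]])

model = StrengthPredictor(len(vocab))
# After training on human ratings, the model should score the partitive
# variant higher than the bare one (untrained, both are near chance).
partitive = encode("sue ate some of the cookies")
bare = encode("sue ate some cookies")
print(model(partitive).item(), model(bare).item())
```

The empirical question the talk raises is then whether a learner of this general sort, given only statistical input, recovers the cue-strength associations (such as the partitive effect) that experiments have documented in humans.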
About the speaker:
Sebastian Schuster is a PhD candidate in linguistics at Stanford University and a member of the Interactive Language Processing Lab and the Stanford NLP group. His research focuses on computational models of pragmatic utterance interpretation. He is also a core member of the Universal Dependencies initiative, where he leads efforts to make multilingual dependency representations more useful for natural language understanding tasks. He holds an MS degree in Computer Science from Stanford University and a BS in Computer Science from the University of Vienna.