
Cog Lunch: Thomas Clark
Description
Zoom Link: https://mit.zoom.us/j/91754440226
Speaker: Thomas Clark
Affiliation: CPL / Levy Lab
Title: Modeling Noisy-Channel Comprehension: Incremental Processing and Reanalysis as Probabilistic Inference
Abstract: How are comprehenders able to extract meaning from utterances in the presence of production errors? The Noisy-Channel theory provides an account grounded in Bayesian inference: comprehenders may interpret utterances non-literally when there is an alternative with higher prior probability and a plausible error likelihood. Yet the question of how such alternatives are generated and evaluated remains open to debate. One obstacle has been the lack of implemented computational models capable of predicting human processing of arbitrary utterances and handling the considerable uncertainty involved in building a generative model of “noisy” language. Here, we model noisy-channel processing as approximate probabilistic inference over intended sentences and production errors. We combine Sequential Monte Carlo methods with “rejuvenation” moves to yield an algorithm that is incremental yet allows reanalysis of previous material. We present preliminary evidence that the model reproduces known patterns in human behavior for implausible or erroneous sentences. Future work will explore our model’s ability to predict human reading behavior. Our results offer a step towards a flexible, algorithmic account of inference during real-world language comprehension.
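The combination of incremental inference with reanalysis described in the abstract can be illustrated with a minimal sketch. This is not the talk's actual model: it assumes a toy unigram prior over intended words and a symmetric substitution-error likelihood (the vocabulary, prior, and error rate below are invented for illustration), but it shows the general shape of Sequential Monte Carlo with Metropolis-Hastings rejuvenation moves that revisit earlier positions.

```python
import random

random.seed(0)

# Toy noisy-channel setup (illustrative assumptions, not the talk's model):
# intended words follow a unigram prior; each observed word matches the
# intended word with prob 1 - EPS, else is a substitution error.
VOCAB = ["the", "cat", "sat", "mat", "hat"]
PRIOR = {"the": 0.4, "cat": 0.2, "sat": 0.2, "mat": 0.1, "hat": 0.1}
EPS = 0.1  # assumed production-error rate

def likelihood(observed, intended):
    """P(observed word | intended word) under the toy error model."""
    return (1 - EPS) if observed == intended else EPS / (len(VOCAB) - 1)

def smc_with_rejuvenation(observed_words, n_particles=200):
    """Incremental inference over intended sentences, word by word."""
    particles = [[] for _ in range(n_particles)]
    weights = [1.0] * n_particles
    for t, obs in enumerate(observed_words):
        # Extend each particle: propose the next intended word from the
        # prior, so the importance weight is just the likelihood.
        for i in range(n_particles):
            w = random.choices(VOCAB, weights=[PRIOR[v] for v in VOCAB])[0]
            particles[i].append(w)
            weights[i] *= likelihood(obs, w)
        # Resample particles in proportion to their weights.
        particles = [p[:] for p in
                     random.choices(particles, weights=weights, k=n_particles)]
        weights = [1.0] * n_particles
        # Rejuvenation: a Metropolis-Hastings move on a random earlier
        # position, allowing reanalysis of previously processed material.
        for p in particles:
            j = random.randrange(t + 1)
            old, new = p[j], random.choice(VOCAB)  # symmetric proposal
            ratio = (PRIOR[new] * likelihood(observed_words[j], new)) / (
                     PRIOR[old] * likelihood(observed_words[j], old))
            if random.random() < min(1.0, ratio):
                p[j] = new
    return particles

# Example: infer intended words from an observed two-word utterance.
final = smc_with_rejuvenation(["the", "cat"])
p_the = sum(p[0] == "the" for p in final) / len(final)
```

Because the first observed word strongly supports the intended word "the" under both prior and likelihood, most surviving particles agree on it; the rejuvenation step is what lets particles revise that first position even after later words arrive.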