
Cog Lunch: James A Michaelov
Description
Zoom Link: https://mit.zoom.us/j/96031648583
Speaker: James A Michaelov
Affiliation: Computational Psycholinguistics Laboratory (Roger Levy)
Title: What can the fields of psycholinguistics and natural language processing learn from each other?
Abstract: While the idea that language comprehension involves prediction has been around since at least the 1960s, advances in natural language processing technology have made it more viable than ever to model this computationally. As language models have increased in size and power, performing better on an ever-wider array of natural language tasks, their predictions also increasingly appear to correlate with the N400, a neural signal of processing difficulty thought to reflect the extent to which a given word is expected based on its preceding context. Indeed, the predictions of contemporary large language models can not only model the effects of certain types of stimuli on the amplitude of the N400 response, but can also predict single-trial N400 amplitude better than traditional metrics such as cloze probability. With these results in mind, I will discuss how language models can be used to study human language processing, both as a deflationary tool and in support of positive claims about the extent to which humans may use language statistics as the basis of prediction, bringing language in line with other cognitive domains. Finally, I will discuss how the close correlation between language model predictions and N400 amplitude allows us to use previous psycholinguistic research to identify potentially unexpected patterns of behavior in state-of-the-art large language models.
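
For readers unfamiliar with how such word-level predictions are obtained, below is a minimal sketch (not taken from the talk) of computing per-word surprisal with a pretrained language model, here GPT-2 via the Hugging Face Transformers library. Surprisal (the negative log probability of a word given its preceding context) is the kind of quantity typically compared against N400 amplitude; the specific model, example sentence, and units are illustrative assumptions rather than the speaker's method.

    # Illustrative sketch: per-token surprisal from GPT-2 (assumed setup, not the talk's pipeline)
    import math

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # Classic N400-style anomalous ending (cf. "He spread the warm bread with butter")
    sentence = "He spread the warm bread with socks."
    input_ids = tokenizer(sentence, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

    # Logits at position i predict the token at position i + 1
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    surprisal_nats = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    surprisal_bits = surprisal_nats / math.log(2)

    for token, s in zip(tokenizer.convert_ids_to_tokens(targets.tolist()), surprisal_bits):
        print(f"{token!r:>12}  {s.item():6.2f} bits")

Running a sketch like this prints a surprisal value for every token after the first; in the N400 literature, the value for the sentence-final word (high for "socks", low for a plausible continuation such as "butter") is what gets related to the size of the neural response.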