
Tuesday, March 3, 2015

When lexical competition becomes lexical cooperation

Lexical neighborhood effects are one of the most robust findings in spoken word recognition: words with many similar-sounding words ("neighbors") are recognized more slowly and less accurately than words with few neighbors. About 10 years ago, when I was just starting my post-doc training with Jim Magnuson, we wondered about semantic neighborhood effects. We found that things were less straightforward in semantics: near semantic neighbors slowed down visual word recognition, but distant semantic neighbors sped it up (Mirman & Magnuson, 2008). I later found the same pattern in spoken word production (Mirman, 2011). Working with Whit Tabor, we developed a preliminary computational account. Later, when Qi Chen joined my lab at MRRI, we expanded this computational model to capture orthographic, phonological, and semantic neighborhood density effects in visual and spoken word recognition and spoken word production (Chen & Mirman, 2012). The key insight from our model was that neighbors exert both inhibitory and facilitative effects on target word processing, with the inhibitory effect dominating for strongly active neighbors and the facilitative effect dominating for weakly active neighbors.
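
To make that insight concrete, here is a toy numerical sketch. This is not the Chen & Mirman (2012) model itself, just an illustration of the arithmetic, with made-up gains: facilitation from a neighbor (e.g., via shared sublexical units) scales roughly linearly with the neighbor's activation, while inhibition (e.g., lateral inhibition between lexical units) grows faster than linearly, so inhibition wins only when the neighbor is strongly active.

```python
# Toy illustration of the neighbor competition/cooperation trade-off.
# NOT the Chen & Mirman (2012) model -- all gains here are made up.

def net_neighbor_effect(a, fac_gain=1.0, inh_gain=2.0):
    """Net effect of one neighbor with activation a (0 to 1) on the target:
    linear facilitation minus faster-growing (quadratic) inhibition."""
    return fac_gain * a - inh_gain * a ** 2

for a in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
    net = net_neighbor_effect(a)
    kind = "facilitative" if net > 0 else ("inhibitory" if net < 0 else "null")
    print(f"neighbor activation {a:.1f}: net effect {net:+.2f} ({kind})")
```

With these arbitrary gains the net effect is facilitative below an activation of 0.5 and inhibitory above it; the crossover point depends entirely on the made-up gains, but the qualitative flip is the point.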

In a new paper soon to be published in Cognitive Science (Chen & Mirman, in press), we test a unique prediction of our model. The idea is that phonological neighborhood effects in spoken word recognition are so robust because phonological neighbors are consistently strongly activated during spoken word recognition. If we can reduce their activation by creating a context in which they are not among the likely targets, then their inhibitory effect will not just shrink; it will fall below the facilitative effect, so the net effect will flip to facilitation. We tested this using spoken word-to-picture matching with eye-tracking, more commonly known as the "visual world paradigm". When four (phonologically unrelated) pictures appear on the screen, they provide some semantic information about the likely target word. The longer they are on-screen before the spoken word begins, the more this semantic context will influence which lexical candidates are activated. At one extreme, without any semantic context, we should see the standard inhibitory effect of phonological neighbors; at the other extreme, if only the pictured items are viable candidates, there should be no effect of phonological neighbors. Here is the cool part (if I may say so): at an intermediate point, the semantic context reduces phonological neighbor activation but doesn't eliminate it, so the neighbors will be weakly active and will produce a facilitative effect.
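
Continuing the toy sketch from above (the activation values are hypothetical; only their ordering matters), the preview manipulation amounts to moving neighbor activation along that curve:

```python
# Toy sketch of the preview-duration prediction; hypothetical numbers.

def net_neighbor_effect(a, fac_gain=1.0, inh_gain=2.0):
    """Linear facilitation minus faster-growing (quadratic) inhibition."""
    return fac_gain * a - inh_gain * a ** 2

scenarios = [
    ("little/no preview: neighbors strongly active", 0.9),
    ("intermediate preview: neighbors weakly active", 0.3),
    ("long preview: only pictured items viable", 0.0),
]

for label, activation in scenarios:
    net = net_neighbor_effect(activation)
    print(f"{label}: activation={activation:.1f}, net effect={net:+.2f}")
```

Strong neighbor activation yields net inhibition (-0.72 here), weak activation yields net facilitation (+0.12), and fully suppressed neighbors yield no effect: exactly the flip the model predicts.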

We report simulations of our model concretely demonstrating this prediction, and an experiment in which we manipulate the preview duration (how long the pictures are displayed before the spoken word starts) as a way of varying the strength of the semantic context. The results were (mostly) consistent with this prediction.

At the 500 ms preview (middle panel of the results figure), there is a clear facilitative effect of neighborhood density: the target fixation proportions for high density targets (red line) rise faster than for the low density targets (blue line). This did not happen with either the shorter or the longer preview duration, and it is not expected unless the preview provides semantic input that weakens the activation of phonological neighbors, thus making their net effect facilitative rather than inhibitory.
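
For a feel for what "rise faster" means analytically, here is a self-contained sketch with synthetic fixation curves; everything in it is made up, and the paper's actual analysis and numbers differ. One common way to quantify the rise of a fixation time course is to fit a logistic function to each condition and compare the fitted slopes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic illustration only -- made-up curves, not the actual data.

def logistic(t, asymptote, slope, crossover):
    """Logistic growth of target fixation proportion over time (ms)."""
    return asymptote / (1.0 + np.exp(-slope * (t - crossover)))

t = np.arange(0, 1000, 50)  # time from word onset, in ms
rng = np.random.default_rng(1)

# Hypothetical condition curves: "high density" rises faster here.
high = logistic(t, 0.85, 0.012, 450) + rng.normal(0, 0.02, t.size)
low = logistic(t, 0.85, 0.008, 500) + rng.normal(0, 0.02, t.size)

for name, y in [("high density", high), ("low density", low)]:
    (asymptote, slope, crossover), _ = curve_fit(logistic, t, y,
                                                 p0=[0.8, 0.01, 450])
    print(f"{name}: asymptote={asymptote:.2f}, slope={slope:.4f}/ms, "
          f"crossover={crossover:.0f} ms")
```

A steeper fitted slope for the high density condition is the synthetic analogue of the red line rising faster than the blue line in the 500 ms preview panel.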

I'm excited about this paper because "lexical competition" is such a core concept in spoken word recognition that it is hard to imagine neighborhood density having a facilitative effect, but that's what our model predicted, and the eye-tracking results bore it out. This is one of those full-cycle cases where behavioral data led to a theory, which led to a computational model, which made new predictions, which were tested in a behavioral experiment. That's what I was trained to do and it feels good to have actually pulled it off.

As a final meta comment: we owe a big "Thank You" to Keith Apfelbaum, Sheila Blumstein, and Bob McMurray, whose 2011 paper was part of the inspiration for this study. Even more importantly, Keith and Bob first shared their data for our follow-up analyses, and then their study materials to help us run our experiment. I think this kind of sharing is hugely important for having a science that truly builds and moves forward in a replicable way, but it is all too rare. Apfelbaum, Blumstein, and McMurray not only ran a good study, they also helped other people build on it, which multiplied their positive contribution to the field. I hope one day we can make this kind of sharing the standard in the field, but until then, I'll just appreciate the people who do it.


Apfelbaum, K. S., Blumstein, S. E., & McMurray, B. (2011). Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems. Psychonomic Bulletin & Review, 18(1), 141-149. PMID: 21327343
Chen, Q., & Mirman, D. (2012). Competition and cooperation among similar representations: Toward a unified account of facilitative and inhibitory effects of lexical neighbors. Psychological Review, 119(2), 417-430. PMID: 22352357
Chen, Q., & Mirman, D. (2015). Interaction between phonological and semantic representations: Time matters. Cognitive Science (in press). PMID: 25155249
Mirman, D. (2011). Effects of near and distant semantic neighbors on word production. Cognitive, Affective & Behavioral Neuroscience, 11(1), 32-43. PMID: 21264640
Mirman, D., & Magnuson, J. S. (2008). Attractor dynamics and semantic neighborhood density: Processing is slowed by near neighbors and speeded by distant neighbors. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(1), 65-79. PMID: 18194055
