Tuesday, March 3, 2015

When lexical competition becomes lexical cooperation

Lexical neighborhood effects are one of the most robust findings in spoken word recognition: words with many similar-sounding words ("neighbors") are recognized more slowly and less accurately than words with few neighbors. About 10 years ago, when I was just starting my post-doc training with Jim Magnuson, we wondered about semantic neighborhood effects. We found that things were less straightforward in semantics: near semantic neighbors slowed down visual word recognition, but distant semantic neighbors sped up visual word recognition (Mirman & Magnuson, 2008). I later found the same pattern in spoken word production (Mirman, 2011). Working with Whit Tabor, we developed a preliminary computational account. Later, when Qi Chen joined my lab at MRRI, we expanded this computational model to capture orthographic, phonological, and semantic neighborhood density effects in visual and spoken word recognition and spoken word production (Chen & Mirman, 2012). The key insight from our model was that neighbors exert both inhibitory and facilitative effects on target word processing, with the inhibitory effect dominating for strongly active neighbors and the facilitative effect dominating for weakly active neighbors.

In a new paper soon to be published in Cognitive Science (Chen & Mirman, in press) we test a unique prediction from our model. The idea is that phonological neighborhood effects in spoken word recognition are so robust because phonological neighbors are consistently strongly activated during spoken word recognition. If we can reduce their activation by creating a context in which they are not among the likely targets, then their inhibitory effect will not just get smaller, it will become smaller than the facilitative effect, so the net result will be a flip to a facilitative effect. We tested this by using spoken word-to-picture matching with eye-tracking, more commonly known as the "visual world paradigm". When four (phonologically unrelated) pictures appear on the screen, they provide some semantic information about the likely target word. The longer they are on-screen before the spoken word begins, the more this semantic context will influence which lexical candidates will be activated. At one extreme, without any semantic context, we should see the standard inhibitory effect of phonological neighbors; at the other extreme, if only the pictured items are viable candidates, there should be no effect of phonological neighbors. Here is the cool part (if I may say so): at an intermediate point, the semantic context reduces phonological neighbor activation but doesn't eliminate it, so the neighbors will be weakly active and will produce a facilitative effect. 

We report simulations of our model concretely demonstrating this prediction and an experiment in which we manipulate the preview duration (how long the pictures are displayed before the spoken word starts) as a way of manipulating the strength of semantic context. The results were (mostly) consistent with this prediction. 
At 500ms preview (middle panel), there is a clear facilitative effect of neighborhood density: the target fixation proportions for high density targets (red line) rise faster than for the low density targets (blue line). This did not happen with either the shorter or longer preview duration and is not expected unless the preview provides semantic input that weakens activation of phonological neighbors, thus making their net effect facilitative rather than inhibitory.

I'm excited about this paper because "lexical competition" is such a core concept in spoken word recognition that it is hard to imagine neighborhood density having a facilitative effect, but that's what our model predicted and the eye-tracking results bore it out. This is one of those full-cycle cases where behavioral data led to a theory, which led to a computational model, which made new predictions, which were tested in a behavioral experiment. That's what I was trained to do and it feels good to have actually pulled it off.

As a final meta comment: we owe a big "Thank You" to Keith Apfelbaum, Sheila Blumstein, and Bob McMurray, whose 2011 paper was part of the inspiration for this study. Even more importantly, Keith and Bob first shared their data for our follow-up analyses, and then their study materials to help us run our experiment. I think this kind of sharing is hugely important for having a science that truly builds and moves forward in a replicable way, but it is all too rare. Apfelbaum, Blumstein, and McMurray not only ran a good study, they also helped other people build on it, which multiplied their positive contribution to the field. I hope one day we can make this kind of sharing the standard in the field, but until then, I'll just appreciate the people who do it.

Apfelbaum, K. S., Blumstein, S. E., & McMurray, B. (2011). Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems. Psychonomic Bulletin & Review, 18(1), 141-149. PMID: 21327343
Chen, Q., & Mirman, D. (2012). Competition and cooperation among similar representations: Toward a unified account of facilitative and inhibitory effects of lexical neighbors. Psychological Review, 119(2), 417-430. PMID: 22352357
Chen, Q., & Mirman, D. (2015). Interaction between phonological and semantic representations: Time matters. Cognitive Science (in press). PMID: 25155249
Mirman, D. (2011). Effects of near and distant semantic neighbors on word production. Cognitive, Affective & Behavioral Neuroscience, 11(1), 32-43. PMID: 21264640
Mirman, D., & Magnuson, J. S. (2008). Attractor dynamics and semantic neighborhood density: Processing is slowed by near neighbors and speeded by distant neighbors. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(1), 65-79. PMID: 18194055

Monday, February 23, 2015

How to learn R: A flow chart

I often find myself giving people suggestions about how to learn R, so I decided to put together a flow chart. This is geared toward typical psychology or cognitive science researchers planning to do basic data analysis in R. This is how to get started -- it won't make you an expert, but it should get you past your SPSS/Excel addiction. One day I'll expand it to include advanced topics.

Monday, February 2, 2015

My "Top 5 R Functions"

In preparation for an R Workgroup meeting, I started thinking about what would be my "Top 5 R Functions". I ruled out the functions for basic mechanics - save, load, mean, etc. - they're obviously critical, but every programming language has them, so there's nothing especially "R" about them. I also ruled out the fancy statistical analysis functions like (g)lmer -- most people (including me) start using R because they want to run those analyses, so it seemed a little redundant. I started using R because I wanted to do growth curve analysis, so it seems like a weak endorsement to say that I like R because it can do growth curve analysis. No, I like R because it makes (many) somewhat complex data operations really, really easy. Understanding how to take advantage of these R functions is what transformed my view of R from purely functional (I need to do analysis X and R has functions for doing analysis X) to an all-purpose tool that allows me to do data processing, management, analysis, and visualization extremely quickly and easily. So, here are the 5 functions that did that for me:

  1. subset() for making subsets of data (natch)
  2. merge() for combining data sets in a smart and easy way
  3. melt() (from the reshape2 package) for converting from wide to long data formats
  4. dcast() (also from reshape2) for converting from long to wide data formats, and for making summary tables
  5. ddply() (from the plyr package) for doing split-apply-combine operations, which covers a huge swath of the trickiest data operations 
For anyone interested, I posted my R Workgroup notes on how to use these functions on RPubs. Side note: after a little configuration, I found it super easy to write these using knitr, "knit" them into a webpage, and post that page on RPubs.
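To give a flavor of how these five fit together, here is a quick sketch using R's built-in iris data (melt() and dcast() come from reshape2, ddply() from plyr; the particular subsetting and summary choices are just for illustration):

```r
library(reshape2)  # melt(), dcast()
library(plyr)      # ddply()

# subset(): keep only the rows (and columns) you want
big <- subset(iris, Sepal.Length > 7, select = c(Species, Sepal.Length))

# melt(): wide to long -- one row per measurement
long <- melt(iris, id.vars = "Species",
             variable.name = "Measure", value.name = "Value")

# dcast(): long back to wide, here as a summary table of means
tab <- dcast(long, Species ~ Measure, value.var = "Value",
             fun.aggregate = mean)

# ddply(): split-apply-combine -- per-Species, per-Measure summaries
stats <- ddply(long, .(Species, Measure), summarize,
               Mean = mean(Value), SD = sd(Value))

# merge(): combine two data frames by a shared key
info <- data.frame(Species = c("setosa", "versicolor", "virginica"),
                   Label = c("small", "medium", "large"))
merged <- merge(tab, info, by = "Species")
```

The payoff is the chaining: melt once, then dcast and ddply give you almost any summary table you can think of.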

Conspicuously missing from the above list is ggplot, which I think deserves a special lifetime achievement award for how it has transformed how I think about data exploration and data visualization. I'm planning that for the next R Workgroup meeting.

Tuesday, October 7, 2014

Why pursue a Ph.D.?

This video is directed at STEM fields, so I am not sure everything in it applies perfectly to cognitive neuroscience. But, if you're going to go to grad school, I think this is the right kind of perspective to bring:

Why Pursue A Ph.D.? Three Practical Reasons (12-minute video) from Philip Guo on Vimeo.

(via FlowingData)

Wednesday, August 13, 2014

Plotting mixed-effects model results with effects package

As separate by-subjects and by-items analyses have been replaced by mixed-effects models with crossed random effects of subjects and items, I've often found myself wondering about the best way to plot data. The simple-minded means and SE from trial-level data will be inaccurate because they won't take the nesting into account. If I compute subject means and plot those with by-subject SE, then I'm plotting something different from what I analyzed, which is not always terrible, but definitely not ideal. It seems intuitive that the condition means and SE's are computable from the model's parameter estimates, but that computation is not trivial, particularly when you're dealing with interactions. Or, rather, that computation was not trivial until I discovered the effects package.
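As a minimal sketch of the workflow (using lme4's built-in sleepstudy data; the exact call will depend on your model, and this relies on the effects package's support for merMod objects):

```r
library(lme4)     # lmer()
library(effects)  # effect()

# fit a mixed-effects model with crossed/nested random effects
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# effect() computes model-based estimates with confidence intervals
ef <- effect("Days", m)

# as.data.frame() gives fit, lower, and upper columns, ready for plotting
ef.df <- as.data.frame(ef)
head(ef.df)
```

From there, ef.df can go straight into ggplot with geom_line() and geom_ribbon(), so what you plot is exactly what the model estimated.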

Tuesday, August 5, 2014

Visualizing Components of Growth Curve Analysis

This is a guest post by Matthew Winn:

One of the more useful skills I’ve learned in the past couple years is growth curve analysis (GCA), which helps me analyze eye-tracking data and other kinds of data that take a functional form. Like some other advanced statistical techniques, it is a procedure that can be done without complete understanding, and is likely to demand more than one explanation before you really “get it”. In this post, I will illustrate the way that I think about it, in hopes that it can “click” for some more people. The objective is to break down a complex curve into individual components.
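For a concrete feel for those components, here is a small sketch using R's poly() to generate orthogonal polynomial time terms, the usual building blocks of GCA (the particular time range and weights are arbitrary, just for illustration):

```r
# orthogonal polynomial time terms over 11 time bins
t <- 0:10
ot <- poly(t, 3)  # linear, quadratic, and cubic components
colnames(ot) <- c("linear", "quadratic", "cubic")

# the components are uncorrelated with one another,
# so each captures a distinct aspect of curve shape
round(cor(ot), 10)

# a fitted curve is a weighted sum of the components:
# an overall level plus linear, quadratic, etc. contributions
fit <- 0.5 + 2 * ot[, "linear"] - 1 * ot[, "quadratic"]
```

Because the components are orthogonal, the estimated weight on each one can be interpreted on its own: the quadratic term's coefficient tells you about curvature independently of the overall slope.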

Friday, April 4, 2014

Flip the script, or, the joys of coord_flip()

Has this ever happened to you?

I hate it when the labels on the x-axis overlap, but this can be hard to avoid. I can stretch the figure out, but then the data points are spread farther apart and the space where I want to put the figure (either in a talk or a paper) may not accommodate that. I've never liked turning the labels diagonally, so recently I've started using coord_flip() to switch the x- and y-axes:
library(ggplot2)
ggplot(chickwts, aes(feed, weight)) +
  stat_summary(fun.data=mean_se, geom="pointrange") +
  coord_flip()

It took a little getting used to, but I think this works well. It's especially good for factor analyses (where you have many labeled items):
library(psych)  # principal() comes from the psych package
pc <- principal(Harman74.cor$cov, 4, rotate="varimax")
loadings <- as.data.frame(pc$loadings[, 1:ncol(pc$loadings)])
loadings$Test <- rownames(loadings)

ggplot(loadings, aes(Test, RC1)) + geom_bar(stat="identity") + coord_flip() + theme_bw(base_size=10)
It also works well if you want to plot parameter estimates from a regression model (where the parameter names can get long):
library(lme4)  # lmer() comes from the lme4 package
m <- lmer(weight ~ Time * Diet + (Time | Chick), data=ChickWeight, REML=F)
coefs <- as.data.frame(coef(summary(m)))
colnames(coefs) <- c("Estimate", "SE", "tval")
coefs$Label <- rownames(coefs)

ggplot(coefs, aes(Label, Estimate)) + geom_pointrange(aes(ymin = Estimate - SE, ymax = Estimate + SE)) + geom_hline(yintercept=0) + coord_flip() + theme_bw(base_size=10)