Thursday, September 3, 2015

Reproducibility project: A front row seat

A recent paper in Science reports the results of a large-scale effort to test reproducibility in psychological science. The results have caused much discussion (as well they should) in both general-public and scientific forums. I thought I would offer my perspective as the lead author of one of the studies that was included in the reproducibility analysis. I had heard about the project even before being contacted to participate, and one of the things that appealed to me about it was that they were trying to be unbiased in their selection of studies for replication: all papers published in three prominent journals in 2008. Jim Magnuson and I had published a paper in one of those journals (Journal of Experimental Psychology: Learning, Memory, & Cognition) in 2008 (Mirman & Magnuson, 2008), so I figured I would hear from them sooner or later.

Friday, June 19, 2015

Zeno's paradox of teaching

I've wrapped up my Spring term teaching and received my teaching evals. Now that I've (finally) had a chance to teach the same class a few times, I am starting to believe in what I call Zeno's Paradox of Teaching: every time I teach a class, my improvement in teaching quality is half the distance between the quality of the previous offering and my maximum ability to teach that material.
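In symbols (a tongue-in-cheek formalization; q_n and M are just labels I'm making up here): if q_n is the quality of the n-th offering and M is my maximum ability to teach that material, then q_{n+1} = q_n + (M - q_n)/2, which creeps toward M but never actually gets there.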
If I'm right about this, then I think it means that it's important to think long-term when approaching teaching:
  1. New faculty (like me) should start by teaching primarily core courses, ones that are offered every year, have good support materials, and provide a consistent opportunity for improvement. Specialized seminars can be fun to teach, but if they're not going to be offered every year, then improvement will be slow.
  2. Don't drive yourself (myself) crazy trying to teach the "perfect" class on your (my) first time teaching. Try to do a good job and next time try to improve on it as much as possible.
  3. Zeno's paradox means that I'll never teach quite as well as I think I could teach. The positive message there is that one should keep trying to come up with creative ways to improve a course. The warning is that perfection is not an appropriate standard, so one shouldn't be too hard on oneself for failing to reach it.

Monday, June 8, 2015

A little growth curve analysis Q&A

I had an email exchange with Jeff Malins, who asked several questions about growth curve analysis. I often get questions of this sort and Jeff agreed to let me post excerpts from our (email) conversation. The following has been lightly edited for clarity and to be more concise.

Sunday, June 7, 2015

Job Opening: MRRI Institute Investigator (all levels) -- Language and Cognition in Neuropsychological Populations

Moss Rehabilitation Research Institute (MRRI) seeks an Institute Investigator to join our historic program in language and cognition and help build the next generation of translational neuroscience/neurorehab research.

The successful applicant is expected to conduct an independent program of research and to participate in research collaborations within and outside MRRI. The ideal candidate is a cognitive scientist, clinical scientist, neuroscientist, or speech-language pathologist who studies language or related cognitive disorders, and who may also conduct research on translating basic science findings to improve clinical practice. Preference will be given to candidates who complement the faculty’s interests in areas such as language processing, language learning, semantics, action planning, cognitive control, neuromodulation, neuroplasticity, and/or lesion-symptom mapping.

Monday, April 20, 2015

Plotting Factor Analysis Results

A recent factor analysis project (as discussed previously here, here, and here) gave me an opportunity to experiment with some different ways of visualizing highly multidimensional data sets. Factor analysis results are often presented in tables of factor loadings, which are good when you want the numerical details, but bad when you want to convey larger-scale patterns – loadings of 0.91 and 0.19 look similar in a table but very different in a graph. The detailed code is posted on RPubs because embedding the code, output, and figures in a webpage is much, much easier using RStudio's markdown functions. That version shows how to get these example data and how to format them correctly for these plots. Here I will just post the key plot commands and figures those commands produce. 
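As a taste, here is a minimal sketch of the kind of plot command I mean; the data frame name and columns (loadings.long, with Test, Factor, and Loading) are placeholders, and the RPubs version shows how to build the real thing:

    library(ggplot2)
    # loadings.long: hypothetical long-format data, one row per test-factor pair,
    # with columns Test, Factor, and Loading
    ggplot(loadings.long, aes(x = Factor, y = Test, fill = Loading)) +
      geom_tile() +
      scale_fill_gradient2(low = "red", mid = "white", high = "blue",
                           midpoint = 0, limits = c(-1, 1)) +
      theme_minimal()

A heatmap like this makes the contrast between a 0.91 loading and a 0.19 loading immediately visible, which is exactly what the table hides.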

Aphasia factors vs. subtypes

One of the interesting things (to me anyway) that came out of our recent factor analysis project (Mirman et al., 2015, in press; see Part 1 and Part 2) is a way of reconsidering aphasia types in terms of psycholinguistic factors rather than the traditional clinical aphasia subtypes.

The traditional aphasia subtyping approach is to use a diagnostic test like the Western Aphasia Battery or the Boston Diagnostic Aphasia Examination to assign an individual with aphasia to one of several subtype categories: Anomic, Broca's, Wernicke's, Conduction, Transcortical Sensory, Transcortical Motor, or Global aphasia. This approach has several well-known problems (see, e.g., Caplan, 2011, in K. M. Heilman & E. Valenstein (Eds.), Clinical Neuropsychology, 5th Edition, Oxford Univ. Press, pp. 22-41), including heterogeneous symptomatology (e.g., Broca's aphasia is defined by co-occurrence of symptoms that can have different manifestations and multiple, possibly unrelated causes) and the relatively high proportion of "unclassifiable" or "mixed" aphasia cases that do not fit into a single subtype category. And although aphasia subtypes are thought to have clear lesion correlates (Broca's aphasia = lesion in Broca's area; Wernicke's aphasia = lesion in Wernicke's area), this correlation is weak at best (15-40% of patients have lesion locations that are not predictable from their aphasia subtype).

Our factor analysis results provide a way to evaluate the classic aphasia syndromes with respect to data-driven performance clusters; that is, the factor scores. Our sample of 99 participants with aphasia had reasonable representation of four aphasia subtypes: Anomic (N=44), Broca's (N=27), Conduction (N=16), and Wernicke's (N=8); the 1 Global and 3 Transcortical Motor cases are not included here due to small sample sizes. The figure below shows, for each aphasia subtype group, the average (+/- SE) score on each of the four factors. Factor scores should be interpreted roughly like z-scores: positive means better-than-average performance, negative means poorer-than-average performance.


[Figure: average (+/- SE) factor scores for each aphasia subtype group. Credit: Mirman et al. (in press), Neuropsychologia]
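(Aside for R users: a summary plot like this is easy to draw with plyr and ggplot2. A minimal sketch, assuming a hypothetical long-format data frame scores.long with one row per participant per factor and columns Subtype, Factor, and Score:)

    library(plyr)
    library(ggplot2)
    # scores.long is hypothetical: columns Subtype, Factor, Score (one row per participant x factor)
    score.summary <- ddply(scores.long, .(Subtype, Factor), summarise,
                           M = mean(Score), SE = sd(Score) / sqrt(length(Score)))
    ggplot(score.summary, aes(x = Factor, y = M, ymin = M - SE, ymax = M + SE)) +
      facet_wrap(~ Subtype) +
      geom_pointrange() +
      geom_hline(yintercept = 0, linetype = "dashed") +
      theme_bw()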
At first glance, the factor scores align with general descriptions of the aphasia subtypes: Anomic is a relatively mild aphasia so performance was generally better than average, participants with Broca's aphasia had production deficits (both phonological and semantic), participants with Conduction aphasia had phonological deficits (both speech recognition and speech production), and Wernicke's aphasia is a more severe aphasia so these participants had relatively impaired performance on all factors that was particularly pronounced for the semantic recognition factor. However, these central tendencies hide the tremendous amount of overlap among the four aphasia subtype groups for each factor. This can be seen in the density distributions of exactly the same data:
As one example, consider the top left panel: the Wernicke's aphasia group clearly had the highest proportion of participants with poor semantic recognition, but some participants in that group were in the moderate range, overlapping with the other groups. Similarly, the other panels show that it would be relatively easy to find an individual in each subtype group who violates the expected pattern for that group (e.g., a participant with Conduction aphasia who has good speech recognition). This means that the group label only provides rough, probabilistic information about an individual's language abilities and is probably not very useful in a research context where we can typically characterize each participant's profile in terms of detailed performance data on a variety of tests. Plus, as our papers report, unlike the aphasia subtypes, the factors have fairly clear and distinct lesion correlates.
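(For R users: density distributions like these can be drawn from the same hypothetical scores.long data frame as in the sketch above:)

    library(ggplot2)
    ggplot(scores.long, aes(x = Score, fill = Subtype)) +
      facet_wrap(~ Factor) +
      geom_density(alpha = 0.4) +
      theme_bw()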

In clinical contexts, one usually wants to maximize time spent on treatment, which often means trying to minimize time spent on assessment, so a compact summary of an individual's language profile can be very useful. Even so, I wonder whether continuous scores on cognitive-linguistic factors might provide more useful clinical guidance than an imperfect category label.


Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015). Neural Organization of Spoken Language Revealed by Lesion-Symptom Mapping. Nature Communications, 6 (6762), 1-9. DOI: 10.1038/ncomms7762.
Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2015.02.014.

Friday, April 17, 2015

Mapping the language system: Part 2

This is the second of a multi-part post about a pair of papers that just came out (Mirman et al., 2015, in press). Part 1 was about the behavioral data: we started with 17 behavioral measures from 99 participants with aphasia following left hemisphere stroke. Using factor analysis, we reduced those 17 measures to 4 underlying factors: Semantic Recognition, Speech Production, Speech Recognition, and Semantic Errors. For each of these factors, we then used voxel-based lesion-symptom mapping (VLSM) to identify the left hemisphere regions where stroke damage was associated with poorer performance. 
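For readers who want to try something along these lines, here is a minimal illustrative sketch of a factor analysis in base R; the data frame name (measures) is a placeholder, and this is not our exact analysis pipeline:

    # 'measures': hypothetical data frame, one row per participant,
    # one column per behavioral measure
    fit <- factanal(measures, factors = 4, rotation = "promax", scores = "regression")
    print(fit$loadings, cutoff = 0.3)           # which measures load on which factors
    factor.scores <- as.data.frame(fit$scores)  # one score per participant per factor

The per-participant factor scores are what then get fed into the lesion-symptom mapping step.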

Thursday, April 16, 2015

Mapping the language system: Part 1

My colleagues and I have a pair of papers coming out in Nature Communications and Neuropsychologia that I'm particularly excited about. The data came from Myrna Schwartz's long-running anatomical case series project in which behavioral and structural neuroimaging data were collected from a large sample of individuals with aphasia following left hemisphere stroke. We pulled together data from 17 measures of language-related performance for 99 participants; each of those participants also provided high-quality structural neuroimaging data to localize their stroke lesion. The behavioral measures ranged from phonological processing (phoneme discrimination, production of phonological errors during picture naming, etc.) to verbal and nonverbal semantic processing (synonym judgments, Camel and Cactus Test, production of semantic errors during picture naming, etc.). I have a lot to say about our project, so there will be a few posts about it. This first post will focus on the behavioral data.

Tuesday, March 3, 2015

When lexical competition becomes lexical cooperation

Lexical neighborhood effects are one of the most robust findings in spoken word recognition: words with many similar-sounding words ("neighbors") are recognized more slowly and less accurately than words with few neighbors. About 10 years ago, when I was just starting my post-doc training with Jim Magnuson, we wondered about semantic neighborhood effects. We found that things were less straightforward in semantics: near semantic neighbors slowed down visual word recognition, but distant semantic neighbors sped up visual word recognition (Mirman & Magnuson, 2008). I later found the same pattern in spoken word production (Mirman, 2011). Working with Whit Tabor, we developed a preliminary computational account. Later, when Qi Chen joined my lab at MRRI, we expanded this computational model to capture orthographic, phonological, and semantic neighborhood density effects in visual and spoken word recognition and spoken word production (Chen & Mirman, 2012). The key insight from our model was that neighbors exert both inhibitory and facilitative effects on target word processing, with the inhibitory effect dominating for strongly active neighbors and the facilitative effect dominating for weakly active neighbors.
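To make the shape of that claim concrete, here is a purely cartoon illustration in R; the functional forms below are invented for this sketch and are not the actual Chen & Mirman (2012) model:

    # cartoon only: invented functional forms, not the actual model
    activation   <- seq(0, 1, by = 0.01)  # how strongly a neighbor is activated
    facilitation <- 0.5 * activation      # suppose facilitation grows slowly with activation
    inhibition   <- activation^2          # and inhibition grows faster
    net <- facilitation - inhibition      # positive for weakly active neighbors, negative for strongly active ones
    plot(activation, net, type = "l",
         xlab = "Neighbor activation", ylab = "Net effect on target")
    abline(h = 0, lty = 2)

In a toy setup like this, the net effect of a neighbor flips from helpful to harmful as its activation increases, which is the qualitative pattern the model predicts.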

In a new paper soon to be published in Cognitive Science (Chen & Mirman, in press) we test a unique prediction from our model. The idea is that phonological neighborhood effects in spoken word recognition are so robust because phonological neighbors are consistently strongly activated during spoken word recognition. If we can reduce their activation by creating a context in which they are not among the likely targets, then their inhibitory effect will not just get smaller, it will become smaller than the facilitative effect, so the net result will be a flip to a facilitative effect. We tested this by using spoken word-to-picture matching with eye-tracking, more commonly known as the "visual world paradigm". When four (phonologically unrelated) pictures appear on the screen, they provide some semantic information about the likely target word. The longer they are on-screen before the spoken word begins, the more this semantic context will influence which lexical candidates will be activated. At one extreme, without any semantic context, we should see the standard inhibitory effect of phonological neighbors; at the other extreme, if only the pictured items are viable candidates, there should be no effect of phonological neighbors. Here is the cool part (if I may say so): at an intermediate point, the semantic context reduces phonological neighbor activation but doesn't eliminate it, so the neighbors will be weakly active and will produce a facilitative effect. 

We report simulations of our model concretely demonstrating this prediction and an experiment in which we manipulate the preview duration (how long the pictures are displayed before the spoken word starts) as a way of manipulating the strength of semantic context. The results were (mostly) consistent with this prediction. 
At 500ms preview (middle panel), there is a clear facilitative effect of neighborhood density: the target fixation proportions for high-density targets (red line) rise faster than for the low-density targets (blue line). This did not happen with either the shorter or the longer preview duration, and is not expected unless the preview provides semantic input that weakens activation of phonological neighbors, thus making their net effect facilitative rather than inhibitory.

I'm excited about this paper because "lexical competition" is such a core concept in spoken word recognition that it is hard to imagine neighborhood density having a facilitative effect, but that's what our model predicted and the eye-tracking results bore it out. This is one of those full-cycle cases where behavioral data led to a theory, which led to a computational model, which made new predictions, which were tested in a behavioral experiment. That's what I was trained to do and it feels good to have actually pulled it off.

As a final meta comment: we owe a big "Thank You" to Keith Apfelbaum, Sheila Blumstein, and Bob McMurray, whose 2011 paper was part of the inspiration for this study. Even more importantly, Keith and Bob shared first their data for our follow-up analyses, then their study materials to help us run our experiment. I think this kind of sharing is hugely important for having a science that truly builds and moves forward in a replicable way, but it is all too rare. Apfelbaum, Blumstein, and McMurray not only ran a good study, they also helped other people build on it, which multiplied their positive contribution to the field. I hope one day we can make this kind of sharing the standard in the field, but until then, I'll just appreciate the people who do it.


Apfelbaum, K.S., Blumstein, S.E., & McMurray, B. (2011). Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems. Psychonomic Bulletin & Review, 18 (1), 141-149. PMID: 21327343.
Chen, Q., & Mirman, D. (2012). Competition and cooperation among similar representations: Toward a unified account of facilitative and inhibitory effects of lexical neighbors. Psychological Review, 119 (2), 417-430. PMID: 22352357.
Chen, Q., & Mirman, D. (in press). Interaction between phonological and semantic representations: Time matters. Cognitive Science. PMID: 25155249.
Mirman, D. (2011). Effects of near and distant semantic neighbors on word production. Cognitive, Affective & Behavioral Neuroscience, 11 (1), 32-43. PMID: 21264640.
Mirman, D., & Magnuson, J.S. (2008). Attractor dynamics and semantic neighborhood density: Processing is slowed by near neighbors and speeded by distant neighbors. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34 (1), 65-79. PMID: 18194055.

Monday, February 23, 2015

How to learn R: A flow chart

I often find myself giving people suggestions about how to learn R, so I decided to put together a flow chart. This is geared toward typical psychology or cognitive science researchers planning to do basic data analysis in R. This is how to get started -- it won't make you an expert, but it should get you past your SPSS/Excel addiction. One day I'll expand it to include advanced topics.


Monday, February 2, 2015

My "Top 5 R Functions"

In preparation for an R Workgroup meeting, I started thinking about what would be my "Top 5 R Functions". I ruled out the functions for basic mechanics - save, load, mean, etc. - they're obviously critical, but every programming language has them, so there's nothing especially "R" about them. I also ruled out the fancy statistical analysis functions like (g)lmer -- most people (including me) start using R because they want to run those analyses, so it seemed a little redundant. I started using R because I wanted to do growth curve analysis, so it seems like a weak endorsement to say that I like R because it can do growth curve analysis. No, I like R because it makes (many) somewhat complex data operations really, really easy. Understanding how to take advantage of these R functions is what transformed my view of R from a purely functional one (I need to do analysis X and R has functions for doing analysis X) to seeing it as an all-purpose tool that allows me to do data processing, management, analysis, and visualization extremely quickly and easily. So, here are the 5 functions that did that for me (a small made-up demo pulling all five together follows the list):

  1. subset() for making subsets of data (natch)
  2. merge() for combining data sets in a smart and easy way
  3. melt() for converting from wide to long data formats
  4. dcast() for converting from long to wide data formats, and for making summary tables
  5. ddply() for doing split-apply-combine operations, which covers a huge swath of the most tricky data operations 
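Here they are together in a tiny made-up example (the data frames and column names below are invented purely for illustration; the real worked examples are in the RPubs notes mentioned next):

    # toy example with invented data, just to show the five functions together
    library(reshape2)  # melt() and dcast()
    library(plyr)      # ddply()

    rt.wide <- data.frame(subject = 1:4,
                          group = c("patient", "patient", "control", "control"),
                          easy = c(510, 525, 480, 470),
                          hard = c(600, 640, 530, 515))
    ages <- data.frame(subject = 1:4, age = c(61, 58, 60, 59))

    # merge: combine the two data sets by their shared 'subject' column
    rt.wide <- merge(rt.wide, ages, by = "subject")

    # subset: keep only the patients
    patients <- subset(rt.wide, group == "patient")

    # melt: wide to long (one row per subject per condition)
    rt.long <- melt(rt.wide, id.vars = c("subject", "group", "age"),
                    variable.name = "condition", value.name = "RT")

    # ddply: split-apply-combine, e.g., mean RT for each group x condition cell
    cell.means <- ddply(rt.long, .(group, condition), summarise, meanRT = mean(RT))

    # dcast: long back to wide, which also makes a compact summary table
    dcast(cell.means, group ~ condition, value.var = "meanRT")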
For anyone interested, I posted my R Workgroup notes on how to use these functions on RPubs. Side note: after a little configuration, I found it super easy to write these using knitr, "knit" them into a webpage, and post that page on RPubs.

Conspicuously missing from the above list is ggplot, which I think deserves a special lifetime achievement award for how it has transformed how I think about data exploration and data visualization. I'm planning that for the next R Workgroup meeting.