Thursday, September 3, 2015

Reproducibility project: A front row seat

A recent paper in Science reports the results of a large-scale effort to test reproducibility in psychological science. The results have caused much discussion (as well they should) in both general-public and scientific forums. I thought I would offer my perspective as the lead author of one of the studies included in the reproducibility analysis. I had heard about the project even before being contacted to participate, and one of the things that appealed to me about it was that the organizers were trying to be unbiased in their selection of studies for replication: all papers published in three prominent journals in 2008. Jim Magnuson and I had published a paper in one of those journals (Journal of Experimental Psychology: Learning, Memory, & Cognition) in 2008 (Mirman & Magnuson, 2008), so I figured I would hear from them sooner or later.

Friday, June 19, 2015

Zeno's paradox of teaching

I've wrapped up my Spring term teaching and received my teaching evals. Now that I've (finally) had a chance to teach the same class a few times, I am starting to believe in what I call Zeno's Paradox of Teaching: every time I teach a class, my improvement in teaching quality is half the distance between the quality of the last time I taught it and my maximum ability to teach that material.
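For the mathematically inclined, here is a rough back-of-the-envelope way to write that down (my own simplification, nothing rigorous): if Q_n is the quality of the n-th time I teach the class and M is the best I could ever teach that material, then Q_{n+1} = Q_n + (M - Q_n)/2, which works out to Q_n = M - (M - Q_0)/2^n. Quality improves quickly at first, then by ever-smaller increments, and never quite reaches M.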
If I'm right about this, then I think it means that it's important to think long-term when approaching teaching:
  1. New faculty (like me) should start by teaching primarily core courses, ones that are offered every year, have good support materials, and provide a consistent opportunity for improvement. Specialized seminars can be fun to teach, but if they're not going to be offered every year, then improvement will be slow.
  2. Don't drive yourself (myself) crazy trying to teach the "perfect" class on your (my) first time teaching. Try to do a good job and next time try to improve on it as much as possible.
  3. Zeno's paradox means that I'll never teach quite as well as I think I could. The positive message is that one should keep trying to come up with creative ways to improve a course. The warning is that perfection is not an appropriate standard, so don't be too hard on yourself for failing to reach it.

Monday, June 8, 2015

A little growth curve analysis Q&A

I had an email exchange with Jeff Malins, who asked several questions about growth curve analysis. I often get questions of this sort and Jeff agreed to let me post excerpts from our (email) conversation. The following has been lightly edited for clarity and to be more concise.

Sunday, June 7, 2015

Job Opening: MRRI Institute Investigator (all levels) -- Language and Cognition in Neuropsychological Populations

Moss Rehabilitation Research Institute (MRRI) seeks an Institute Investigator to join our historic program in language and cognition and help build the next generation of translational neuroscience/neurorehab research.

The successful applicant is expected to conduct an independent program of research and to participate in research collaborations within and outside MRRI. The ideal candidate is a cognitive scientist, clinical scientist, neuroscientist, or speech-language pathologist who studies language or related cognitive disorders, and who may also conduct research in translating basic science findings to improve clinical practice. Preference will be given to candidates who complement the faculty's interests in areas such as language processing, language learning, semantics, action planning, cognitive control, neuromodulation, neuroplasticity, and/or lesion-symptom mapping.

Monday, April 20, 2015

Plotting Factor Analysis Results

A recent factor analysis project (as discussed previously here, here, and here) gave me an opportunity to experiment with some different ways of visualizing highly multidimensional data sets. Factor analysis results are often presented in tables of factor loadings, which are good when you want the numerical details but bad when you want to convey larger-scale patterns: loadings of 0.91 and 0.19 look similar in a table but very different in a graph. The detailed code is posted on RPubs because embedding the code, output, and figures in a webpage is much, much easier using RStudio's markdown functions. That version shows how to get these example data and how to format them correctly for these plots. Here I will just post the key plot commands and the figures those commands produce.
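For orientation, here is a rough sketch of the kind of loadings heatmap I mean, using ggplot2 with made-up placeholder loadings rather than the actual data; the real, working code is in the RPubs post.

library(ggplot2)
library(reshape2)

# Toy loadings matrix: 17 measures by 4 factors (random placeholder values,
# not the actual data)
set.seed(1)
loadings <- matrix(runif(17 * 4, -1, 1), nrow = 17,
                   dimnames = list(paste0("Measure", 1:17), paste0("Factor", 1:4)))
# Convert to long format for ggplot
loadings_long <- melt(loadings, varnames = c("Measure", "Factor"),
                      value.name = "Loading")

# Heatmap: colour encodes the sign and magnitude of each loading,
# so big loadings jump out in a way they don't in a table
ggplot(loadings_long, aes(Factor, Measure, fill = Loading)) +
  geom_tile() +
  scale_fill_gradient2(low = "red", mid = "white", high = "blue", limits = c(-1, 1)) +
  theme_minimal()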

Aphasia factors vs. subtypes

One of the interesting things (to me anyway) that came out of our recent factor analysis project (Mirman et al., 2015, in press; see Part 1 and Part 2) is a way of reconsidering aphasia types in terms of psycholinguistic factors rather than the traditional clinical aphasia subtypes.

The traditional aphasia subtyping approach is to use a diagnostic test such as the Western Aphasia Battery or the Boston Diagnostic Aphasia Examination to assign an individual with aphasia to one of several subtype categories: Anomic, Broca's, Wernicke's, Conduction, Transcortical Sensory, Transcortical Motor, or Global aphasia. This approach has several well-known problems (see, e.g., Caplan, 2011, in K. M. Heilman & E. Valenstein (Eds.), Clinical Neuropsychology, 5th Edition, Oxford University Press, pp. 22-41), including heterogeneous symptomatology (e.g., Broca's aphasia is defined by co-occurrence of symptoms that can have different manifestations and multiple, possibly unrelated causes) and the relatively high proportion of "unclassifiable" or "mixed" aphasia cases that do not fit into a single subtype category. And although aphasia subtypes are thought to have clear lesion correlates (Broca's aphasia = lesion in Broca's area; Wernicke's aphasia = lesion in Wernicke's area), this correlation is weak at best: 15-40% of patients have lesion locations that are not predictable from their aphasia subtype.

Our factor analysis results provide a way to evaluate the classic aphasia syndromes with respect to data-driven performance clusters; that is, the factor scores. Our sample of 99 participants with aphasia had reasonable representation of four aphasia subtypes: Anomic (N=44), Broca's (N=27), Conduction (N=16), and Wernicke's (N=8); the 1 Global and 3 Transcortical Motor cases are not included here due to small sample sizes. The figure below shows, for each aphasia subtype group, the average (+/- SE) score on each of the four factors. Factor scores should be interpreted roughly like z-scores: positive means better-than-average performance, negative means poorer-than-average performance.
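For readers who want to make a similar figure, here is a rough ggplot2 sketch. It assumes a long-format data frame called scores with columns Subtype, Factor, and Score; those are placeholder names, not the code used for the actual figure.

library(ggplot2)

# Mean +/- SE factor score for each subtype group, one cluster of points per factor
ggplot(scores, aes(x = Factor, y = Score, colour = Subtype)) +
  stat_summary(fun.data = mean_se, geom = "pointrange",
               position = position_dodge(width = 0.5)) +
  geom_hline(yintercept = 0, linetype = "dashed") +  # 0 = average performance
  labs(y = "Factor score (mean +/- SE)") +
  theme_bw()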

Credit: Mirman et al. (in press), Neuropsychologia
At first glance, the factor scores align with general descriptions of the aphasia subtypes: Anomic is a relatively mild aphasia, so performance was generally better than average; participants with Broca's aphasia had production deficits (both phonological and semantic); participants with Conduction aphasia had phonological deficits (both speech recognition and speech production); and Wernicke's aphasia is a more severe aphasia, so these participants had relatively impaired performance on all factors, which was particularly pronounced for the semantic recognition factor. However, these central tendencies hide a tremendous amount of overlap among the four aphasia subtype groups on each factor. This can be seen in the density distributions of exactly the same data:
As one example, consider the top left panel: the Wernicke's aphasia group clearly had the highest proportion of participants with poor semantic recognition, but some participants in that group were in the moderate range, overlapping with the other groups. Similarly, the other panels show that it would be relatively easy to find an individual in each subtype group who violates the expected pattern for that group (e.g., a participant with Conduction aphasia who has good speech recognition). This means that the group label only provides rough, probabilistic information about an individual's language abilities and is probably not very useful in a research context where we can typically characterize each participant's profile in terms of detailed performance data on a variety of tests. Plus, as our papers report, unlike the aphasia subtypes, the factors have fairly clear and distinct lesion correlates.
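The density version of the figure is equally simple to make; here is a sketch assuming the same placeholder scores data frame as above.

library(ggplot2)

# One panel per factor, one density curve per subtype group,
# showing the full distributions rather than just the means
ggplot(scores, aes(x = Score, fill = Subtype)) +
  geom_density(alpha = 0.4) +
  facet_wrap(~ Factor) +
  theme_bw()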

In clinical contexts, one usually wants to maximize time spent on treatment, which often means trying to minimize time spent on assessment, so a compact summary of an individual's language profile can be very useful. Even so, I wonder whether continuous scores on cognitive-linguistic factors might provide more useful clinical guidance than an imperfect category label.

Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015). Neural organization of spoken language revealed by lesion-symptom mapping. Nature Communications, 6 (6762), 1-9. DOI: 10.1038/ncomms7762.
Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2015.02.014.

Friday, April 17, 2015

Mapping the language system: Part 2

This is the second of a multi-part post about a pair of papers that just came out (Mirman et al., 2015, in press). Part 1 was about the behavioral data: we started with 17 behavioral measures from 99 participants with aphasia following left hemisphere stroke. Using factor analysis, we reduced those 17 measures to 4 underlying factors: Semantic Recognition, Speech Production, Speech Recognition, and Semantic Errors. For each of these factors, we then used voxel-based lesion-symptom mapping (VLSM) to identify the left hemisphere regions where stroke damage was associated with poorer performance.
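For readers who want to try something similar with their own data, here is a minimal sketch of this kind of dimensionality reduction using base R's factanal. This is not our actual analysis code; the data frame name, the rotation, and the scoring method below are placeholder assumptions.

# behav is assumed to be a data frame with the 17 behavioral measures,
# one row per participant
fa_fit <- factanal(na.omit(behav), factors = 4, rotation = "varimax",
                   scores = "regression")
print(fa_fit$loadings, cutoff = 0.3)  # which measures load on which factors
factor_scores <- fa_fit$scores        # per-participant scores, one column per factor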