Showing posts with label individual differences. Show all posts

Tuesday, March 1, 2016

MAPPD 2.0

About 5 or 6 years ago, my colleagues at Moss Rehabilitation Research Institute and I made public a large set of behavioral data from language and cognitive tasks performed by people with aphasia. Our goal was to facilitate larger-scale research on spoken language processing and how it is impaired following left hemisphere stroke. We are pleased to announce that we have completed a thorough redesign of the Moss Aphasia Psycholinguistics Project Database (MAPPD) site. The MAPPD 2.0 interface is much simpler and easier to use, geared toward letting users download the data they want and analyze it themselves.

The core of this database is single-trial picture naming and word repetition data for over 300 participants (including 20 neurologically intact control participants) with detailed target word and response information. The database also contains basic demographic and clinical information for each participant with aphasia, as well as performance on a host of supplementary tests of speech perception, semantic cognition, short-term/working memory, and sentence comprehension. A more detailed description of the included tests, coding schemes, and usage suggestions is available in our original description of the database (Mirman et al., 2010) and in the site's documentation.
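As a toy illustration of the kind of single-trial analysis the download format supports, here is a short R sketch. The data frame, column names, and response coding below are made up for illustration only; they are not the actual MAPPD schema (see the site's documentation for that).

```r
# Made-up single-trial naming data: two hypothetical participants, three trials each.
naming <- data.frame(
  participant = rep(c("P01", "P02"), each = 3),
  response_type = c("correct", "semantic", "correct",
                    "correct", "phonological", "correct")
)
# Score each trial as correct (1) or an error (0)
naming$correct <- as.numeric(naming$response_type == "correct")
# Proportion correct per participant
acc <- aggregate(correct ~ participant, data = naming, FUN = mean)
print(acc)
```

With real MAPPD downloads, the same aggregate-by-participant pattern applies once the actual response codes are substituted.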

Monday, April 20, 2015

Aphasia factors vs. subtypes

One of the interesting things (to me anyway) that came out of our recent factor analysis project (Mirman et al., 2015, in press; see Part 1 and Part 2) is a way of reconsidering aphasia types in terms of psycholinguistic factors rather than the traditional clinical aphasia subtypes.

The traditional aphasia subtyping approach is to use a diagnostic test like the Western Aphasia Battery or the Boston Diagnostic Aphasia Examination to assign an individual with aphasia to one of several subtype categories: Anomic, Broca's, Wernicke's, Conduction, Transcortical Sensory, Transcortical Motor, or Global aphasia. This approach has several well-known problems (see, e.g., Caplan, 2011, in K. M. Heilman & E. Valenstein (Eds.), Clinical Neuropsychology, 5th Edition, Oxford University Press, pp. 22-41), including heterogeneous symptomatology (e.g., Broca's aphasia is defined by the co-occurrence of symptoms that can have different manifestations and multiple, possibly unrelated causes) and the relatively high proportion of "unclassifiable" or "mixed" aphasia cases that do not fit into a single subtype category. And although aphasia subtypes are thought to have clear lesion correlates (Broca's aphasia = lesion in Broca's area; Wernicke's aphasia = lesion in Wernicke's area), this correlation is weak at best: 15-40% of patients have lesion locations that are not predictable from their aphasia subtype.

Our factor analysis results provide a way to evaluate the classic aphasia syndromes with respect to data-driven performance clusters; that is, the factor scores. Our sample of 99 participants with aphasia had reasonable representation of four aphasia subtypes: Anomic (N=44), Broca's (N=27), Conduction (N=16), and Wernicke's (N=8); 1 participant with Global aphasia and 3 with Transcortical Motor aphasia were excluded because of the small sample sizes. The figure below shows, for each aphasia subtype group, the average (+/- SE) score on each of the four factors. Factor scores should be interpreted roughly like z-scores: positive means better-than-average performance, negative means poorer-than-average performance.

[Figure: mean (+/- SE) factor scores for each aphasia subtype group. Credit: Mirman et al. (in press), Neuropsychologia]
At first glance, the factor scores align with general descriptions of the aphasia subtypes: Anomic aphasia is relatively mild, so performance was generally better than average; participants with Broca's aphasia had production deficits (both phonological and semantic); participants with Conduction aphasia had phonological deficits (both speech recognition and speech production); and Wernicke's aphasia is more severe, so these participants were relatively impaired on all factors, most markedly the semantic recognition factor. However, these central tendencies hide the tremendous amount of overlap among the four aphasia subtype groups on each factor. This can be seen in the density distributions of exactly the same data:
As one example, consider the top left panel: the Wernicke's aphasia group clearly had the highest proportion of participants with poor semantic recognition, but some participants in that group were in the moderate range, overlapping with the other groups. Similarly, the other panels show that it would be relatively easy to find an individual in each subtype group who violates the expected pattern for that group (e.g., a participant with Conduction aphasia who has good speech recognition). This means that the group label only provides rough, probabilistic information about an individual's language abilities and is probably not very useful in a research context where we can typically characterize each participant's profile in terms of detailed performance data on a variety of tests. Plus, as our papers report, unlike the aphasia subtypes, the factors have fairly clear and distinct lesion correlates.
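For readers who want to try this kind of analysis themselves, here is a minimal R sketch of extracting factor scores with the built-in factanal function. The data are simulated (99 "participants", six made-up tests loading on two latent factors); this is purely illustrative and is not the actual test battery or factor structure from our study.

```r
# Simulate six tests driven by two latent factors
set.seed(1)
n <- 99
f1 <- rnorm(n)  # latent factor 1
f2 <- rnorm(n)  # latent factor 2
tests <- cbind(f1 + rnorm(n, sd = 0.5), f1 + rnorm(n, sd = 0.5),
               f1 + rnorm(n, sd = 0.5), f2 + rnorm(n, sd = 0.5),
               f2 + rnorm(n, sd = 0.5), f2 + rnorm(n, sd = 0.5))
# Maximum-likelihood factor analysis with regression-method factor scores
fa <- factanal(tests, factors = 2, scores = "regression")
# Factor scores are standardized: positive = better than the sample average;
# regression scores have mean 0 by construction
print(round(colMeans(fa$scores), 3))
```

Real analyses would of course start from the observed test battery rather than simulated data, and would compare solutions with different numbers of factors.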

In clinical contexts, one usually wants to maximize time spent on treatment, which often means minimizing time spent on assessment, so a compact summary of an individual's language profile can be very useful. Even so, I wonder whether continuous scores on cognitive-linguistic factors might provide more useful clinical guidance than an imperfect category label.


Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015). Neural Organization of Spoken Language Revealed by Lesion-Symptom Mapping. Nature Communications, 6 (6762), 1-9. DOI: 10.1038/ncomms7762.
Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2015.02.014.

Monday, December 9, 2013

Language in developmental and acquired disorders

As I mentioned in an earlier post, last June I had the great pleasure and honor of participating in a discussion meeting on Language in Developmental and Acquired Disorders hosted by the Royal Society and organized by Dorothy Bishop, Kate Nation, and Karalyn Patterson. Among the many wonderful things about this meeting was that it brought together people who study similar kinds of language deficits in very different populations -- children with developmental language deficits such as dyslexia and older adults with acquired language deficits such as aphasia. Today, the special issue of Philosophical Transactions of the Royal Society B: Biological Sciences containing articles written by the meeting's speakers was published online (Table of Contents).


Monday, November 12, 2012

Complementary taxonomic and thematic semantic systems

I am happy to report that my paper with Kristen Graziano (a Research Assistant in my lab) showing cross-task individual differences in strength of taxonomic vs. thematic semantic relations is in this month's issue of the Journal of Experimental Psychology: General (Mirman & Graziano, 2012a). This paper is part of a cluster of four articles developing the idea that there is a functional and neural dissociation between taxonomic and thematic semantic systems in the human brain.  

First, some definitions: by "taxonomic" relations I mean concepts whose similarity is based on shared features, which is strongly related to shared category membership (for example, dogs and bears share many features, in particular, the cluster of features that categorize them as mammals). By "thematic" relations I mean concepts whose similarity is based on frequent co-occurrence in situations or events (for example, dogs and leashes do not share features and are not members of the same category, but both are frequently involved in the taking-the-dog-for-a-walk event or situation).

Regarding the functional dissociation, I described in an earlier post our finding (Kalenine et al., 2012) that thematic relations are activated faster than taxonomic relations (at least for manipulable artifacts). In this most recent paper we show that the relative degree of activation of taxonomic vs. thematic relations during spoken word comprehension predicts, at the individual participant level, whether that participant will tend to pick taxonomic or thematic relations in an explicit similarity judgement task. In other words, for some people taxonomic relations are more salient and for other people thematic relations are more salient, and this difference is consistent across two very different task contexts.

Regarding the neural dissociation, in a voxel-based lesion-symptom mapping study of semantic picture naming errors (i.e., picture naming errors that were semantically related to the target), we found that lesions in the anterior temporal lobe were associated with increased taxonomically-related errors relative to thematically-related errors, whereas lesions in the posterior superior temporal lobe and inferior parietal lobe (a region we refer to as "temporo-parietal cortex" or TPC) were associated with the reverse pattern: increased thematically-related errors relative to taxonomically-related errors (Schwartz et al., 2011). In a follow-up study, we found that individuals with TPC damage showed reduced implicit activation of thematic relations, but not taxonomic relations, during spoken word comprehension (Mirman & Graziano, 2012b).

I think these findings add some important pieces to the puzzle of semantic cognition and we're now working on a theoretical and computational framework for explaining these complementary semantic systems.

Kalénine S., Mirman D., Middleton E.L., & Buxbaum L.J. (2012). Temporal dynamics of activation of thematic and functional knowledge during conceptual processing of manipulable artifacts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38 (5), 1274-1295 PMID: 22449134
Mirman D., & Graziano K.M. (2012a). Individual differences in the strength of taxonomic versus thematic relations. Journal of Experimental Psychology: General, 141 (4), 601-609 PMID: 22201413
Mirman D., & Graziano K.M. (2012b). Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension. Neuropsychologia, 50 (8), 1990-1997 PMID: 22571932
Schwartz M.F., Kimberg D.Y., Walker G.M., Brecher A., Faseyitan O.K., Dell G.S., Mirman D., & Coslett H.B. (2011). Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 108 (20), 8520-8524 PMID: 21540329

Monday, August 6, 2012

Crawford-Howell (1998) t-test for case-control comparisons

Cognitive neuropsychologists (like me) often need to compare a single case to a small control group, but the standard two-sample t-test does not work for this because the case is only one observation. Several different approaches have been proposed and in a new paper just published in Cortex, Crawford and Garthwaite (2012) demonstrate that the Crawford-Howell (1998) t-test is a better approach (in terms of controlling Type I error rate) than other commonly-used alternatives. As I understand it, the core issue is that with a typical t-test, you're testing whether two means are different (or, for a one-sample t-test, whether one mean is different from some value), so the more observations you have, the better your estimate of the mean(s). In a case-control comparison you want to know how likely it is that the case value came from the distribution of the control data, so even if your control group is very large, the variability is still important -- knowing that your case is below the control mean is not enough, you want to know that it is below 95% (for example) of the controls. That is why, as Crawford and Garthwaite show, Type I error increases with control sample size for the other tests, but not for the Crawford-Howell test.
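To make that point concrete, here is a small Monte Carlo sketch of my own (not from the Crawford and Garthwaite paper): the "case" is drawn from the same normal distribution as the controls, so every rejection is a false positive, and the Crawford-Howell formula is compared to a naive test that treats the case value like a sample mean (dividing by the standard error of the control mean).

```r
# Estimate Type I error rates for a case-control comparison by simulation
typeI <- function(n, nsim = 2000) {
  reject_ch <- reject_naive <- logical(nsim)
  for (i in seq_len(nsim)) {
    control <- rnorm(n)
    case <- rnorm(1)  # case drawn from the same population as the controls
    m <- mean(control); s <- sd(control)
    t_ch <- (case - m) / (s * sqrt((n + 1) / n))  # Crawford-Howell
    t_naive <- (case - m) / (s / sqrt(n))         # naive: treats case like a sample mean
    reject_ch[i] <- 2 * (1 - pt(abs(t_ch), df = n - 1)) < .05
    reject_naive[i] <- 2 * (1 - pt(abs(t_naive), df = n - 1)) < .05
  }
  c(CrawfordHowell = mean(reject_ch), naive = mean(reject_naive))
}
set.seed(1)
print(typeI(5))   # Crawford-Howell stays near .05; the naive test is far above it
print(typeI(50))  # the naive test gets even worse as the control group grows
```

The naive test's false-positive rate grows with control sample size because shrinking the standard error of the control mean does nothing to shrink the variability of a single case, which is exactly the issue the Crawford-Howell correction addresses.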

It is nice to have this method validated by Monte Carlo simulation and I intend to use it next time the need arises. I’ve put together a simple R implementation of it (it takes a single value as case and a vector of values for control and returns a data frame containing the t-value, degrees of freedom, and p-value):
CrawfordHowell <- function(case, control){
  # Crawford-Howell (1998) t-test for comparing a single case to a control sample:
  # case is a single score, control is a numeric vector of control scores
  tval <- (case - mean(control)) / (sd(control) * sqrt((length(control) + 1) / length(control)))
  degfree <- length(control) - 1
  pval <- 2 * (1 - pt(abs(tval), df = degfree))  # two-tailed p-value
  data.frame(t = tval, df = degfree, p = pval)
}
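For reference, the formula implemented above is t = (case − mean(control)) / (sd(control) × sqrt((n+1)/n)) with n−1 degrees of freedom, where n is the number of controls. A quick usage example with made-up numbers:

```r
# Hypothetical scores: one case compared against 10 controls
controls <- c(10, 12, 9, 11, 13, 10, 12, 11, 10, 12)
CrawfordHowell(case = 5, control = controls)
# t is about -4.59 with df = 9 and p < .01,
# so the case is reliably below the control distribution
```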

Crawford, J.R., & Howell, D.C. (1998). Comparing an Individual’s Test Score Against Norms Derived from Small Samples. The Clinical Neuropsychologist, 12 (4), 482-486 DOI: 10.1076/clin.12.4.482.7241
Crawford, J. R., & Garthwaite, P. H. (2012). Single-case research in neuropsychology: A comparison of five forms of t-test for comparing a case to controls. Cortex, 48 (8), 1009-1016 DOI: 10.1016/j.cortex.2011.06.021