
Monday, April 20, 2015

Aphasia factors vs. subtypes

One of the interesting things (to me anyway) that came out of our recent factor analysis project (Mirman et al., 2015, in press; see Part 1 and Part 2) is a way of reconsidering aphasia types in terms of psycholinguistic factors rather than the traditional clinical aphasia subtypes.

The traditional aphasia subtyping approach is to use a diagnostic test like the Western Aphasia Battery or the Boston Diagnostic Aphasia Examination to assign an individual with aphasia to one of several subtype categories: Anomic, Broca's, Wernicke's, Conduction, Transcortical Sensory, Transcortical Motor, or Global aphasia. This approach has several well-known problems (see, e.g., Caplan, 2011, in K.M. Heilman & E. Valenstein (Eds.), Clinical Neuropsychology, 5th Edition, Oxford University Press, pp. 22-41), including heterogeneous symptomatology (e.g., Broca's aphasia is defined by the co-occurrence of symptoms that can have different manifestations and multiple, possibly unrelated causes) and the relatively high proportion of "unclassifiable" or "mixed" aphasia cases that do not fit into a single subtype category. And although aphasia subtypes are thought to have clear lesion correlates (Broca's aphasia = lesion in Broca's area; Wernicke's aphasia = lesion in Wernicke's area), this correlation is weak at best (15-40% of patients have lesion locations that are not predictable from their aphasia subtype).

Our factor analysis results provide a way to evaluate the classic aphasia syndromes with respect to data-driven performance clusters; that is, the factor scores. Our sample of 99 participants with aphasia had reasonable representation of four aphasia subtypes: Anomic (N=44), Broca's (N=27), Conduction (N=16), and Wernicke's (N=8); the 1 participant with Global aphasia and the 3 with Transcortical Motor aphasia are not included here due to small sample sizes. The figure below shows, for each aphasia subtype group, the average (+/- SE) score on each of the four factors. Factor scores should be interpreted roughly like z-scores: positive means better-than-average performance, negative means poorer-than-average performance.


[Figure: average (+/- SE) score on each factor by aphasia subtype. Credit: Mirman et al. (in press), Neuropsychologia]
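For readers who want the mechanics, here is a minimal sketch of how such a summary could be computed; the data file and column names are hypothetical placeholders, not the actual variables from the papers.

```python
import pandas as pd

# Hypothetical data: one row per participant, with an aphasia subtype label
# and scores on the four factors (file and column names are assumptions).
df = pd.read_csv("factor_scores.csv")
factors = ["sem_recog", "speech_prod", "speech_recog", "sem_errors"]

# Mean and standard error of each factor score within each subtype group
summary = df.groupby("subtype")[factors].agg(["mean", "sem"])
print(summary.round(2))
```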
At first glance, the factor scores align with general descriptions of the aphasia subtypes: Anomic is a relatively mild aphasia so performance was generally better than average, participants with Broca's aphasia had production deficits (both phonological and semantic), participants with Conduction aphasia had phonological deficits (both speech recognition and speech production), and Wernicke's aphasia is a more severe aphasia so these participants had relatively impaired performance on all factors, particularly the semantic recognition factor. However, these central tendencies hide the tremendous amount of overlap among the four aphasia subtype groups for each factor. This can be seen in the density distributions of exactly the same data:

[Figure: density distributions of each factor score, plotted separately for each aphasia subtype group]
As one example, consider the top left panel: the Wernicke's aphasia group clearly had the highest proportion of participants with poor semantic recognition, but some participants in that group were in the moderate range, overlapping with the other groups. Similarly, the other panels show that it would be relatively easy to find an individual in each subtype group who violates the expected pattern for that group (e.g., a participant with Conduction aphasia who has good speech recognition). This means that the group label only provides rough, probabilistic information about an individual's language abilities and is probably not very useful in a research context where we can typically characterize each participant's profile in terms of detailed performance data on a variety of tests. Plus, as our papers report, unlike the aphasia subtypes, the factors have fairly clear and distinct lesion correlates.
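Density panels like the ones described above could be drawn with a few lines of plotting code; again, a sketch with hypothetical variable names:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Same hypothetical data as in the previous sketch
df = pd.read_csv("factor_scores.csv")
factors = ["sem_recog", "speech_prod", "speech_recog", "sem_errors"]

# One panel per factor; overlapping curves show the between-group overlap
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, factor in zip(axes.flat, factors):
    sns.kdeplot(data=df, x=factor, hue="subtype", common_norm=False, ax=ax)
    ax.set_title(factor)
plt.tight_layout()
plt.show()
```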

In clinical contexts, one usually wants to maximize time spent on treatment, which often means trying to minimize time spent on assessment, so a compact summary of an individual's language profile can be very useful. Even so, I wonder if continuous scores on cognitive-linguistic factors might provide more useful clinical guidance than an imperfect category label.


Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015). Neural organization of spoken language revealed by lesion-symptom mapping. Nature Communications, 6(6762), 1-9. DOI: 10.1038/ncomms7762.
Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2015.02.014.

Friday, April 17, 2015

Mapping the language system: Part 2

This is the second of a multi-part post about a pair of papers that just came out (Mirman et al., 2015, in press). Part 1 was about the behavioral data: we started with 17 behavioral measures from 99 participants with aphasia following left hemisphere stroke. Using factor analysis, we reduced those 17 measures to 4 underlying factors: Semantic Recognition, Speech Production, Speech Recognition, and Semantic Errors. For each of these factors, we then used voxel-based lesion-symptom mapping (VLSM) to identify the left hemisphere regions where stroke damage was associated with poorer performance. 
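To make the VLSM logic concrete, here is a deliberately simplified sketch: a mass-univariate test at each voxel comparing the factor scores of participants with versus without damage at that voxel. The matrix shapes, minimum group size, and use of a t-test are illustrative assumptions; real VLSM analyses typically add lesion-volume covariates and permutation-based correction for multiple comparisons.

```python
import numpy as np
from scipy.stats import ttest_ind

# Conceptual sketch of voxel-based lesion-symptom mapping (VLSM), not the
# actual pipeline from the papers. `lesions` is a binary participants-x-voxels
# matrix (1 = voxel lesioned); `scores` holds one factor score per participant.
rng = np.random.default_rng(0)
n_participants, n_voxels = 99, 5000
lesions = rng.integers(0, 2, size=(n_participants, n_voxels))
scores = rng.normal(size=n_participants)

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned = scores[lesions[:, v] == 1]
    spared = scores[lesions[:, v] == 0]
    if min(len(lesioned), len(spared)) >= 10:  # arbitrary minimum group size
        # Positive t = lower scores in the lesioned group (a damage effect)
        t_map[v] = ttest_ind(spared, lesioned).statistic
```

The resulting t-map is then thresholded (with appropriate correction) to identify regions where damage reliably predicts poorer performance on that factor.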

Thursday, April 16, 2015

Mapping the language system: Part 1

My colleagues and I have a pair of papers coming out in Nature Communications and Neuropsychologia that I'm particularly excited about. The data came from Myrna Schwartz's long-running anatomical case series project in which behavioral and structural neuroimaging data were collected from a large sample of individuals with aphasia following left hemisphere stroke. We pulled together data from 17 measures of language-related performance for 99 participants; each participant also provided high-quality structural neuroimaging data to localize their stroke lesion. The behavioral measures ranged from phonological processing (phoneme discrimination, production of phonological errors during picture naming, etc.) to verbal and nonverbal semantic processing (synonym judgments, Camel and Cactus Test, production of semantic errors during picture naming, etc.). I have a lot to say about our project, so there will be a few posts about it. This first post will focus on the behavioral data.
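As a rough illustration of the dimensionality-reduction step, here is a minimal sketch; the file name, column layout, and the use of scikit-learn's FactorAnalysis with varimax rotation are my assumptions for illustration, not necessarily the procedure used in the papers.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical input: 99 participants x 17 behavioral measures
behavior = pd.read_csv("behavioral_measures.csv")

# Reduce 17 measures to 4 latent factors (rotation choice is an assumption)
fa = FactorAnalysis(n_components=4, rotation="varimax")
factor_scores = fa.fit_transform(behavior)  # 99 x 4 matrix of factor scores

# Loadings show which measures pattern together on each factor
loadings = pd.DataFrame(fa.components_.T, index=behavior.columns,
                        columns=["F1", "F2", "F3", "F4"])
print(loadings.round(2))
```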

Monday, December 9, 2013

Language in developmental and acquired disorders

As I mentioned in an earlier post, last June I had the great pleasure and honor of participating in a discussion meeting on Language in Developmental and Acquired Disorders hosted by the Royal Society and organized by Dorothy Bishop, Kate Nation, and Karalyn Patterson. Among the many wonderful things about this meeting was that it brought together people who study similar kinds of language deficit issues but in very different populations -- children with developmental language deficits such as dyslexia and older adults with acquired language deficits such as aphasia. Today, the special issue of Philosophical Transactions of the Royal Society B: Biological Sciences containing articles written by the meeting's speakers was published online (Table of Contents).


Tuesday, October 15, 2013

Graduate student positions available at Drexel University

The Applied Cognitive and Brain Sciences (ACBS) program at Drexel University invites applications for Ph.D. students to begin in the Fall of 2014. Faculty research interests in the ACBS program span the full range from basic to applied science in Cognitive Psychology, Cognitive Neuroscience, and Cognitive Engineering, with particular faculty expertise in computational modeling and electrophysiology. Accepted students will work closely with their mentor in a research-focused setting, housed in a newly-renovated, state-of-the-art facility featuring spacious graduate student offices and collaborative workspaces. Graduate students will also have the opportunity to collaborate with faculty in Clinical Psychology, the School of Biomedical Engineering and Health Sciences, the College of Computing and Informatics, the College of Engineering, the School of Medicine, and the University's new Expressive and Creative Interaction Technologies (ExCITe) Center.

Specific faculty members seeking graduate students, and possible research topics after the jump.

Friday, December 14, 2012

Lateralization of word and face processing

A few weeks ago I was at the annual meeting of the Psychonomic Society where, among other interesting talks, I heard a great one by Marlene Behrmann about her recent work showing that lateralization of visual word recognition drives lateralization of face recognition. Lateralization of word and face processing are among the most classic findings in cognitive neuroscience: in adults, regions in the inferior temporal lobe in the left hemisphere appear to be specialized for recognizing visual (i.e., printed) words and the same regions in the right hemisphere appear to be specialized for recognizing faces. Marlene and her collaborators (David Plaut, Eva Dundas, Adrian Nestor, and others) have shown that these specializations are linked and that the left hemisphere specialization for words seems to drive the right hemisphere specialization for faces. It's a nice combination of: 
  1. Behavioral experiments showing that lateralization for words develops before lateralization for faces, and that reading ability predicts degree of lateralization for faces (Dundas, Plaut, & Behrmann, 2012).
  2. ERP evidence also showing earlier development of lateralization for words than for faces.
  3. Computational modeling showing how this specialization could emerge without pre-defined modules (Plaut & Behrmann, 2011).
  4. Functional imaging evidence that the lateralization is relative: the right fusiform gyrus is more involved in face processing, but the left is involved also (Nestor, Plaut, & Behrmann, 2011).
It's a beautiful example of how different methods can come together to provide a more complete picture of cognitive and neural function.

UPDATE
Less than one week after I posted this, a new paper by Behrmann and Plaut appeared (in press, Cerebral Cortex, DOI: 10.1093/cercor/bhs390), reporting further evidence, this time from cognitive neuropsychology, that lateralization of face and word processing is relative. They tested a group of individuals with left hemisphere damage and deficits in word recognition ("pure alexia") and a group of individuals with right hemisphere damage and deficits in face recognition ("prosopagnosia"). The individuals with pure alexia exhibited mild but reliable face recognition deficits and the individuals with prosopagnosia exhibited mild but reliable word recognition deficits.

Dundas, E.M., Plaut, D.C., & Behrmann, M. (2012). The joint development of hemispheric lateralization for words and faces. Journal of Experimental Psychology: General. PMID: 22866684. DOI: 10.1037/a0029503.

Nestor, A., Plaut, D.C., & Behrmann, M. (2011). Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis. Proceedings of the National Academy of Sciences, 108(24), 9998-10003. PMID: 21628569.

Plaut, D.C., & Behrmann, M. (2011). Complementary neural representations for faces and words: A computational exploration. Cognitive Neuropsychology, 28(3-4), 251-275. PMID: 22185237.

Monday, November 12, 2012

Complementary taxonomic and thematic semantic systems

I am happy to report that my paper with Kristen Graziano (a Research Assistant in my lab) showing cross-task individual differences in strength of taxonomic vs. thematic semantic relations is in this month's issue of the Journal of Experimental Psychology: General (Mirman & Graziano, 2012a). This paper is part of a cluster of four articles developing the idea that there is a functional and neural dissociation between taxonomic and thematic semantic systems in the human brain.  

First, some definitions: by "taxonomic" relations I mean concepts whose similarity is based on shared features, which is strongly related to shared category membership (for example, dogs and bears share many features, in particular, the cluster of features that categorize them as mammals). By "thematic" relations I mean concepts whose similarity is based on frequent co-occurrence in situations or events (for example, dogs and leashes do not share features and are not members of the same category, but both are frequently involved in the taking-the-dog-for-a-walk event or situation).
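To make the distinction concrete, here is a toy sketch of how the two kinds of relatedness could be quantified; the feature vectors and events are entirely hypothetical, not data or measures from the papers.

```python
import numpy as np

# Taxonomic similarity: overlap of shared features, here the cosine
# similarity of (hypothetical) binary feature vectors.
features = {
    "dog":   np.array([1, 1, 1, 0]),  # [has_fur, animate, barks, artifact]
    "bear":  np.array([1, 1, 0, 0]),
    "leash": np.array([0, 0, 0, 1]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(features["dog"], features["bear"]))   # high: same category
print(cosine(features["dog"], features["leash"]))  # zero: no shared features

# Thematic relatedness: co-occurrence in (hypothetical) events or situations
events = [{"dog", "leash", "walk"}, {"dog", "bone"}, {"bear", "forest"}]
print(sum({"dog", "leash"} <= e for e in events))  # co-occur despite zero overlap
```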

Regarding the functional dissociation, I described in an earlier post our finding (Kalénine et al., 2012) that thematic relations are activated faster than taxonomic relations (at least for manipulable artifacts). In this most recent paper we show that the relative degree of activation of taxonomic vs. thematic relations during spoken word comprehension predicts, at the individual participant level, whether that participant will tend to pick taxonomic or thematic relations in an explicit similarity judgment task. In other words, for some people, taxonomic relations are more salient and for other people thematic relations are more salient, and this difference is consistent across two very different task contexts.

Regarding the neural dissociation: in a voxel-based lesion-symptom mapping study of semantic picture naming errors (i.e., picture naming errors that were semantically related to the target), we found that lesions in the anterior temporal lobe were associated with increased taxonomically-related errors relative to thematically-related errors, whereas lesions in the posterior superior temporal lobe and inferior parietal lobe (a region we refer to as "temporo-parietal cortex" or TPC) were associated with the reverse pattern (Schwartz et al., 2011). In a follow-up study, we found that individuals with TPC damage showed reduced implicit activation of thematic relations, but not taxonomic relations, during spoken word comprehension (Mirman & Graziano, 2012b).

I think these findings add some important pieces to the puzzle of semantic cognition and we're now working on a theoretical and computational framework for explaining these complementary semantic systems.

Kalénine, S., Mirman, D., Middleton, E.L., & Buxbaum, L.J. (2012). Temporal dynamics of activation of thematic and functional knowledge during conceptual processing of manipulable artifacts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(5), 1274-1295. PMID: 22449134.
Mirman, D., & Graziano, K.M. (2012a). Individual differences in the strength of taxonomic versus thematic relations. Journal of Experimental Psychology: General, 141(4), 601-609. PMID: 22201413.
Mirman, D., & Graziano, K.M. (2012b). Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension. Neuropsychologia, 50(8), 1990-1997. PMID: 22571932.
Schwartz, M.F., Kimberg, D.Y., Walker, G.M., Brecher, A., Faseyitan, O.K., Dell, G.S., Mirman, D., & Coslett, H.B. (2011). Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain. Proceedings of the National Academy of Sciences, 108(20), 8520-8524. PMID: 21540329.

Monday, October 29, 2012

Embodied cognition: Theoretical claims and theoretical predictions

I'm at the Annual Meeting of the Academy of Aphasia (50th Anniversary!) in San Francisco. I like the Academy meeting because it is smaller than the other meetings that I attend and it brings together an interesting interdisciplinary group of people who are very passionate about the neural basis of language and acquired language disorders. One of the big topics of discussion on the first day of the meeting was embodied cognition, particularly its claim that semantic knowledge is grounded in sensory and motor representations as opposed to amodal representations. Lawrence Barsalou (e.g., Barsalou, 2008) and Friedemann Pulvermüller (e.g., Carota, Moseley, & Pulvermüller, 2012) are among the most active advocates of this view, and many, many others have provided interesting and compelling data to support it. Nevertheless, the view remains controversial. Alfonso Caramazza and Bradford Mahon, in particular, have been vocal critics of the embodied view (e.g., Mahon & Caramazza, 2008).

Embodied cognition is an important concept and many researchers are very actively studying it, from both supportive and critical perspectives, so it would be completely hopeless for me to try to summarize all of the evidence in a simple blog post. Instead I want to focus on one very specific issue that I have seen raised on several occasions (including here at the Academy meeting). Many experiments that are taken to support embodied cognition use materials for which the semantics have very clear sensory-motor content. For example, in a study of verb comprehension, the materials might be words such as "kick", "scratch", and "lick" that strongly involve different motor effectors (foot, hand, and mouth) and the prediction is that there should be clearly different patterns of activation in primarily motor control areas of the brain corresponding to those effectors. Setting aside specific controversies regarding those studies, critics of embodied cognition sometimes say something along the lines of "But what about verbs that don't have obvious motor components, such as 'melt' and 'remember'? Those couldn't be embodied in the motor strip!"

I think this question conflates the general theoretical claim of embodied cognition -- that semantic knowledge is grounded in sensory and motor representations -- with the specific contexts where that general claim makes testable predictions. Because the motor strip is well-characterized and quite consistent across individuals, it is fairly straightforward to predict that verbs which have clear and very different motoric meanings should have very different neural correlates in the motor strip. This does not mean that other verbs are not embodied, only that those other verbs don't make easily testable predictions. If the neural representation of temperature were well-characterized, we might be able to make clear predictions about verbs like "melt" and "freeze" and "boil". The same goes for abstract nouns, which are often considered to be a challenge for embodied cognition theories because they don't have simple sensory-motor bases. My take is that the meaning representations of abstract nouns are just as embodied as those of concrete nouns, but they are more variable and diffuse, so they are harder to study. So, for example, the representation of "freedom" might involve visual representations of the Statue of Liberty for some people and open fields for other people, etc., so it is harder to measure this visual grounding because it is different for different people. In contrast, the semantic representation of a concrete concept like "telephone" is going to be much more consistent across people because we all have more or less the same sensory and motor experiences with telephones.

The bottom line is that it is important to distinguish between the broad theoretical claim of embodied cognition, which is meant to apply to all semantic representations, and the subset of cases where this claim makes clear, testable predictions. Extending embodied cognition to the more difficult cases is certainly an important line of work (Barsalou, for example, is actively working on the representation of emotion and emotion words), but the fact that this extension is not yet complete is not, in itself, evidence that the theory is fundamentally flawed.

Barsalou, L. (2008). Grounded cognition. Annual Review of Psychology, 59(1), 617-645. DOI: 10.1146/annurev.psych.59.103006.093639.
Carota, F., Moseley, R., & Pulvermüller, F. (2012). Body-part-specific representations of semantic noun categories. Journal of Cognitive Neuroscience, 24(6), 1492-1509. PMID: 22390464.
Mahon, B., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102(1-3), 59-70. DOI: 10.1016/j.jphysparis.2008.03.004.

Thursday, August 16, 2012

Brain > Mind?

My degrees are in psychology, but I consider myself a (cognitive) neuroscientist. That's because I am interested in how the mind works and I think studying the brain can give us important and useful insights into mental functioning. But it is important not to take this too far. In particular, I think it is unproductive to take the extreme reductionist position that "the mind is merely the brain". I've spelled out my position (which I think is shared by many cognitive neuroscientists) in a recent discussion on the Cognitive Science Q&A site cogsci.stackexchange.com. The short version is that I think it is trivially true that the mind is just the brain, but the brain is just molecules, which are just atoms, which are just particles, etc., etc. and if you're interested in understanding human behavior, particle physics is of little use. In other words, when I talk about the mind, I'm talking about a set of physical/biological processes that are best described at the level of organism behavior.

The issue of separability of the mind and brain is also important when considering personal responsibility, as John Monterosso and Barry Schwartz pointed out in a recent piece in the New York Times and in their study (Monterosso, Royzman, & Schwartz, 2005). (Full disclosure: Barry's wife, Myrna Schwartz, is a close colleague at MRRI). Their key finding was that perpetrators of crimes were judged to be less culpable given a physiological explanation (such as a neurotransmitter imbalance) than given an experiential explanation (such as having been abused as a child), even though the link between the explanation and the behavior was matched. That is, when participants were told that (for example) 20% of people with this neurotransmitter imbalance commit such crimes or 20% of people who had been abused as children commit such crimes, the perpetrators with the neurotransmitter imbalance were judged to be less culpable.

Human behavior is complex and explanations can be framed at different levels of analysis. Neuroscience can provide important insights and constraints for these explanations, but precisely because psychological processes are based in neural processes, neural processes cannot be any more "automatic" than psychological processes, nor can neural evidence be any more "real" than behavioral evidence.

Monterosso, J., Royzman, E.B., & Schwartz, B. (2005). Explaining away responsibility: Effects of scientific explanation on perceived culpability. Ethics & Behavior, 15(2), 139-158. DOI: 10.1207/s15327019eb1502_4.

Friday, August 3, 2012

A lexicon without semantics?

I spend a lot of time thinking about words. The reason I am so focused on words is that they sit right at that fascinating boundary between “perception” and “cognition”. Recognizing a spoken word is essentially a (rather difficult) pattern recognition problem: there is a complex and variable perceptual signal that needs to be mapped to a particular word object. But what is that word object? Is it just an entry in some mental list of known words? Are perceptual properties preserved or is it completely abstracted from the surface form? Does the word object include the meaning of the word, like a dictionary entry? The entire contents of all possible meanings or just some context-specific subset?

At least going back to Morton’s (1961) “logogen” model, and including the work of Patterson & Shewell (1987) and Coltheart and colleagues (e.g., Coltheart et al., 2001), researchers have argued that the lexicon (or lexicons) must represent words in a way that is abstracted from the surface form and independent of meaning. In part, this argument was based on evidence that some individuals with substantial semantic impairments could nevertheless distinguish real words from fake words with reasonable accuracy (the “lexical decision” task).

An alternative approach, based on parallel distributed processing and emphasizing emergent representations (e.g., McClelland, 2010), argues that the “lexicon” is really just the intermediate representation between perceptual and semantic levels, so it will necessarily have some properties of both. Michael Ramscar conducted a very elegant set of experiments showing how semantic information infiltrates past-tense formation (Ramscar, 2002): given a novel verb like “sprink”, if participants were led to believe that it meant something like “drink”, they tended to say that the past tense should be “sprank”, but if they were led to believe that it meant something like “wink” or “blink”, then they tended to say that the past tense should be “sprinked”. In other words, past-tense formation is influenced both by meaning and surface similarity. Tim Rogers and colleagues (Rogers et al., 2004) showed that the apparent ability of semantically-impaired individuals to perform lexical decision was really based on visual familiarity: these individuals consistently chose the spelling that was more typical of English, regardless of whether it was correct or not for this particular word (for example, “grist” over “gryst”, but also “trist” over “tryst”; “cheese” over “cheize”, but also “seese” over “seize”).
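To illustrate how far pure orthographic typicality can go, here is a toy sketch; scoring spellings by letter-bigram frequency is my illustrative stand-in, not Rogers et al.'s actual familiarity measure.

```python
from collections import Counter

# Tiny hypothetical vocabulary standing in for orthographic experience
vocabulary = ["grist", "mist", "list", "fist", "wrist",
              "cheese", "freeze", "sneeze", "tryst", "seize"]

# Count how often each letter bigram occurs across the vocabulary
bigram_counts = Counter(w[i:i+2] for w in vocabulary for i in range(len(w) - 1))

def typicality(word):
    return sum(bigram_counts[word[i:i+2]] for i in range(len(word) - 1))

for pair in [("grist", "gryst"), ("trist", "tryst")]:
    print(pair, "->", max(pair, key=typicality))
# Picks "grist" (correct) but also "trist" (incorrect): typicality,
# not word knowledge, drives the choice.
```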

Data like these have been enough to convince me that the PDP view is right, but there are a few counter-examples that I am not sure how to explain. Among them is a recent short case report of a patient with a severe semantic deficit (semantic dementia), but remarkably good ability to solve anagrams (Teichmann et al., 2012). She was able to solve 18 out of 20 anagrams (“H-E-T-A-N-L-E-P” --> “ELEPHANT”) without knowing what any of the 20 words meant. Neurologically intact age-matched controls solved essentially the same number of anagrams (17.4 ± 1.5) in the same amount of time. Related cases of “hyperlexia” (good word reading with impaired comprehension) have also been reported (e.g., Castles et al., 2010). I can imagine how a PDP account of these data might look, but to my knowledge, it has not been developed.
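Incidentally, anagram solving itself requires nothing beyond a word list and letter bookkeeping; a minimal sketch with a hypothetical word list:

```python
from collections import defaultdict

# Index a (hypothetical) word list by sorted letters; anagrams share a key
word_list = ["elephant", "giraffe", "listen", "silent"]
by_letters = defaultdict(list)
for w in word_list:
    by_letters["".join(sorted(w))].append(w)

def solve(anagram):
    return by_letters.get("".join(sorted(anagram.lower())), [])

print(solve("HETANLEP"))  # ['elephant'] -- no semantics required
```

Of course, the interesting question is how a PDP-style distributed system, rather than a symbolic lookup table, could support this kind of performance without semantic mediation.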

References
Castles, A., Crichton, A., & Prior, M. (2010). Developmental dissociations between lexical reading and comprehension: Evidence from two cases of hyperlexia. Cortex, 46(10), 1238-1247. DOI: 10.1016/j.cortex.2010.06.016.
Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108(1), 204-256.
McClelland, J. L. (2010). Emergence in Cognitive Science. Topics in Cognitive Science, 2(4), 751-770. doi:10.1111/j.1756-8765.2010.01116.x. 
Morton, J. (1961). Reading, context and the perception of words. Unpublished PhD thesis, University of Reading, Reading, England.
Patterson, K., & Shewell, C. (1987). Speak and spell: Dissociations and word-class effects. In M. Coltheart, G. Sartori, & R. Job (Eds.), The cognitive neuropsychology of language (pp. 273-294). London: Erlbaum.
Ramscar, M. (2002). The role of meaning in inflection: why the past tense does not require a rule. Cognitive Psychology, 45(1), 45-94.
Rogers, T. T., Lambon Ralph, M. A., Hodges, J. R., & Patterson, K. E. (2004). Natural selection: The impact of semantic impairment on lexical and object decision. Cognitive Neuropsychology, 21(2-4), 331-352.
Teichmann et al. (2012). A mental lexicon without semantics. Neurology. DOI: 10.1212/WNL.0b013e3182635749

Thursday, August 2, 2012

Statistical models vs. cognitive models


My undergraduate and graduate training in psychology and cognitive neuroscience focused on computational modeling and behavioral experimentation: implementing concrete models to test cognitive theories by simulation and evaluating predictions from those models with behavioral experiments. During this time, the good ol’ t-test was enough statistics for me. I continued this sort of work during my post-doctoral fellowship, but as I became more interested in studying the time course of cognitive processing, I had to learn about statistical modeling, specifically, growth curve analysis (multilevel regression) for time series data. These two kinds of modeling – computational/cognitive and statistical – are often conflated, but I believe they are very different and serve complementary purposes in cognitive science and cognitive neuroscience.

It will help to have some examples of what I mean when I say that statistical and cognitive models are sometimes conflated. I have found that computational modeling talks sometimes provoke a certain kind of skeptic to ask “With a sufficient number of free parameters it is possible to fit any data set, so how many parameters does your model have?” The first part of that question is true in a strictly mathematical sense: for example, a Taylor series polynomial can be used to approximate any function with arbitrary precision. But this is not how cognitive modeling works. Cognitive models are meant to implement theoretical principles, not arbitrary mathematical functions, and although they always have some flexible parameters, these parameters are not “free” in the way that the coefficients of a Taylor series are free.
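A quick numerical illustration of the skeptic's premise (a generic sketch using an interpolating polynomial rather than a Taylor series, but the point is the same):

```python
import numpy as np

# A degree-9 polynomial (10 free coefficients) passes essentially exactly
# through 10 arbitrary data points -- even pure noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = rng.normal(size=10)  # pure noise

coefs = np.polyfit(x, y, deg=9)
residuals = y - np.polyval(coefs, x)
print(np.max(np.abs(residuals)))  # ~0: a perfect, but meaningless, fit
```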

On the other hand, when analyzing behavioral data, it can be tempting to use a statistical model with parameters that map in some simple way onto theoretical constructs. For example, assuming Weber’s Law holds (the just-noticeable difference in stimulus intensity is a constant fraction of the baseline intensity), one can collect discrimination data in some domain of interest, fit that proportional relationship, and compute the Weber fraction for that domain. However, if you happen to be studying a domain where Weber’s law does not quite hold, your Weber fraction will not be very informative.
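In equation form (a standard textbook statement, not tied to any particular study discussed here):

```latex
% Weber's law: the just-noticeable difference \Delta I is a constant
% fraction k (the Weber fraction) of the baseline intensity I.
\[
\frac{\Delta I}{I} = k
\]
% Estimating k for a domain by least squares over measured pairs (I_i, \Delta I_i):
\[
\hat{k} = \arg\min_{k} \sum_{i} \left( \Delta I_i - k I_i \right)^2
        = \frac{\sum_i I_i \, \Delta I_i}{\sum_i I_i^2}
\]
```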

In other words, statistical and computational models have different, complementary goals. The point of statistical models is to describe or quantify the observed data. This is immensely useful because extracting key effects or patterns allows us to talk about large data sets in terms of a small number of “effects” or differences between conditions. Such descriptions are best when they focus on the data themselves and are independent of any particular theory – this allows researchers to evaluate any and all theories against the data. Statistical models do need to worry about the number of free parameters, which is captured by standard model comparison statistics such as AIC and BIC that penalize the maximized log-likelihood for model complexity.
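For reference, the standard definitions, with k free parameters, n observations, and the maximized likelihood; both criteria reward fit but penalize complexity:

```latex
\[
\mathrm{AIC} = 2k - 2\ln\hat{L}
\qquad
\mathrm{BIC} = k\ln n - 2\ln\hat{L}
\]
```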

In contrast, cognitive models are meant to test a specific theory, so fidelity to the theory is more important than counting the number of parameters. Ideally, the cognitive model’s output can be compared directly to the observed behavioral data, using more or less the same model comparison techniques (R-squared, log-likelihood, etc.). However, because cognitive models are usually simplified, that kind of quantitative fit is not always possible (or even advisable) and a qualitative comparison of model and behavioral data must suffice. This qualitative comparison critically depends on an accurate – and theory-neutral – description of the behavioral data, which is provided by the statistical model. (A nice summary of different methods of evaluating computational models against behavioral data is provided by Pitt et al., 2006).

Jim Magnuson, James Dixon, and I advocated this kind of two-pronged approach – using statistical models to describe the data and computational models to evaluate theories – when we adapted growth curve analysis to eye-tracking data (Mirman et al., 2008). Then, working with Eiling Yee and Sheila Blumstein, we used this approach to study phonological competition in spoken word recognition in aphasia (Mirman et al., 2011). To my mind, this is the optimal way to simultaneously maximize accurate description of the behavioral data and the theoretical impact of the research.
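As a closing illustration of the statistical half of this approach, here is a minimal growth curve analysis sketch; the data file, column names, and the statsmodels-based implementation are placeholders for illustration (see Mirman et al., 2008, for the actual approach).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical eye-tracking data: one row per subject x condition x time bin,
# with `fixation` = proportion of fixations to the target
data = pd.read_csv("fixations.csv")  # columns: subject, condition, time, fixation

# Orthogonal (Legendre) linear and quadratic time terms on a [-1, 1] scale
t = np.unique(data["time"])
scaled = (t - t.min()) / (t.max() - t.min()) * 2 - 1
poly = np.polynomial.legendre.legvander(scaled, 2)
time_terms = pd.DataFrame(poly[:, 1:], columns=["ot1", "ot2"]).assign(time=t)
data = data.merge(time_terms, on="time")

# Multilevel regression: condition effects on curve shape, with
# by-subject random effects
model = smf.mixedlm("fixation ~ (ot1 + ot2) * condition",
                    data, groups=data["subject"])
print(model.fit().summary())
```

The fitted condition-by-time-term effects then provide the theory-neutral description of the data against which a computational model's output can be compared.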