The "mind as computer" has been a dominant and powerful metaphor in cognitive science at least since the middle of the 20th century. Throughout this time, many of us have chafed against this metaphor because it has a tendency to be taken too literally. Framing mental and neural processes in terms of computation or information processing can be extremely useful, but this approach can turn into the extremely misleading notion that our minds work kind of like our desktop or laptop computers. There are two particular notions that have continued to hold sway despite mountains of evidence against them and I think their perseverance might be, at least in part, due to the computer analogy.
The first is modularity or autonomy: the idea that the mind/brain is made up of (semi-)independent components. Decades of research on interactive processing (including my own) and emergence have shown that this is not the case (e.g., McClelland, Mirman, & Holt, 2006; McClelland, 2010; Dixon, Holden, Mirman, & Stephen, 2012), but components remain a key part of the default description of cognitive systems, perhaps with some caveat that these components interact.
The second is the idea that the mind engages in symbolic or rule-based computation, much like the if-then procedures that form the core of computer programs. This idea is widely associated with the popular science writing of Steven Pinker and is a central feature of classic models of cognition, such as ACT-R. In a new paper just published in the journal Cognition, Gary Lupyan reports 13 experiments showing just how bad human minds are at executing simple rule-based algorithms (full disclosure: Gary and I are friends and have collaborated on a few projects). In particular, he tested parity judgments (is a number odd or even?), triangle judgments (is a figure a triangle?), and grandmother judgments (is a person a grandmother?). Each of these is a simple, rule-based judgment, and the participants knew the rule (last digit is even; polygon with three sides; has at least one grandchild), but they were nevertheless biased by typicality: numbers with more even digits were judged to be more even, equilateral triangles were judged to be more triangular, and older women with more grandchildren were judged to be more grandmotherly. A variety of control conditions and experiments ruled out alternative explanations of these results. The bottom line is that, as he puts it, "human algorithms, unlike conventional computer algorithms, only approximate rule-based classification and never fully abstract from the specifics of the input."
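To make the contrast concrete, here is a minimal Python sketch of the difference (my illustration, not code from the paper; the evenness_typicality function is just an assumed stand-in for graded human judgment): a conventional parity algorithm consults only the last digit, whereas a typicality-like judgment is swayed by all of the digits, which is roughly the pattern Lupyan's participants showed.

```python
# A minimal sketch (mine, not Lupyan's) contrasting the two kinds of "algorithm".
# The typicality score below is purely illustrative.

def is_even_rule(n: int) -> bool:
    """Conventional algorithm: abstracts away everything except the last digit."""
    return n % 2 == 0

def evenness_typicality(n: int) -> float:
    """Graded score in [0, 1]: higher when more of the digits are even,
    mimicking the bias that numbers with more even digits feel 'more even'."""
    digits = [int(d) for d in str(abs(n))]
    return sum(d % 2 == 0 for d in digits) / len(digits)

for n in (864, 798, 845, 753):
    print(n, is_even_rule(n), round(evenness_typicality(n), 2))
# 864 -> True  1.0   (even, and "feels" very even)
# 798 -> True  0.33  (even by the rule, but mostly odd digits)
# 845 -> False 0.67  (odd by the rule, but mostly even digits)
# 753 -> False 0.0
```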
It's probably too much to hope that this paper will end the misuse of the computer metaphor, but I think it will be a nice reminder of its limitations.
Dixon, J. A., Holden, J. G., Mirman, D., & Stephen, D. G. (2012). Multifractal dynamics in the emergence of cognitive structure. Topics in Cognitive Science, 4(1), 51-62. PMID: 22253177
Lupyan, G. (2013). The difficulties of executing simple algorithms: Why brains make mistakes computers don’t. Cognition, 129(3), 615-636. DOI: 10.1016/j.cognition.2013.08.015
McClelland, J. L. (2010). Emergence in Cognitive Science. Topics in Cognitive Science, 2(4), 751-770. DOI: 10.1111/j.1756-8765.2010.01116.x
McClelland, J. L., Mirman, D., & Holt, L. L. (2006). Are there interactive processes in speech perception? Trends in Cognitive Sciences, 10(8), 363-369. PMID: 16843037