Monday, October 21, 2013

The mind is not a (digital) computer

The "mind as computer" has been a dominant and powerful metaphor in cognitive science at least since the middle of the 20th century. Throughout this time, many of us have chafed against this metaphor because it tends to be taken too literally. Framing mental and neural processes in terms of computation or information processing can be extremely useful, but it can slide into the misleading notion that our minds work rather like our desktop or laptop computers. Two particular notions have continued to hold sway despite mountains of evidence against them, and I think their persistence may be due, at least in part, to the computer analogy.

The first is modularity or autonomy: the idea that the mind/brain is made up of (semi-)independent components. Decades of research on interactive processing (including my own) and emergence have shown that this is not the case (e.g., McClelland, Mirman, & Holt, 2006; McClelland, 2010; Dixon, Holden, Mirman, & Stephen, 2012), but components remain a key part of the default description of cognitive systems, perhaps with some caveat that these components interact.

The second is the idea that the mind engages in symbolic or rule-based computation, much like the if-then procedures that form the core of computer programs. This idea is widely associated with the popular science writing of Steven Pinker and is a central feature of classic models of cognition, such as ACT-R. In a new paper just published in the journal Cognition, Gary Lupyan reports 13 experiments showing just how bad human minds are at executing simple rule-based algorithms (full disclosure: Gary and I are friends and have collaborated on a few projects). In particular, he tested parity judgments (is a number odd or even?), triangle judgments (is a figure a triangle?), and grandmother judgments (is a person a grandmother?). Each of these is a simple, rule-based judgment, and the participants knew the rule (last digit is even; polygon with three sides; has at least one grandchild), but they were nevertheless biased by typicality: numbers with more even digits were judged to be more even, equilateral triangles were judged to be more triangular, and older women with more grandchildren were judged to be more grandmotherly. A variety of control conditions and experiments ruled out various alternative explanations of these results. The bottom line is that, as he puts it, "human algorithms, unlike conventional computer algorithms, only approximate rule-based classification and never fully abstract from the specifics of the input."
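The contrast can be sketched in a few lines of code (a toy illustration of my own, not Lupyan's actual stimuli or analysis): a strict digital rule treats all even numbers identically, while a typicality-graded judgment treats numbers with more even digits as "more even."

```python
def rule_parity(n):
    """Strict digital rule: a number is even iff it is divisible by 2
    (equivalently, iff its last digit is even)."""
    return n % 2 == 0

def even_typicality(n):
    """Toy graded judgment (purely illustrative): the fraction of even
    digits, mimicking the finding that numbers with more even digits
    are judged 'more even'."""
    digits = [int(d) for d in str(abs(n))]
    return sum(d % 2 == 0 for d in digits) / len(digits)

# The rule gives identical answers for 864 and 798 (both even)...
print(rule_parity(864), rule_parity(798))
# ...but the graded judgment rates 864 as far 'more even' than 798.
print(even_typicality(864), even_typicality(798))
```

A conventional computer only ever computes the first function; Lupyan's participants, who knew the rule perfectly well, nevertheless behaved as if the second were leaking into their judgments.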

It's probably too much to hope that this paper will end the misuse of the computer metaphor, but I think it will be a nice reminder of the limitations of this metaphor.

Dixon, J.A., Holden, J.G., Mirman, D., & Stephen, D.G. (2012). Multifractal dynamics in the emergence of cognitive structure. Topics in Cognitive Science, 4(1), 51-62. PMID: 22253177
Lupyan, G. (2013). The difficulties of executing simple algorithms: Why brains make mistakes computers don't. Cognition, 129(3), 615-636. DOI: 10.1016/j.cognition.2013.08.015
McClelland, J.L. (2010). Emergence in Cognitive Science. Topics in Cognitive Science, 2(4), 751-770. DOI: 10.1111/j.1756-8765.2010.01116.x
McClelland, J.L., Mirman, D., & Holt, L.L. (2006). Are there interactive processes in speech perception? Trends in Cognitive Sciences, 10(8), 363-369. PMID: 16843037

8 comments:

  1. Well, Lupyan's argument would only refute the claim that the brain is a digital computer at the level of cognitively penetrable and learnable knowledge (something in the vicinity of System 2 thinking). But if that is what the computational theory of mind says, then Marr is not a computationalist, which is odd, to say the least.

    Maybe my own way of framing the issue of CTM (see my mitpress.mit.edu/books/explaining-computational-mind) is so liberal that it is almost impossible to refute, but I think one would really need to refute the claims about perception (e.g., early vision) first. Now, ACT-R modeling may be really simplistic in modeling agent-level, attention-available knowledge, but it need not be. If you look at the recent work on the basal ganglia, a lot of it suggests that there is something like production rules there (see, for example, Eliasmith's SPAUN model) -- but it's a huge idealization to say it's just a production system, as it does much more.

  2. I'm a little surprised by this line of argument because symbolic, rule-based processing is more commonly proposed for higher-level cognitive functions like categorization, language, problem solving, etc. Perception and action are actually much harder to do with digital computers: consider how much success there has been in getting computers to play chess or do arithmetic compared to automatic speech recognition or motor control (with a few exceptions, physical human-robot interactions are quite dangerous because standing in an unexpected place could get you a broken bone when the robot’s arm comes around).

    It is also useful to distinguish between computation in general and digital computation specifically. The argument here is that the mind/brain does not do digital or symbolic rule-based computation (and I think this argument is even easier to make for perception and action). As far as I’m concerned, this does not challenge the view that the mind is a computational device; it just does analog, context-sensitive computation. The related point is that one can approximate or simulate analog computation using a digital device. That is certainly a key principle for those of us who use simulations of PDP/connectionist models implemented on digital computers to test theories about analog mental processes.
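    A minimal sketch of that last point (my own toy example, assuming nothing more than a single logistic unit): a connectionist unit produces a graded, analog-style output, yet is perfectly simulable on a digital machine.

```python
import math

def logistic_unit(inputs, weights, bias):
    """One connectionist unit: a weighted sum passed through a logistic
    function, giving a graded output in (0, 1) rather than a discrete
    symbolic state -- analog-style computation simulated digitally."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Small changes in input produce small, continuous changes in output,
# unlike a rule that flips between discrete answers.
print(logistic_unit([0.50, 0.90], [1.5, -0.7], 0.1))
print(logistic_unit([0.55, 0.90], [1.5, -0.7], 0.1))
```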

    I also think that it is not quite enough to say that the mind/brain does “something like” production rules. People are generally pretty good at identifying even numbers, triangles, grandmothers, etc.; but (digital) computers are really, really good at it. In other words, the results of Lupyan (2013) are consistent with “something like” production rules, but not with actual production rules. On my reading, the subtle but consistent deviations from rule-like behavior that Lupyan documented (and others have shown in other domains, like formation of the past tense) are critical evidence against production rules.

  3. Hm, what I really mean is that digital computation is not equivalent to consciously acquired or learned symbolic rules. For all I know, my calculator is not able to learn how to compute addition, but it easily adds numbers. I think it's a popular fallacy to think that learned or consciously available rules are the core of digital computation. Actually, it seems that the brain's core computation makes it possible to run multiple virtual machines on it, including ones that seem to operate on symbolic rules. In reality, however, it turns out that this is a huge idealization (and only Newell would be surprised).

    I certainly agree that it's useful to distinguish between computation in general and digital computation. But digital computation simply does not imply learnability of rules. In short, I could have code implemented as a finite state automaton that is completely unable to learn; I could also have digital code that uses advanced statistics to recognize patterns in a completely digital manner but fails to learn the rules that Lupyan thinks are so obvious in his data. Machine learning using an SVM might not see these patterns, and failures will abound, but that won't tell you that there is no digital machine learning. It simply does not follow. It's trivially easy to have a digital algorithm that fails to recognize a pattern in the way human users deem appropriate. The vast majority of machine learning has this problem.
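    The finite-state-automaton point can be made concrete (a toy example of my own, not from the thread): a fixed DFA computes a perfectly digital, rule-based function, yet contains no learning mechanism whatsoever.

```python
def dfa_even_ones(bits):
    """A fixed deterministic finite automaton that accepts bit strings
    containing an even number of 1s. Entirely digital and rule-based,
    but its transition table is frozen: it cannot learn anything."""
    transitions = {(0, "0"): 0, (0, "1"): 1,
                   (1, "0"): 1, (1, "1"): 0}
    state = 0  # state 0 = even number of 1s seen so far
    for ch in bits:
        state = transitions[(state, ch)]
    return state == 0

print(dfa_even_ones("1100"))  # True: two 1s
print(dfa_even_ones("1101"))  # False: three 1s
```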

    While I also believe that analog processing is important, failures to learn digital rules are not sufficient to prove that analog processing is at the bottom of human information processing.

  4. I agree entirely that failures to learn digital rules are not sufficient to prove that analog processing is all there is. As I point out in the paper, my larger goal is to shift the focus *away* from assuming that it's symbols all the way down and hence that symbolic rules are the default (working with such an assumption would require one to explain - among other things - why even simple digital stuff is hard for people), and *toward* attempting to understand how it is that people manage to learn rules.

    Btw, it is interesting that von Neumann himself, in his posthumously published lecture "The Computer and the Brain," took the position that the brain cannot really be performing the sort of context-independent symbolic operations that digital computers are based on. One of his arguments -- and one that I have not seen elsewhere -- is that neural signalling does not allow for the kind of precision required to do symbolic operations. The reason digital computers (even in his day) needed to be precise to within 12 orders of magnitude isn't because anyone cared about the answer to a problem being right to 12 decimal places, but because in breaking apart a problem into simple logical operations, even simple problems became hundreds or thousands of steps, and even a tiny error would quickly multiply. von Neumann recognized that with neural spike trains being accurate to at most 3 orders of magnitude, computations as abstract logical operations wouldn't work. It's really a remarkably simple and prescient argument.
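    The multiplication-of-error point is easy to illustrate with a quick simulation (my own toy numbers, not figures from the lecture): compound a small random relative error at each of many steps and compare the two per-step precisions.

```python
import random

def mean_final_error(per_step_error, n_steps, trials=200, seed=1):
    """Average relative error after a chain of n_steps multiplications,
    each perturbed by a random relative error up to per_step_error."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        value = 1.0
        for _ in range(n_steps):
            value *= 1.0 + rng.uniform(-per_step_error, per_step_error)
        total += abs(value - 1.0)
    return total / trials

# ~12 significant digits per step (digital hardware): error stays negligible.
print(mean_final_error(1e-12, 10_000))
# ~3 significant digits per step (spike-train-like): error grows to percent scale.
print(mean_final_error(1e-3, 10_000))
```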

    The irony is that many people think he argued for exactly the opposite. What's also kind of amazing is that the cover of at least my edition is illustrated with an apple and an orange ;)
    http://www.amazon.com/The-Computer-Brain-Silliman-Memorial/dp/0300084730

    Replies
    1. Oh, and this is Gary Lupyan.

    2. Another point is that von Neumann dealt with decimal digital computers, which were really unreliable... For contemporary readers, it's really stunning that they tried to produce decimal rather than binary digital computers, so the error argument gets a bit lost in that context (I certainly did not remember it).

      Yes, the really interesting thing is how it's at all possible to learn rules and operate on symbols. They are definitely not self-explanatory entities to be posited.

  6. Many people think their brains are as good as a computer because we tend to trust our memories. A memory can seem so realistic that we become confident it actually happened, even though the actual event could have been completely different or may not have happened at all. This is very common and tends to involve flashbulb memory, which is when someone recalls very specific details and images about a certain event. One problem with flashbulb memory is that over time the memory decays. Even though the memory has changed, its vividness still makes it seem so real that we believe it is true, causing us to forget the original memory. That is why our memory is not nearly as good as a computer's: a computer keeps all original data without having to struggle with decay.
