Friday, 18 May 2012

Mind: Some problems with the Classical Computer Metaphor - Part one

[part two is here]
Over the last year of my intercalated psychology degree I have been introduced (admittedly superficially) to a fascinating array of topics and studies, from V. S. Ramachandran's work on phantom limbs, to Julian Jaynes' rather bizarre 'bicameral mind', to British MP Christopher Mayhew tripping on mescaline.

But whether it was psycholinguistics, embodied cognition, perceptual control theory, consciousness, conceptual and historical issues or cognitive neuroscience, a common theme seems to have arisen throughout: namely, the problem of viewing the mind as analogous to a classical computer.


In retrospect, this commonality is in part due to my choice of modules, and I doubt I would have come across such issues studying 'clinical communication' or 'illness cognition', as might be expected of a medic. As such, to argue that this problem universally permeates all aspects of psychology would be something of an overstatement. Yet what all my modules have in common, and what I would argue is essentially the main purpose of psychology, is that they are grounded in the ongoing struggle to conceptualise and understand the mind.

With exams fast approaching, I am in need of some kind of procrastinatory activity that will still vaguely count as revision, so I thought I would try to clarify my (limited) understanding of the computer metaphor and discuss a few of the problems I have encountered from my studies. 

What is the classical computational approach to mind?
The classical computational view is, broadly, that the mind (i.e. our cognitive processing) works like a classical computer. That is, cognition involves the amodal, rule-based manipulation of abstract symbols. On this view, the sensory and motor systems are seen as irrelevant, in the sense that they simply provide the perceptual and motor inputs and outputs to other parts of the brain, which process them as abstract symbols. Neither the perceptual and motor modalities, nor the rest of the body or the environment for that matter, are thought to do any of the hard cognitive work. They are, in effect, not part of the 'mind-system'.
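To make this concrete, here is a minimal sketch (a toy of my own devising, not any published model) of what 'amodal, rule-based manipulation of abstract symbols' amounts to: opaque tokens rewritten by formal rules, with perception and action reduced to mere input and output.

```python
# A toy 'classical' cognitive system: cognition as rule-based lookup
# over abstract symbol tokens. The tokens are opaque - the rules fire
# purely on their shape, and the senses merely deliver tokens.

RULES = {
    ("HUNGRY", "SEE_FOOD"): "EAT",
    ("HUNGRY", "NO_FOOD"): "SEARCH",
    ("FULL", "SEE_FOOD"): "IGNORE",
}

def cognise(percepts):
    """Map a pair of input symbols to an output symbol.
    All the 'cognition' is this amodal table lookup; the perceptual
    and motor systems just supply and consume the tokens."""
    return RULES.get(tuple(percepts), "DO_NOTHING")

print(cognise(["HUNGRY", "SEE_FOOD"]))  # -> EAT
```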

This approach has, until recently, been the standard model informing most cognitive-neuroscientific theories. Such theories arose in part as a reaction to the Behaviourist movement, which dismissed mental life and insisted that behaviour was the only empirical, scientific evidence psychology could draw upon. Cognitive neuroscience was also heavily influenced by advances in computing, from the power of a Universal Turing Machine to the apparent success in creating seemingly 'intelligent' computers.

So what's the problem?
For most, the computer metaphor seemed a sensible way to view the workings of the mind, and in fact it was understood not just figuratively but literally - that is, the mind was seen as software running on the hardware of the brain, in much the same way that software runs on the hardware of a digital computer. It gradually became apparent, however, that this exciting new paradigm faced many of the old problems that troubled Descartes and the Behaviourists - and some novel ones on top. For me, there seem to be three broad, intertwined problems (and no doubt many more) with this approach:

1) Conceptual and philosophical issues with amodal abstract symbol manipulation
2) Strengths of competing paradigms, such as embodied cognition
3) Results from comparisons between the practical application of amodal vs embodied cognition in artificial intelligence and robotics 

Philosophy
Cartesian Materialism 
One problem with the traditional cognitive paradigm is that it seems to have failed to overcome the dualism it is trying to refute. Whilst rejecting Descartes' supernatural dualism of mind, the cognitive movement retained the Cartesian theatre - a place, somewhere in the brain, where the sense of self resides. The computer metaphor placed the sense of self firmly inside the head of the individual. As Eric Charles points out over at Fixing Psychology, the use of terms such as 'representation' and 'retrieval' still maintains the dualist paradigm - to whom are the symbols being represented? Where are the memories being retrieved to?

This is more than mere philosophical musing. It shows that we still incorrectly conceptualise the mind as being 'housed' somewhere in the brain, rather than as a dynamical system incorporating the brain, body and even the environment. As long as this dualistic paradigm remains, it will continue to limit and restrain the types of research we pursue and the conclusions we can draw.

Grounding Problem 
The symbol grounding problem asks how abstract symbols can possess the meaning of their referents. In other words, how are the symbols in our heads mapped back onto the real world? The issue arises when you appreciate that the symbols in a computer are manipulated purely on the basis of their shape, not on any kind of intrinsic meaning - just as the words (symbols) on this page have no meaning in themselves, but are only made sense of 'in our heads'. Thus, any computational output in response to input relies on the mindless, non-mental application of formal rules or algorithms to generate a new set of symbol tokens or symbol token strings.
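A quick way to see how 'mindless' this is (again a toy demonstration of my own, not from the literature): systematically rename every symbol and the computation is entirely unchanged, because the rules only ever test token identity, never meaning.

```python
# Consistently renaming every token leaves the formal manipulation
# intact: the rules care only about symbol shape, so whatever the
# symbols 'mean' plays no role in the processing.

rules = {("HUNGRY", "SEE_FOOD"): "EAT"}
rename = {"HUNGRY": "X1", "SEE_FOOD": "X2", "EAT": "X3"}

renamed_rules = {
    tuple(rename[s] for s in lhs): rename[rhs]
    for lhs, rhs in rules.items()
}

# The renamed system derives the renamed output from the renamed
# input, step for step - the 'cognition' is purely formal.
assert renamed_rules[("X1", "X2")] == "X3"
```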

On the classical approach, the meaning of one symbol can only be derived from its relations to other abstract symbols, which themselves have no meaning and so must be understood in terms of yet other abstract, meaningless symbols, ad infinitum. Thus we are faced with the problem of infinite regress.

Harnad provides an excellent illustration of the grounding problem in his adaptation of Searle's famous thought experiment. Imagine we are given the task of understanding written directions in a foreign language, say Chinese, and all we have is a Chinese-Chinese dictionary. Every (abstract) Chinese symbol can only ever be explained by reference to other abstract Chinese symbols or words, and so on. Since the symbols are never grounded in their referents, they never acquire meaning.
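Here is a sketch of that dictionary predicament (with made-up romanised placeholders rather than real Chinese, purely for illustration): every lookup yields only more ungrounded symbols, so chasing definitions loops forever without ever bottoming out in meaning.

```python
# A 'Chinese-Chinese dictionary': every symbol is defined solely in
# terms of other symbols within the same system.

dictionary = {
    "zhi": ["dao", "li"],
    "dao": ["lu", "zhi"],
    "lu":  ["dao"],
    "li":  ["zhi"],
}

def ground(symbol, seen=None):
    """Chase definitions in search of a grounded meaning. Every path
    eventually revisits a symbol: we detect the cycle rather than
    recurse forever, but no lookup ever reaches a referent."""
    seen = set() if seen is None else seen
    if symbol in seen:
        return symbol + ": circular - never grounded"
    seen.add(symbol)
    return [ground(s, seen) for s in dictionary[symbol]]

print(ground("zhi"))  # every branch ends in 'circular - never grounded'
```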

Transduction Problem 
This is the problem of how perceptual experiences are translated into the arbitrary symbols used to represent concepts. Traditionally it has been 'solved' by divine intervention on the part of the programmer, who simply assigns certain attributes to atomic concepts. This is also linked to the problem of parsimony, in that we may well ask: what is the purpose of amodal symbols? If we have already generated perceptual representations, or if we can use the world as its own best model, why do we need to generate amodal symbols at all?
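To illustrate what that 'divine intervention' looks like (a hypothetical toy of my own): the link between a perceptual measurement and an amodal symbol is simply stipulated by the programmer in advance, which is exactly the step a theory of transduction would need to explain rather than assume.

```python
# 'Transduction' by programmer fiat: the mapping from a perceptual
# quantity to an atomic amodal symbol is hand-assigned up front.

def transduce(wavelength_nm):
    """Assign a colour symbol to a wavelength. The thresholds and the
    tokens are stipulated - nothing here explains why ~650nm should
    yield 'RED' rather than any other arbitrary symbol."""
    if 620 <= wavelength_nm <= 750:
        return "RED"
    if 450 <= wavelength_nm < 495:
        return "BLUE"
    return "UNKNOWN"

print(transduce(650))  # -> RED
```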

Alternative approaches, for example Barsalou's Perceptual Symbol Systems theory, argue that meaning stems from the direct coupling of perceptual symbols to the environment and the affordances it offers, thereby solving the grounding problem and eliminating the need for transduction. For a non-representational approach, which faces up more seriously to the problem of the Cartesian theatre and representation, see the blog at Notes From Two Scientific Psychologists.

Unfalsifiability of a Universal Turing Machine
Amodal theories have been criticised on the grounds that they are unfalsifiable. In other words, there is no evidence, such as that from embodied cognition (see part two), that cannot be accounted for amodally. This is because, as Barsalou points out, amodal theories have Turing machine power and can therefore express any describable pattern, meaning no result can disconfirm them. Importantly, they do not predict the kinds of results found in the current literature a priori, but are simply adapted or extended a posteriori to account for them, leading to the problem of parsimony. Amodal theories explain everything and therefore nothing.

In part two I shall discuss the empirical evidence against the amodal approach and compare how successfully the two competing paradigms have been applied in robotics and Artificial Intelligence.

Oh, and here's British MP Christopher Mayhew on mescaline...

4 comments:

  1. "Beyond the Brain" had a section about how Turing's work has been misinterpreted. Apparently he thought he was building a model of what a HUMAN computer (e.g., a person paid to do computations) was capable of doing with a pencil and paper. The Universal Turing Machine was not supposed to be the computer, it was just an unlimited amount of pencil and paper for a person-doing-computations. The section is well written, and I found it oddly insightful, both about what Turing was up to and about how cognitive psych had gone so wrong.

  2. No way! SEP let me down!

    I keep meaning to read it. Is it more accessible than Chemero? I seem to remember you reviewed 'beyond the brain' - do you have a link please?

    Chris

  3. Much more accessible than Chemero :- )

    I made a few posts, and still owe two more. Wow, actually, I did a lot of posts!

    http://fixingpsychology.blogspot.com/2011/09/beyond-brain-intro.html

    http://fixingpsychology.blogspot.com/2011/09/beyond-brain-embodied-minds.html

    http://fixingpsychology.blogspot.com/2011/10/beyond-brain-embodied-minds-and.html

    http://fixingpsychology.blogspot.com/2011/10/beyond-brain-ecological-psychology.html

    http://fixingpsychology.blogspot.com/2011/12/beyond-brain-anti-anthropomorphism.html

    http://fixingpsychology.blogspot.com/2012/01/beyond-brain-review-out.html

  4. thanks
    very interesting article
