Dennett, D.C. (1996)
Kinds of Minds: Toward an Understanding of Consciousness
Basic Books / Harper Collins, New York (review)
I enjoy Dennett's writing style. He is always clear, often humorous, but never wanders from his philosophical mission. In this instance the mission was to encapsulate a description of (his take on) the operation and capabilities of different minds in a short text. For Dennett there are all "kinds of minds", each with its own level of sophistication, culminating in the self-reflecting, language-wielding human mind.
Dennett leaves aside arguments concerning "other minds", taking as his starting point the idea that although we can only know our own mind from the inside, we can make a pretty safe bet that others have minds that function just as ours does. We need not fear that we are alone in being "mind havers".
What kinds of mind are there? Dennett starts by explaining "the intentional stance". This is adopted by us whenever we treat another organism, or even an artefact, as if it had a mind... "My computer wants to beat me at chess so it will try to take my queen." If we can predict the behaviour of an artefact by adopting this stance, the artefact is "intentional". That is, it is helpful for us to attribute beliefs and desires to the artefact in order to understand its behaviour. This only works properly if we understand what the appropriate beliefs and desires are, and how these map onto the behaviour of the system in question.
To understand this mapping, it doesn't really matter if we know exactly how the agent "thinks", as long as we have a rough idea of the kinds of behaviours that would result in the agent meeting what we believe to be its goals. These goals may be either designed by humans (as in the case of chess-playing computers) or by evolution (as in the case of prey fleeing predators). The intentional stance is therefore pretty useful, but it also leads us to make some common mistakes...
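The intentional stance can be caricatured in a few lines of code (my own sketch, not from the book): we predict an agent's next move purely from the beliefs and desires we attribute to it, knowing nothing about its internals.

```python
# Toy model of the intentional stance: predict behaviour from attributed
# beliefs (action -> expected outcome) and desires (outcome -> value),
# with no access to the agent's inner workings.

def predict_action(beliefs, desires):
    """Return the action whose believed outcome best serves the agent's desires."""
    best_action, best_score = None, float("-inf")
    for action, outcome in beliefs.items():
        score = desires.get(outcome, 0)  # how much the agent values this outcome
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Attribute beliefs and desires to a chess computer:
beliefs = {"take queen": "material advantage", "shuffle rook": "no change"}
desires = {"material advantage": 10, "no change": 0}
print(predict_action(beliefs, desires))  # -> "take queen"
```

The point of the sketch is that the prediction works whether the "agent" is silicon, a predator, or a person, so long as our attributed beliefs and desires roughly match its design.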
Since we are language users, we are able to think very precise thoughts. It is not necessarily the case that animals or magnetic compasses need to think such precise thoughts in order to behave as they do. These precise thoughts are simply tools that we use. Other organisms and artefacts need not have minds that work as ours does in order to function, in many ways, as if they had minds like ours. (What is fascinating about this whole feature of minds is that our sophisticated minds are made of billions of microscopic intentional agents, each operating in its own basic and quite un-human-mind-like way.)
A mind can be encapsulated in a body... simple organisms seek out food, follow chemical gradients, react to damaging environmental conditions and execute a host of other behaviours. We still possess such a "mind" in our own bodies... it allows our heart to beat, our lungs to breathe and our eyes to blink without conscious thought. It also allows us to vomit when we eat something foul. Such minds have been shaped in our bodies by billions of years of evolution.
The ability to make predictions about the future is useful and obviously ripe for exploitation by evolution. Sensors that look outside the body (after taste: touch, smell, sound, sight) evolved as they conferred considerable advantages on their owners. Each is an improvement on the last when considering an organism's ability to predict the future. A kind of mind is built into these structures too; it allows a creature to react to its surroundings, rather than just to its own internal state. (I prefer to think of this as "bringing the environment into the organism".)
A feasible next step: the development of organisms that can change their structure over a lifetime. Rather than depending solely on evolution to build machines that are hard-wired from birth, learning of some kind takes place. That is, some organisms evolved the ability to reinforce positive behaviours, and to condition themselves against unhelpful or dangerous behaviours, during their lifetimes. Organisms became hard-wired by evolution to be "soft-wired" in beneficial ways.
The next step is for an organism to evolve so as to be able to "try out" various behaviours using an internal representation system, before attempting to execute one of them. This allows an organism to avoid making costly mistakes that may disadvantage, damage or completely destroy it. Such deliberation can operate through a set of labels that an organism develops. These labels really allow an agent to "think" in the way that we humans think. We can organise our thoughts, and we can reflect upon them and their significance. Not only do we have a label for "food", we can form a complete internal network of relationships between food and other concepts. We can do this very quickly, but if time allows, we can dwell on abstract problems for a very long time. How clever of us!
In addition to our internal labels, we have developed systems for placing labels in the environment, thereby allowing our minds to manipulate physical artefacts without taxing our memories too greatly... we can "see" concepts and organise them before we decide to take action. We can communicate ideas and do lots of things that organisms with other kinds of minds can't do.
So, what do I think of this book? The material it presents was not earth-shattering to me. However, the text is a lucid and persuasive argument for a continuum of minds. It makes no hard claims about moral questions and their answers, although it touches briefly upon the concept of "pain" and whether other animals can experience it. The only conclusion drawn is that other animals may indeed feel pain... but it is not "pain" in the sense that we humans understand it. No human behavioural guidelines are suggested (which is fine --- this is not a book on ethics or morals).
Overall, a good stepladder to thinking about the development of consciousness.
Alan Dorin, 15 June 06