2010-05-06
Describing zombies
Reply to Derek Allan
Derek -

I think we're saying the same thing, in a funny way (you may not agree).

The science fiction metaphors aren't wholly misleading, and they do warn us not to be guided too much by our visceral reactions ...

The 'functional zombie' version of the experiment reveals this a little - it takes the problem towards the AI debate, which also has an SF quality, and (I think) can be misdirected by the same confusions.

If we try to find some scientific - as in empirically based, computational, grounded in shared agreement about facts, etc. - criterion for distinguishing between zombies (functional or traditional) and conscious people, we're bound to fail.  I think this is the point of your argument, and the point of the 'zombie world' argument.

However, we do - routinely, and, we feel, reliably - attribute consciousness to one another.  So what's going on?

The answer, I think, is that it's a normative category, not a scientific, empirical, or even epistemological one.  This is, intuitively, very hard to swallow - because we so strongly associate our capacity to theorise scientifically about the world with our capacity to have certain phenomenal experiences.  This is obviously a big issue, but one that I think can be addressed.  (Though not in this post.)

Consciousness is something we attribute to interlocutors - to people we can talk to (and share philosophical or scientific or other theories with).  This is why it is linked to intention - to the capacity to have intentional states, which we also attribute (in the central case) to interlocutors.  We attribute them to others (animals, cats, word processors ...) but only on the basis of behaviour (and therefore corrigibly - re Kripke).  To talk with you 'properly', I must attribute intentional states to you (e.g. that you believe what you say) unambiguously.  If I thought you were a zombie, I might go through the same 'motions', but it would be to interact with you in a different way - to treat you as a machine (as I might treat a voice-operated robot).  Your complexity (with respect to your semantic capacities) isn't an issue here - in the past it might have supported a comfortingly reliable pragmatic distinction, but we can make very complex machines now.

When I attribute phenomenal states to you, I am attributing something like unarticulated functional precursors to intentional states and to articulation - I am attributing the same capacity to speak as I feel myself to have.  I am thinking something like 'I can talk to you because you see the same world as I do'.

If I imagined you were a zombie, I would, simultaneously, be imagining that I couldn't really talk to you (although, as I said, I might sound as though I were - in the sense that I could be sending you 'speech signals' designed to elicit programmed responses, some of which would also sound like speech).

So there's something like this going on:

'I can talk with you' must be true in any real conversation (but not in a 'conversation' with a zombie, where only the functions of the signals are at stake, and truth and falsity don't arise).

If I can talk with you, then I can attribute intentional states to you (and you to me).

Some of these are intentional states that I can only attribute to something/one who is 'conscious'.

I know this seems a bit unsatisfactory, but I think it's unsatisfactory in the right way - what it fails to support is an intuition which is, actually, unreliable.  This is the intuition that the 'similarity' of our phenomenal states (as opposed to our intentional states) somehow underlies - as in epistemologically precedes - our ability to talk to one another.  I think it's the other way around: our ability to talk to one another convinces us that we share the phenomenal states as well as the intentional ones - that I see a similar 'red patch' to you, as well as coming to believe that there is a ball on the grass.  This isn't necessary - our phenomenal states need only be such as to support our agreements about intentional states, and need not be any more commensurate.  However much we burrow down, e.g. by trying to develop a language of 'sense data', we never get to the phenomenal level.  We just get to agreements about what to say about it - to other intentional states.

Alex.

(Arthur is my surname - don't worry, it confuses lots of people!)