2010-04-05
Describing zombies
Reply to Derek Allan
DEFINING "ZOMBIE"

DA: "I got as far as the bit about zombies not "feeling" and wondered what that had to do with the definition of zombies (as well as what it meant)."
Doesn't "zombie" mean a creature that is behaviorally* indistinguishable from us, but does not feel?

If so, read on (i.e., the dialogue you began reading). If not, then what do you mean by a "zombie"?

(My own view is only nonstandard in that I take feeling, not "intentionality," to be the mark of the mental (and of "intentionality"!), and I see the only difference between a system that has "intrinsic intentionality" and a system that has merely "derivative intentionality" as the fact that one feels whereas the other merely "functs" -- i.e., behaves functionally indistinguishably from a system that feels, but does not feel.)

I have no particular view on whether there can really be zombies, by the way (I rather think not). But if there cannot be zombies, I want to know (causally, functionally, adaptively) why not. And if there can be zombies, I want to know (causally, functionally, adaptively) why we are not zombies (i.e., what the causal/functional/adaptive advantage is of our not being zombies).
*The distinction between a creature that is behaviorally indistinguishable from any of us and a creature that is physically indistinguishable from any of us (i.e., T5, below) is not particularly informative unless the functional significance of the physical but nonbehavioral differences is explained, in which case we are right back where we started...
Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (Special issue on "Alan Turing and Artificial Intelligence")
Abstract: Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering" hierarchy of (decreasing) empirical underdetermination of the theory by the data. Level t1 is clearly too underdetermined, T2 is vulnerable to a counterexample (Searle's Chinese Room Argument), and T4 and T5 are arbitrarily overdetermined. Hence T3 is the appropriate target level for cognitive science. When it is reached, however, there will still remain more unanswerable questions than when Physics reaches its Grand Unified Theory of Everything (GUTE), because of the mind/body problem and the other-minds problem, both of which are inherent in this empirical domain, even though Turing hardly mentions them.
Stevan Harnad