A remarkable hypothesis has recently been advanced by Libet and promoted by Eccles which claims that there is standardly a backwards referral of conscious experiences in time, and that this constitutes empirical evidence for the failure of identity of brain states and mental states. Libet's neurophysiological data are critically examined and are found insufficient to support the hypothesis. Additionally, it is argued that even if there is a temporal displacement phenomenon to be explained, a neurophysiological explanation is most likely.
Beginning with Thomas Nagel, various philosophers have proposed setting conscious experience apart from all other problems of the mind as ‘the most difficult problem’. When critically examined, the basis for this proposal reveals itself to be unconvincing and counter-productive. Use of our current ignorance as a premise to determine what we can never discover is one common logical flaw. Use of ‘I-cannot-imagine’ arguments is a related flaw. When not much is known about a domain of phenomena, our inability to imagine a mechanism is a rather uninteresting psychological fact about us, not an interesting metaphysical fact about the world. Rather than worrying too much about the meta-problem of whether or not consciousness is uniquely hard, I propose we get on with the task of seeing how far we get when we address the problems of mental phenomena neurobiologically.
Philosophy, in its traditional guise, addresses questions where experimental science has not yet nailed down plausible explanatory theories. Thus, the ancient Greeks pondered the nature of life, the sun, and tides, but also how we learn and make decisions. The history of science can be seen as a gradual process whereby speculative philosophy cedes intellectual space to increasingly well-grounded experimental disciplines—first astronomy, followed by physics, chemistry, geology, biology, archaeology, and more recently, ethology, psychology, and neuroscience. Science now encompasses plausible theories in many domains, including large-scale theories about the cosmos, life, matter, and energy. The mind’s turn has now come. The classical ‘‘mind’’ questions center on free will, the self, consciousness, how thoughts can have meaning and ‘‘aboutness,’’ and how we learn and use knowledge. All these matters interlace with questions about morality: where values come from, the roles of reason and emotion in choice, and the wherefore of responsibility and punishment. The vintage mind/body problem is a legacy of Descartes: if the mind is a completely nonphysical substance, as he thought, how can it interact causally with the physical brain? Since the weight of evidence indicates that mental processes actually are processes of the brain, Descartes’ problem has disappeared. The classical mind/body problem has been replaced with a range of questions: what brain mechanisms explain learning, decision making, self-deception, and so on. The replacement for ‘‘the mind-body problem’’ is not a single problem; it is the vast research program of cognitive neuroscience. The dominant methodology of philosophy of mind and morals in the twentieth…
Using the Gödel incompleteness result for leverage, Roger Penrose has argued that the mechanism for consciousness involves quantum gravitational phenomena, acting through microtubules in neurons. We show that this hypothesis is implausible. First, the Gödel result does not imply that human thought is in fact non-algorithmic. Second, whether or not non-algorithmic quantum gravitational phenomena actually exist, and if they did, how that could conceivably implicate microtubules, and if microtubules were involved, how that could conceivably implicate consciousness, is entirely speculative. Third, cytoplasmic ions such as calcium and sodium are almost certainly present in the microtubule pore, barring the quantum-mechanical effects Penrose envisages. Finally, physiological evidence indicates that consciousness does not directly depend on microtubule properties in any case, rendering doubtful any theory according to which consciousness is generated in the microtubules.
Two very different insights motivate characterizing the brain as a computer. One depends on mathematical theory that defines computability in a highly abstract sense. Here the foundational idea is that of a Turing machine. Not an actual machine, the Turing machine is really a conceptual way of making the point that any well-defined function could be executed, step by step, according to simple 'if-you-are-in-state-P-and-have-input-Q-then-do-R' rules, given enough time (maybe infinite time) [see COMPUTATION]. Insofar as the brain is a device whose input and output can be characterized in terms of some mathematical function -- however complicated -- then in that very abstract sense, it can be mimicked by a Turing machine. Given what is known so far, brains do seem to depend on cause-effect operations, and hence brains appear to be, in some formal sense, equivalent to a Turing machine [see CHURCH-TURING THESIS]. On its own, however, this reveals nothing at all of how the mind-brain actually works. The second insight depends on looking at the brain as a biological device that processes information from the environment to build complex representations that enable the brain to make predictions and select advantageous behaviors. Where necessary to avoid ambiguity, we will refer to the first notion of computation as algorithmic computation, and the second as information processing computation.
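The 'if-you-are-in-state-P-and-have-input-Q-then-do-R' rule format can be made concrete with a minimal sketch. The machine below, its rule table, and all names are illustrative inventions (not drawn from the article): a toy Turing machine whose rules map a (state, symbol) pair to a new state, a symbol to write, and a head movement.

```python
def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    """Run simple (state, symbol) -> (new_state, write, move) rules on a tape."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as blank '_'
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        # The rule: in state P with input Q, do R (write, move, change state).
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: flip every bit, then halt at the first blank cell.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_rules, "1011"))  # -> 0100
```

Any well-defined input-output function can in principle be captured by such a table, which is the abstract sense in which a brain's input-output behavior could be mimicked by a Turing machine.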
Irvin Rock's hypothesis that certain stages of perceptual processing resemble problem solving in cognition is contrasted with some recent work in computer vision (Marr, Ullman) which tries to reduce intelligence in perception to computational organization. The focal example is subjective contours, which Marr thought could be handled by computational modules without descending control, and which Rock thinks are the outcome of intelligent processing.
The explanation of a child's discriminating responses to his environment turns on ascribing to the child a perceptual discrimination which counts certain things as more similar to one another than to some other thing. As Quine forcefully puts it: ‘If an individual is to learn at all, differences in degree of similarity must be implicit in his learning pattern. Otherwise any response, if reinforced, would be conditioned equally and indiscriminately to any and every future episode, all these being equally similar.’ Now for those determined to cleave to behaviourist canons, the problem is to use ‘perceptual similarity’ in explaining the subject's discriminating responses in a way which does not imply the existence of mental states and entities. What this really means is that the behaviourist must reconstruct the notion of ‘perceptual similarity’, purifying it of its mentalistic dimension. So long as physicalism is a reasonable position, and while we are awaiting and abetting the neurophysiological millennium, the behaviourist's project is of significant moment. Now in Word and Object Quine does not seriously attempt to provide behavioural criteria for a subject's perceiving similarities, and he provisionally permits himself the mentalistic idiom he avows finally to eschew.