We often consciously will our own actions. This experience is so profound that it tempts us to believe that our actions are caused by consciousness. It could also be a trick, however – the mind’s way of estimating its own apparent authorship by drawing causal inferences about relationships between thoughts and actions. Cognitive, social, and neuropsychological studies of apparent mental causation suggest that experiences of conscious will frequently depart from actual causal processes and so might not reflect direct perceptions of conscious thought causing action.
A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing Test, but not how, why or whether that makes them feel.
Given its ubiquitous presence in everyday experience, it is surprising that the phenomenology of doing—the experience of being an agent—has received such scant attention in the consciousness literature. But things are starting to change, and a small but growing literature on the content and causes of the phenomenology of first-person agency is beginning to emerge. One of the most influential and stimulating figures in this literature is Daniel Wegner. In a series of papers and his book The Illusion of Conscious Will (ICW), Wegner has developed…
The objectives of this article are twofold. First, by denying the dualism inherent in attempts to load metaphysical significance on the inner/outer distinction, it defends the view that scientific investigation can approach consciousness in itself, and is not somehow restricted in scope to the outward manifestations of a private and hidden realm. Second, it provisionally endorses the central tenets of global workspace theory, and recommends them as a possible basis for the sort of scientific understanding of consciousness thus legitimised. However, the article goes on to argue that global workspace theory alone does not constitute a fully worked-out objective account of the conscious subject. This requires additional attention to be paid to the issue of embodiment, and to the possibility of indexicality that arises when an instantiation of the global workspace architecture inhabits a spatially localised body.
Wegner’s analysis of the illusion of conscious will is close to my own account of how conscious experiences relate to brain processes. But our analyses differ somewhat on how conscious will is not an illusion. Wegner argues that once conscious will arises it enters causally into subsequent mental processing. I argue that while his causal story is accurate, it remains a first-person story. Conscious free will is not an illusion in the sense that this first-person story is compatible with and complementary to a third-person account of voluntary processing in the mind/brain.
The commentators' responses to The Illusion of Conscious Will reveal a healthy range of opinions – pro, con, and occasionally stray. Common concerns and issues are summarized here in terms of 11 “frequently asked questions,” which often center on the theme of how the experience of conscious will supports the creation of the self as author of action.
To reduce the likelihood that psychology will develop in a deeply flawed manner, the present article seeks to provide an introduction to Freud’s conception of consciousness because, among other reasons, his general theory is highly influential in our science and culture and among the best understood by clinicians and experimentalists. The theory is complex and all of its major parts have a bearing on one another; indeed, consciousness has a central place in the total conceptual structure – as is argued, in effect, throughout the present article. The discussion focuses mainly on how conscious psychical processes differ from processes of the psychical apparatus that do not instantiate the Freudian attribute of consciousness. This intrinsic attribute that belongs to every conscious psychical process is seen as including, along with qualitative content, an unmediated, witting awareness of the psychical process that is directed upon itself.
Though this is merely an essay, I challenge you, gentle reader, by attempting to demonstrate that my own words are not fundamentally different from the conscious thoughts in your own mind: I thus claim to have consciousness and qualia.
The current study aims to separate conscious and unconscious behaviors by employing both online and offline measures while the participants were consciously performing a task. Using an eye-movement tracking paradigm, we observed participants’ response patterns for distinguishing within-word-boundary and across-word-boundary reverse errors while reading Chinese sentences. The results showed that when the participants consciously detected errors, their gaze time for target words associated with across-word-boundary reverse errors was significantly longer than that for target words associated with within-word-boundary reverse errors. Surprisingly, the same gaze time pattern was found even when the readers were not consciously aware of the reverse errors. The results were statistically robust, providing converging evidence for the feasibility of our experimental paradigm in decoupling offline behaviors and the online, automatic, and unconscious aspects of cognitive processing in reading.
Daniel Wegner argues that conscious will is an illusion. I examine the adequacy of his theory of apparent mental causation and whether, if accurate, it suggests that our experience of agency and authorship should be considered illusory. I examine various interpretations of this claim and raise problems for each interpretation. I also distinguish between the experiences of agency and authorship.
The possibility of spectrum inversion has been debated since it was raised by Locke and is still discussed because of its implications for functionalist theories of conscious experience. This paper provides a mathematical formulation of the question of spectrum inversion and proves that such inversions, and indeed bijective scramblings of color in general, are logically possible. Symmetries in the structure of color space are, for purposes of the proof, irrelevant. The proof entails that conscious experiences are not identical with functional relations. It leaves open the empirical possibility that functional relations might, at least in part, be causally responsible for generating conscious experiences. Functionalists can propose causal accounts that meet the normal standards for scientific theories, including numerical precision and novel prediction; they cannot, however, claim that, because functional relationships and conscious experiences are identical, any attempt to construct such causal theories entails a category error.
Setting aside the problems of recognising consciousness in a machine, this article considers what would be needed for a machine to have human-like consciousness. Human-like consciousness is an illusion; that is, it exists but is not what it appears to be. The illusion that we are a conscious self having a stream of experiences is constructed when memes compete for replication by human hosts. Some memes survive by being promoted as personal beliefs, desires, opinions and possessions, leading to the formation of a memeplex (or selfplex). Any machine capable of imitation would acquire this type of illusion and think it was conscious. Robots that imitated humans would acquire an illusion of self and consciousness just as we do. Robots that imitated each other would develop their own separate languages, cultures and illusions of self. Distributed selfplexes in large networks of machines are also possible. Unanswered questions include what remains of consciousness without memes, and whether artificial meme machines can ever transcend the illusion of self consciousness.
Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious organism and a functionally equivalent (Turing Indistinguishable) nonconscious "Zombie" organism: In other words, the Blind Watchmaker, a functionalist if ever there was one, is no more a mind reader than we are. Hence Turing-Indistinguishability = Darwin-Indistinguishability. It by no means follows from this, however, that human behavior is therefore to be explained only by the push-pull dynamics of Zombie determinism, as dictated by calculations of "inclusive fitness" and "evolutionarily stable strategies." We are conscious, and, more important, that consciousness is piggy-backing somehow on the vast complex of unobservable internal activity -- call it "cognition" -- that is really responsible for generating all of our behavioral capacities. Hence, except in the palpable presence of the irrational (e.g., our sexual urges) where distal Darwinian factors still have some proximal sway, it is as sensible to seek a Darwinian rather than a cognitive explanation for most of our current behavior as it is to seek a cosmological rather than an engineering explanation of an automobile's behavior. Let evolutionary theory explain what shaped our cognitive capacity (Steklis & Harnad 1976; Harnad 1996), but let cognitive theory explain our resulting behavior.
Contrary to James's emphasis on the sensible continuity of each personal consciousness, our purported "stream," as it presents itself to us, is not accurately described as having a flowing temporal structure; thus Strawson has argued based on how he finds his own consciousness to be. Accordingly, qua object of inner awareness, our consciousness is best characterized as constituted successively by pulses of consciousness separated in time, one from the next, by a momentary state of complete unconsciousness. It seems at times that one's consciousness is flowing along, but this is an illusion that is owed to taking continuities of content, across pulses, for continuity in the process itself of consciousness, and that can be overcome by the proper mode of reflection upon one's consciousness as it is taking place. With reference to James's original account and to commentaries from Dainton and from Tye on Strawson's claims, the present article examines the latter claims, and proposes that Strawson errs in how he gives expression to what he observes firsthand with respect to his consciousness. His own introspective reports indicate that what he describes to be states of complete unconsciousness that directly precede and follow each of his conscious thoughts, are actually totally qualified states of consciousness and so they are not stoppages in the flow of his consciousness. Also, Strawson's special mode of reflection - which he labels "attentive" and speaks of as one's "reflecting very hard" - likely works not to reveal his consciousness to him but, rather, to prevent his apprehending that "phenomenal background," which is there, perhaps always, while he is in the general state that we call "awakeness" and of which each of his states of consciousness partially consists, including the purported states of complete unconsciousness he truly apprehends but misdescribes.
The Latin conscius does not translate anything like mind or consciousness. Only in the mid-nineteenth century do we find the first attempts to study consciousness as its own discipline. Wundt, James, and Freud disagreed about how to approach the science of consciousness, although agreeing that psychology was a 'science of consciousness' that takes lived biological experience as its object. The behaviorists vetoed this idea. By the 1950s, for cognitive science, mind (conscious and unconscious) was considered analogous to computer software. Recently, the science of consciousness has returned as Consciousness Studies, a new interdisciplinary synthesis of neuroscience, psychology, philosophy, and cultural anthropology. But what is new in this renaissance of the science of consciousness? New first, second and third person approaches all propose to take consciousness itself as a variable. This approach is as controversial as the nineteenth-century science of consciousness--controversy perhaps inherent to any science of consciousness.
Revonsuo argues that current brain imaging methods do not allow us to ‘discover’ consciousness. While all observational methods in science have limitations, consciousness is such a massive and pervasive phenomenon that we cannot fail to observe its effects at every level of brain organization: molecular, cellular, electrical, anatomical, metabolic, and even the ‘higher levels of electrophysiological organization that are crucial for the empirical discovery and theoretical explanation of consciousness’. Indeed, the first major discovery in that respect was Hans Berger's finding, some seven decades ago, that scalp EEG is massively different between waking and deep sleep. We now have perhaps a dozen sophisticated methods for monitoring consciousness-related activity at multiple levels of brain observation. Theoretical progress has come quite rapidly. Recently, E.R. John and colleagues have made fundamental findings using Quantitative EEG, showing consistent brainwide changes as a result of several types of general anaesthetics. John has proposed a neuronal ‘field theory’ to account for those results. Another promising new method involves frequency-tagging of competing stimuli, allowing us to follow the activity of billions of neurons synchronized to particular conscious stimuli, always compared to very similar unconscious input. A fundamental theoretical account of such results has been provided by Tononi & Edelman. Such results and theory are in broad agreement with the cognitive theory proposed by Baars.
Philosophers have devoted a great deal of discussion to the question of whether an inverted spectrum thought experiment refutes functionalism. (For a review of the inverted spectrum and its many philosophical applications, see Byrne, 2004.) If Hoffman is correct the matter can be swiftly and conclusively settled, without appeal to any empirical data about color vision (or anything else). Assuming only that color experiences and functional relations can be mathematically represented, a simple mathematical result…