Discusses recent work on representationalism, including: the case for a representationalist theory of consciousness, which explains consciousness in terms of content; rivals such as neurobiological type-type identity theory (Papineau, McLaughlin) and naive realism (Allen, Campbell, Brewer); John Campbell and David Papineau's recent objections to representationalism; the problem of the "laws of appearance"; externalist vs internalist versions of representationalism; the relation between representationalism and the mind-body problem.
John Campbell investigates how consciousness of the world explains our ability to think about the world; how our ability to think about objects we can see depends on our capacity for conscious visual attention to those things. He illuminates classical problems about thought, reference, and experience by looking at the underlying psychological mechanisms on which conscious attention depends.
The significance of consciousness in modern science is discussed by leading authorities from a variety of disciplines. Presenting a wide-ranging survey of current thinking on this important topic, the contributors address such issues as the status of different aspects of consciousness; the criteria for using the concept of consciousness and identifying instances of it; the basis of consciousness in functional brain organization; the relationship between different levels of theoretical discourse; and the functions of consciousness.
This chapter discusses the main types of so-called 'subjective measures of consciousness' used in the current-day science of consciousness. After explaining the key worry about such measures, namely the problem of an ever-present response bias, I discuss the question of whether subjective measures of consciousness are introspective. I show that there is no clear answer to this question, as proponents of subjective measures do not employ a worked-out notion of subjective access. In turn, this makes the problem of response bias less tractable than it might otherwise be.
Neurological syndromes in which consciousness seems to malfunction, such as temporal lobe epilepsy, visual scotomas, Charles Bonnet syndrome, and synesthesia offer valuable clues about the normal functions of consciousness and 'qualia'. An investigation into these syndromes reveals, we argue, that qualia are different from other brain states in that they possess three functional characteristics, which we state in the form of 'three laws of qualia' based on a loose analogy with Newton's three laws of classical mechanics. First, they are irrevocable: I cannot simply decide to start seeing the sunset as green, or feel pain as if it were an itch; second, qualia do not always produce the same behaviour: given a set of qualia, we can choose from a potentially infinite set of possible behaviours to execute; and third, qualia endure in short-term memory, as opposed to non-conscious brain states involved in the on-line guidance of behaviour in real time. We suggest that qualia have evolved these and other attributes because of their role in facilitating non-automatic, decision-based action. We also suggest that the apparent epistemic barrier to knowing what qualia another person is experiencing can be overcome simply by using a 'bridge' of neurons; and we offer a hypothesis about the relation between qualia and one's sense of self.
Consciousness is a mongrel concept: there are a number of very different "consciousnesses." Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state. The mark of access-consciousness, by contrast, is availability for use in reasoning and rationally guiding speech and action. These concepts are often partly or totally conflated, with bad results. This target article uses as an example a form of reasoning about a function of "consciousness" based on the phenomenon of blindsight. Some information about stimuli in the blind field is represented in the brains of blindsight patients, as shown by their correct "guesses," but they cannot harness this information in the service of action, and this is said to show that a function of phenomenal consciousness is somehow to enable information represented in the brain to guide action. But stimuli in the blind field are BOTH access-unconscious and phenomenally unconscious. The fallacy is: an obvious function of the machinery of access-consciousness is illicitly transferred to phenomenal consciousness.
It is well known that the nature of consciousness is elusive, and that attempts to understand it generate problems in metaphysics, philosophy of mind, psychology, and neuroscience. Less appreciated are the important – even if still elusive – connections between consciousness and issues in ethics. In this chapter we consider three such connections. First, we consider the relevance of consciousness for questions surrounding an entity's moral status. Second, we consider the relevance of consciousness for questions surrounding moral responsibility for action. Third, we consider the relevance of consciousness for the acquisition of moral knowledge.
Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.
How can we disentangle the neural basis of phenomenal consciousness from the neural machinery of the cognitive access that underlies reports of phenomenal consciousness? We can see the problem in stark form if we ask how we could tell whether representations inside a Fodorian module are phenomenally conscious. The methodology would seem straightforward: find the neural natural kinds that are the basis of phenomenal consciousness in clear cases when subjects are completely confident and we have no reason to doubt their authority, and look to see whether those neural natural kinds exist within Fodorian modules. But a puzzle arises: do we include the machinery underlying reportability within the neural natural kinds of the clear cases? If the answer is 'Yes', then there can be no phenomenally conscious representations in Fodorian modules. But how can we know if the answer is 'Yes'? The suggested methodology requires an answer to the question it was supposed to answer! The paper argues for an abstract solution to the problem and exhibits a source of empirical data that is relevant, data that show that in a certain sense phenomenal consciousness overflows cognitive accessibility. The paper argues that we can find a neural realizer of this overflow if we assume that the neural basis of phenomenal consciousness does not include the neural basis of cognitive accessibility, and that this assumption is justified by the explanations it allows.
The position advanced in this paper is that the bedrock of emotional feelings is contained within the evolved emotional action apparatus of mammalian brains. This dual-aspect monism approach to brain–mind functions, which asserts that emotional feelings may reflect the neurodynamics of brain systems that generate instinctual emotional behaviors, saves us from various conceptual conundrums. In coarse form, primary process affective consciousness seems to be fundamentally an unconditional "gift of nature" rather than an acquired skill, even though those systems facilitate skill acquisition via various felt reinforcements. Affective consciousness, being a comparatively intrinsic function of the brain, shared homologously by all mammalian species, should be the easiest variant of consciousness to study in animals. This is not to deny that some secondary processes cannot be evaluated in animals with sufficiently clever behavioral learning procedures, as with place-preference procedures and the analysis of changes in learned behaviors after one has induced re-valuation of incentives. Rather, the claim is that a direct neuroscientific study of primary process emotional/affective states is best achieved through the study of the intrinsic, albeit experientially refined, emotional action tendencies of other animals. In this view, core emotional feelings may reflect the neurodynamic attractor landscapes of a variety of extended trans-diencephalic, limbic emotional action systems—including SEEKING, FEAR, RAGE, LUST, CARE, PANIC, and PLAY. Through a study of these brain systems, the neural infrastructure of human and animal affective consciousness may be revealed. Emotional feelings are instantiated in large-scale neurodynamics that can be most effectively monitored via the ethological analysis of emotional action tendencies and the accompanying brain neurochemical/electrical changes. The intrinsic coherence of such emotional responses is demonstrated by the fact that they can be provoked by electrical and chemical stimulation of specific brain zones—effects that are affectively laden. For substantive progress in this emerging research arena, animal brain researchers need to discuss affective brain functions more openly. Secondary awareness processes, because of their more conditional, contextually situated nature, are more difficult to understand in any neuroscientific detail. In other words, the information-processing brain functions, critical for cognitive consciousness, are harder to study in other animals than the more homologous emotional/motivational affective state functions of the brain.
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
Ontological emergentists about consciousness maintain that phenomenal properties are ontologically fundamental properties that are nonetheless non-basic: they emerge from reality only once the ultimate material constituents of reality (the "UPCs") are suitably arranged. Ontological emergentism has been challenged on the grounds that it is insufficiently explanatory. In this essay, I develop the version of ontological emergentism I take to be the most explanatorily promising—the causal theory of ontological emergence—in light of four challenges: The Collaboration Problem (how do UPCs jointly manifest their collective consciousness-generating power?); The Threshold Problem (under what circumstances do UPCs jointly manifest their collective consciousness-generating power?); The Subject Problem (which object is the bearer of emergent phenomenal states?); and The Specificity Problem (what determines which specific phenomenal state is generated?). In response to these challenges, I arrive at the following picture of ontological emergence. When UPCs that are parts of a suitably complex sensorimotor system become entangled, they jointly manifest a subject-forming power (where subjects are deeply unified composites of the UPCs responsible for generating them). The emergent subjects thereby formed exhibit a novel causal power: the power to generate phenomenal states, which they themselves instantiate: states that "interpret" what is going on in the brain.
No mental phenomenon is more central than consciousness to an adequate understanding of the mind. Nor does any mental phenomenon seem more stubbornly to resist theoretical treatment. Consciousness is so basic to the way we think about the mind that it can be tempting to suppose that no mental states exist that are not conscious states. Indeed, it may even seem mysterious what sort of thing a mental state might be if it is not a conscious state. On this way of looking at things, if any mental states do lack consciousness, they are exceptional cases that call for special explanation or qualification. Perhaps dispositional or cognitive states exist that are not conscious, but nonetheless count as mental states.
What is consciousness? How does the subjective character of consciousness fit into an objective world? How can there be a science of consciousness? In this sequel to his groundbreaking and controversial The Conscious Mind, David Chalmers develops a unified framework that addresses these questions and many others. Starting with a statement of the "hard problem" of consciousness, Chalmers builds a positive framework for the science of consciousness and a nonreductive vision of the metaphysics of consciousness. He replies to many critics of The Conscious Mind, and then develops a positive theory in new directions. The book includes original accounts of how we think and know about consciousness, of the unity of consciousness, and of how consciousness relates to the external world. Along the way, Chalmers develops many provocative ideas: the "consciousness meter", the Garden of Eden as a model of perceptual experience, and The Matrix as a guide to the deepest philosophical problems about consciousness and the external world. This book will be required reading for anyone interested in the problems of mind, brain, consciousness, and reality.
Newcomers to the philosophy of mind are sometimes resistant to the idea that pain is a mental state. If asked to defend their view, they might say something like this: pain is a physical state, it is a state of the body. A pain in one's leg feels to be in the leg, not 'in the mind'. After all, sometimes people distinguish pain which is 'all in the mind' from a genuine pain, sometimes because the second is 'physical' while the first is not. And we also occasionally distinguish mental pain (which is normally understood as some kind of emotional distress) from the 'physical pain' one feels in one's body. So what can be meant by saying that pain is a mental state? Of course, it only takes a little reflection to show that this naive view is mistaken. Pain is a state of consciousness, or an event in consciousness, and whether or not all states of mind are conscious, it is indisputable that only minds, or states of mind, are conscious. But does the naive view tell us anything about the concept of pain, or the concept of mind? I think it does. In this paper, I shall give reasons for thinking that consciousness is a form of intentionality, the mind's 'direction upon its objects'. I shall claim that the consciousness involved in bodily sensations like pain is constituted by the mind's direction upon the part or region of the body where the sensation feels to be. Given this, it is less surprising that the naive view of pain says what it does: the apparent 'physicality' of pain is a consequence of confusing the object of the intentional state—the part of the body in which the pain is felt—with the state of being in pain.
What would Gottfried Wilhelm Leibniz have said about today's problem of consciousness? Some philosophers claim that Leibniz was one of the first to argue that there is an 'explanatory gap' between our knowledge of matter and our knowledge of consciousness, and that he thought this posed a problem for materialism (see for example Churchland 1995: 191-2; Kriegel 2015: 49; Seager 1991; Searle 1983: 267-8). This is supposed to be the point of the famous passage in the Monadology (1714), in which Leibniz argues that perception is 'inexplicable in terms of mechanical reasons'...
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be preserved.
The standard behavioral index for human consciousness is the ability to report events with accuracy. While this method is routinely used for scientific and medical applications in humans, it is not easy to generalize to other species. Brain evidence may lend itself more easily to comparative testing. Human consciousness involves widespread, relatively fast low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. These features have also been found in other mammals, which suggests that consciousness is a major biological adaptation in mammals. We suggest more than a dozen additional properties of human consciousness that may be used to test comparative predictions. Such homologies are necessarily more remote in non-mammals, which do not share the thalamocortical complex. However, as we learn more we may be able to make "deeper" predictions that apply to some birds, reptiles, large-brained invertebrates, and perhaps other species.
What are the folk-conceptual connections between free will and consciousness? In this paper I present results which indicate that consciousness plays central roles in folk conceptions of free will. When conscious states cause behavior, people tend to judge that the agent acted freely. And when unconscious states cause behavior, people tend to judge that the agent did not act freely. Further, these studies contribute to recent experimental work on folk philosophical affiliation, which analyzes folk responses to determine whether folk views are consistent with the view that free will and determinism are incompatible (incompatibilism) or with the opposite view (compatibilism). Conscious causation of behavior tends to elicit pro-free will judgments, even when the causation takes place deterministically. Thus, when controlling for consciousness, many folk seem to be compatibilists. However, participants who disagree with the deterministic or cognitive scientific descriptions given of human behavior tend to give incompatibilist responses.
Philosophers traditionally recognize two main features of mental states: intentionality and phenomenal consciousness. To a first approximation, intentionality is the aboutness of mental states, and phenomenal consciousness is the felt, experiential, qualitative, or "what it's like" aspect of mental states. In the past few decades, these features have been widely assumed to be distinct and independent. But several philosophers have recently challenged this assumption, arguing that intentionality and consciousness are importantly related. This article overviews the key views on the relationship between consciousness and intentionality and describes our favored view, which is a version of the phenomenal intentionality theory, roughly the view that the most fundamental kind of intentionality arises from phenomenal consciousness.
According to what we will call subjectivity theories of consciousness, there is a constitutive connection between phenomenal consciousness and subjectivity: there is something it is like for a subject to have mental state M only if M is characterized by a certain mine-ness or for-me-ness. Such theories appear to face certain psychopathological counterexamples: patients appear to report conscious experiences that lack this subjective element. A subsidiary goal of this chapter is to articulate with greater precision both subjectivity theories and the psychopathological challenge they face. The chapter's central goal is to present two new approaches to defending subjectivity theories in the face of this challenge. What distinguishes these two approaches is that they go to great lengths to interpret patients' reports at face value – greater lengths, at any rate, than more widespread approaches in the extant literature.
Having laid the groundwork in his critically acclaimed books Neural Darwinism (Basic Books, 1987) and Topobiology (Basic Books, 1988), Nobel laureate Gerald M. Edelman now proposes a comprehensive theory of consciousness in The Remembered ...
Conscious experience is one of the most difficult and thorny problems in psychological science. Its study has been neglected for many years, either because it was thought to be too difficult, or because the relevant evidence was thought to be poor. Bernard Baars suggests a way to specify empirical constraints on a theory of consciousness by contrasting well-established conscious phenomena - such as stimulus representations known to be attended, perceptual, and informative - with closely comparable unconscious ones - such as stimulus representations known to be preperceptual, unattended, or habituated. Adducing data to show that consciousness is associated with a kind of global workspace in the nervous system, and that several brain structures are known to behave in accordance with his theory, Baars helps to clarify many difficult problems.
Cognitive science typically postulates unconscious mental phenomena, computational or otherwise, to explain cognitive capacities. The mental phenomena in question are supposed to be inaccessible in principle to consciousness. I try to show that this is a mistake, because all unconscious intentionality must be accessible in principle to consciousness; we have no notion of intrinsic intentionality except in terms of its accessibility to consciousness. I call this claim the connection principle. The argument for it proceeds in six steps. The essential point is that intrinsic intentionality has aspectual shape: Our mental representations represent the world under specific aspects, and these aspectual features are essential to a mental state's being the state that it is.
This article makes five main points. Individual human consciousness is formed in the dynamic interrelation of self and other, and therefore is inherently intersubjective. The concrete encounter of self and other fundamentally involves empathy, understood as a unique and irreducible kind of intentionality. Empathy is the precondition of the science of consciousness. Human empathy...
ABSTRACT: In this commentary, I criticize Metzinger's interdisciplinary approach to fixing the explanandum of a theory of consciousness and I offer a commonsense alternative in its place. I then re-evaluate Metzinger's multi-faceted working concept of consciousness, and argue for a shift away from the notion of "global availability" and towards the notions of "perspectivalness" and "transparency." This serves to highlight the role of Metzinger's "phenomenal model of the intentionality relation" (PMIR) in explaining consciousness, and it helps to locate Metzinger's theory in relation to other naturalistic theories of consciousness.
Consciousness is scientifically challenging to study because of its subjective aspect. This leads researchers to rely on report-based experimental paradigms in order to discover neural correlates of consciousness (NCCs). I argue that the reliance on reports has biased the search for NCCs, thus creating what I call 'methodological artefacts'. This paper has three main goals: first, describe the measurement problem in consciousness science and argue that this problem led to the emergence of methodological artefacts. Second, provide a critical assessment of the NCCs put forward by the global neuronal workspace theory. Third, provide the means of dissociating genuine NCCs from methodological artefacts.
The human brain consists of approximately 100 billion electrically active neurones that generate an endogenous electromagnetic field, whose role in neuronal computing has not been fully examined. The source, magnitude and likely influence of the brain's endogenous em field are here considered. An estimate of the strength and magnitude of the brain's em field is gained from theoretical considerations, brain scanning and microelectrode data. An estimate of the likely influence of the brain's em field is gained from theoretical principles and considerations of the experimental effects of external em fields on neurone firing both in vitro and in vivo. Synchronous firing of distributed neurones phase-locks induced em field fluctuations to increase their magnitude and influence. Synchronous firing has previously been demonstrated to correlate with awareness and perception, indicating that perturbations to the brain's em field also correlate with awareness. The brain's em field represents an integrated electromagnetic field representation of distributed neuronal information and has dynamics that closely map to those expected for a correlate of consciousness. I propose that the brain's em information field is the physical substrate of conscious awareness - the cemi field - and make a number of predictions that follow from this proposal. Experimental evidence pertinent to these predictions is examined and shown to be entirely consistent with the cemi field theory. This theory provides solutions to many of the intractable problems of consciousness - such as the binding problem - and provides new insights into the role of consciousness, the meaning of free will and the nature of qualia. It thus places consciousness within a secure physical framework and provides a route towards constructing an artificial consciousness.
Contemporary theories of consciousness are based on widely different concepts of its nature, most or all of which probably embody aspects of the truth about it. Starting with a concept of consciousness indicated by the phrase "the feeling of what happens" (the title of a book by Antonio Damásio), we attempt to build a framework capable of supporting and resolving divergent views. We picture consciousness in terms of Reality experiencing itself from the perspective of cognitive agents. Each conscious experience is regarded as composed of momentary feeling events that are combined by recognition and evaluation into extended conscious episodes that bind cognitive contents with a wide range of apparent durations (0.1 secs to 2 or more secs, for us humans, depending on circumstances and context). Three necessary conditions for the existence of consciousness are identified: a) a ground of Reality, envisaged as a universal field of potentiality encompassing all possible manifestations, whether material or 'mental'; b) a transitional zone, leading to; c) a manifest world with its fundamental divisions into material, 'informational' and quale-endowed aspects. We explore ideas about the nature of these necessary conditions, how they may relate to one another and whether our suggestions have empirical implications.
Most early studies of consciousness have focused on human subjects. This is understandable, given that humans are capable of reporting accurately the events they experience through language or by way of other kinds of voluntary response. As researchers turn their attention to other animals, "accurate report" methodologies become increasingly difficult to apply. Alternative strategies for amassing evidence for consciousness in non-human species include searching for evolutionary homologies in anatomical substrates and measurement of physiological correlates of conscious states. In addition, creative means must be developed for eliciting behaviors consistent with consciousness. In this paper, we explore whether necessary conditions for consciousness can be established for species as disparate as birds and cephalopods. We conclude that a strong case can be made for avian species and that the case for cephalopods remains open. Nonetheless, a consistent effort should yield new means for interpreting animal behavior.
The centerpiece of the scientific study of consciousness is the search for the neural correlates of consciousness. Yet science is typically interested not only in discovering correlations, but also – and more deeply – in explaining them. When faced with a correlation between two phenomena in nature, we typically want to know why they correlate. The purpose of this chapter is twofold. The first half attempts to lay out the various possible explanations of the correlation between consciousness and its neural correlate – to provide a sort of "menu" of options from which we probably would ultimately have to choose. The second half raises considerations suggesting that, under certain reasonable assumptions, the choice among these various options may be in principle underdetermined by the relevant scientific evidence.
Subjectivity theories of consciousness take self-reference, somehow construed, as essential to having conscious experience. These theories differ with respect to how many levels they posit and to whether self-reference is conscious or not. But all treat self-referencing as a process that transpires at the personal level, rather than at the subpersonal level, the level of mechanism. -/- Working with conceptual resources afforded by pre-existing theories of consciousness that take self-reference to be essential, several attempts have been made to (...) explain seemingly anomalous cases, especially instances of alien experience. These experiences are distinctive precisely because self-referencing is explicitly denied by the only person able to report them: those who experience them deny that certain actions, mental states, or body parts belong to self. The relevant actions, mental states, or body parts are sometimes attributed to someone or something other than self, and sometimes they are just described as not belonging to self. But all are referred away from self. -/- The cases under discussion here include somatoparaphrenia, schizophrenia, depersonalization, anarchic hand syndrome, and utilization behavior; the theories employed, Higher-Order Thought, Wide Intrinsicality, and Self-Representational. Below I argue that each of these attempts at explaining or explaining away the anomalies fails. Along the way, since each of these theories seeks at least compatibility with science, I sketch experimental approaches that could be used to adduce support for my position, or indeed for the positions of theorists with whom I disagree. -/- In a concluding section I first identify two presuppositions shared by all of the theorists considered here, and argue that both are either erroneous or misleading. 
Second, I call attention to divergent paths adopted when attempting to explain alienation experiences: some theorists choose to add a mental ingredient, while others prefer to subtract one. I argue that alienation from experience, action, or body parts could result from either addition or subtraction, and that the two can be incorporated within a comprehensive explanatory framework. Finally, I suggest that this comprehensive framework would require self-referencing of a sort, but self-referencing that occurs solely on the level of mechanism, or the subpersonal level. In adumbrating some features of this “subpersonal self,” I suggest that there might be one respect in which it is prior to conscious experience.
In the past decade, the notion of a neural correlate of consciousness (or NCC) has become a focal point for scientific research on consciousness (Metzinger, 2000a). A growing number of investigators believe that the first step toward a science of consciousness is to discover the neural correlates of consciousness. Indeed, Francis Crick has gone so far as to proclaim that ‘we … need to discover the neural correlates of consciousness.… For this task the primate visual system seems especially attractive.… No longer need one spend time attempting … to endure the tedium of philosophers perpetually disagreeing with each other. Consciousness is now largely a scientific problem’ (Crick, 1996, p. 486). Yet the question of what it means to be a neural correlate of consciousness is actually far from straightforward, for it involves fundamental empirical, methodological, and philosophical issues about the nature of consciousness and its relationship to the brain. Even if one assumes, as we do, that states of consciousness causally depend on states of the brain, one can nevertheless wonder in what sense there is, or could be, such a thing as a neural correlate of consciousness.
Relative blindsight is said to occur when different levels of subjective awareness are obtained at equality of objective performance. Using metacontrast masking, Lau and Passingham reported relative blindsight in normal observers at the shorter of two stimulus-onset asynchronies (SOAs) between target and mask. Experiment 1 replicated the critical asymmetry in subjective awareness at equality of objective performance. We argue that this asymmetry cannot be regarded as evidence for relative blindsight because the observers’ responses were based on different attributes of the stimuli at the two SOAs. With an invariant criterion content, there was no asymmetry in subjective awareness across the two SOAs even though objective performance was the same. Experiment 3 examined the effect of criterion level on estimates of relative blindsight. Collectively, the present results question whether metacontrast masking is a suitable paradigm for establishing relative blindsight. Implications for theories of consciousness are discussed.
Introspection stands at the interface between two major currents in philosophy and related areas of science: on the one hand, there are metaphysical and scientific questions about the nature of consciousness; and on the other hand, there are normative and epistemological questions about the nature of self-knowledge. Introspection seems tied up with consciousness, to the point that some writers define consciousness in terms of introspection; and it is also tied up with self-knowledge, since introspection is the distinctive way in which we come to know about ourselves and, in particular, about our own conscious mental states, processes and events. Each of these topics – consciousness and self-knowledge – has generated an extensive philosophical literature in its own right. But despite some notable exceptions, the relationship between consciousness and self-knowledge has been curiously neglected and remains poorly understood. Indeed, until quite recently, the sub-fields of philosophy of mind and epistemology were pursued largely in isolation from one another. Recent philosophy of mind has been dominated by metaphysical questions about the nature of consciousness and its place in the physical world, while much less attention has been devoted to questions about the epistemic role of consciousness as a source of knowledge and justified belief. Similarly, recent epistemology has been organized around questions about the nature of knowledge and justified belief, but much of this discussion has developed independently of recent work in philosophy of mind about the nature of consciousness. The impetus behind this volume is to bring together these two lines of research by exploring the nature of introspection, which lies at the intersection between consciousness and self-knowledge. This volume collects fourteen new essays and one reprinted essay in which the interplay between concerns in epistemology and the philosophy of mind is a major focus.
While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published. The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper’s review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!
In recent decades, with advances in the behavioral, cognitive, and neurosciences, the idea that patterns of human behavior may ultimately be due to factors beyond our conscious control has increasingly gained traction and renewed interest in the age-old problem of free will. To properly assess what, if anything, these empirical advances can tell us about free will and moral responsibility, we first need to get clear on the following questions: Is consciousness necessary for free will? If so, what role or function must it play? Are agents morally responsible for actions and behaviors that are carried out automatically or without conscious control or guidance? Are they morally responsible for actions, judgments, and attitudes that are the result of implicit biases or situational features of their surroundings of which they are unaware? What about the actions of somnambulists or cases of extreme sleepwalking where consciousness is largely absent? Clarifying the relationship between consciousness and free will is imperative if we want to evaluate the various arguments for and against free will. For example, do compatibilist reasons-responsive and deep self accounts require consciousness? If so, are they threatened by recent developments in the behavioral, cognitive, and neurosciences? What about libertarian accounts of free will? What powers, if any, do they impart to consciousness and are they consistent with our best scientific theories about the world? In this survey piece, I will outline and assess several distinct views on the relationship between consciousness and free will.
Non-sensory experiences represent almost all context information in consciousness. They condition most aspects of conscious cognition including voluntary retrieval, perception, monitoring, problem solving, emotion, evaluation, meaning recognition. Many peculiar aspects of non-sensory qualia (e.g., they resist being 'grasped' by an act of attention) are explained as adaptations shaped by the cognitive functions they serve. The most important nonsensory experience is coherence or "rightness." Rightness represents degrees of context fit among contents in consciousness, and between conscious and non-conscious processes. Rightness (not familiarity) is the feeling-of-knowing in implicit cognition. The experience of rightness suggests that neural mechanisms "compute" signals indicating the global dynamics of network integration.
Abstract: Nietzsche famously wrote that “consciousness is a surface” (EH, Why I am so clever, 9: 97). The aim of this paper is to make sense of this quite puzzling contention—Superficiality, for short. In doing this, I shall focus on two further claims—both to be found in Gay Science 354—which I take to substantiate Nietzsche’s endorsement of Superficiality. The first claim is that consciousness is superfluous—which I call the “superfluousness claim” (SC). The second claim is that consciousness is the source of some deep falsification—which I call the “falsification claim” (FC). I shall start by considering Nietzsche’s notion of consciousness. Here, I shall argue that the kind of consciousness he is concerned with is in fact self-consciousness and that he put forward a higher-order theory of it. Then, I shall address the two claims. Regarding (FC), my proposal will be that, according to Nietzsche, the content of (self-)conscious mental states is falsified in virtue of its being articulated propositionally. Regarding (SC), I shall claim that it is best read as a weak version of epiphenomenalism about conscious causation. In addressing both points, I shall discuss in particular the influential reading of Nietzsche’s theory of consciousness offered by Katsafanas (2005).
The search for neural correlates of consciousness (or NCCs) is arguably the cornerstone of the recent resurgence of the science of consciousness. The search poses many difficult empirical problems, but it seems to be tractable in principle, and some ingenious studies in recent years have led to considerable progress. A number of proposals have been put forward concerning the nature and location of neural correlates of consciousness. A few of these include.
Can we make progress exploring consciousness? Or is it forever beyond human reach? In science we never know the ultimate outcome of the journey. We can only take whatever steps our current knowledge affords. This paper explores today's evidence from the viewpoint of Global Workspace theory. First, we ask what kind of evidence has the most direct bearing on the question. The answer given here is ‘contrastive analysis’ -- a set of paired comparisons between similar conscious and unconscious processes. This body of evidence is already quite large, and constrains any possible theory. Because it involves both conscious and unconscious events, it deals directly with our own subjective experience, as anyone can tell by trying the demonstrations in this article.
The ‘Toward a Science of Consciousness’ conference – which has now become ‘The Science of Consciousness’ conference – recently (June 5-10, 2017) took place instead at the receptive venue of the Hyatt Regency in La Jolla, California. It was well-planned and well-run, which is extraordinary considering that it had to be organized all over again within a month or two when the original Shanghai location was cancelled. Things ran smoothly at La Jolla and it was well attended for an odd-year, non-Tucson setting. The Director of the Center for Consciousness Studies, Dr. Stuart Hameroff, and his able Assistant Director and logistics manager, Abi Behar Montefiore, deserve full credit for carrying off this last-minute transfer, as do many others who worked in supporting capacities.
We first describe how the concept of “fringe consciousness” can characterise gradations of consciousness between the extremes of implicit and explicit learning. We then show that the NEO-PI-R personality measure of openness to feelings, chosen to reflect the ability to introspect on fringe feelings, influences both learning and awareness in the serial reaction time task under conditions that have previously been associated with implicit learning. This provides empirical evidence for the proposed phenomenology and functional role of fringe consciousness in so-called implicit learning paradigms. Introducing an individual difference variable also helped to identify possible limitations of the exclusion task as a measure of conscious sequence knowledge. Further exploration of individual differences in fringe awareness may help to avoid polarity in the implicit learning debate, and to resolve apparent inconsistencies between previous SRT studies.
Abstract. This introductory chapter was written in 1996, for a new book of review articles on the emerging science of consciousness, specifically aimed at undergraduate and postgraduate students by experts in the relevant fields. Following on a brief history, the chapter moves on to definitions of consciousness and background philosophical issues, and then introduces a unified, non-reductionist scientific approach. It then summarises major issues for studies of consciousness in cognitive psychology, including studies of attention, memory, the extent of preconscious analysis and the relation of consciousness more generally to human information processing. It then turns to the neuropsychology of consciousness, starting with some apparent neural requirements for the transition from preconscious to conscious states, various clinical dissociations of consciousness, conditions for integration (or binding) and, finally, clinical applications, including different forms of mind/body interaction and evidence for the causal efficacy of mental states. The chapter concludes that while some of the ancient problems of consciousness remain unsolved, its study has become the subject of a rapidly developing science.
The issue of the biological origin of consciousness is linked to that of its function. One source of evidence in this regard is the contrast between the types of information that are and are not included within its compass. Consciousness presents us with a stable arena for our actions—the world—but excludes awareness of the multiple sensory and sensorimotor transformations through which the image of that world is extracted from the confounding influence of self-produced motion of multiple receptor arrays mounted on multijointed and swivelling body parts. Likewise excluded are the complex orchestrations of thousands of muscle movements routinely involved in the pursuit of our goals. This suggests that consciousness arose as a solution to problems in the logistics of decision making in mobile animals with centralized brains, and has correspondingly ancient roots.
Ned Block's influential distinction between phenomenal and access consciousness has become a staple of current discussions of consciousness. It is not often noted, however, that his distinction tacitly embodies unargued theoretical assumptions that favor some theoretical treatments at the expense of others. This is equally so for his less widely discussed distinction between phenomenal consciousness and what he calls reflexive consciousness. I argue that the distinction between phenomenal and access consciousness, as Block draws it, is untenable. Though mental states that have qualitative character plainly differ from those with no mental qualities, a mental state's being conscious is the same property for both kinds of mental state. For one thing, as Block describes access consciousness, that notion does not pick out any property that we intuitively count as a mental state's being conscious. But the deeper problem is that Block's notion of phenomenal consciousness, or phenomenality, is ambiguous as between two very different mental properties. The failure to distinguish these results in the begging of important theoretical questions. Once the two kinds of phenomenality have been distinguished, the way is clear to explain qualitative consciousness by appeal to a model such as the higher-order-thought hypothesis.
One of the most compelling questions still unanswered in neuroscience is how consciousness arises. In this article, we examine visual processing, the parietal lobe, and contralateral neglect syndrome as a window into consciousness and how the brain functions as the mind, and we introduce a mechanism for the processing of visual information and its role in consciousness. We propose that consciousness arises from integration of information from throughout the body and brain by the thalamus, and that the thalamus reimages visual and other sensory information from throughout the cortex in a default three-dimensional space in the mind. We further suggest that the thalamus generates a dynamic default three-dimensional space by integrating processed information from corticothalamic feedback loops, creating an infrastructure that may form the basis of our consciousness. Further experimental evidence is needed to examine and support this hypothesis and the role of the thalamus, and to further elucidate the mechanism of consciousness.
One major problem that many hypotheses regarding the neural correlate of consciousness face is what we might call “the why question”: why would this particular neural feature, rather than another, correlate with consciousness? The purpose of the present paper is to develop an NCC hypothesis that answers this question. The proposed hypothesis is inspired by the cross-order integration (COI) theory of consciousness, according to which consciousness arises from the functional integration of a first-order representation of an external stimulus and a second-order representation of that first-order representation. The proposal comes in two steps. The first step concerns the “general shape” of the NCC and can be directly derived from COI theory. The second step is a concrete hypothesis that can be arrived at by combining the general shape with empirical considerations.