Daniel Dennett (1996) has disputed David Chalmers' (1995) assertion that there is a "hard problem of consciousness" worth solving in the philosophy of mind. In this paper I defend Chalmers against Dennett on this point: I argue that there is a hard problem of consciousness, that it is distinct in kind from the so-called easy problems, and that it is vital for the sake of honest and productive research in the cognitive sciences to be clear about the difference. But I have my own rebuke for Chalmers on the point of explanation. Chalmers (1995, 1996) proposes to "solve" the hard problem of consciousness by positing qualia as fundamental features of the universe, alongside such ontological basics as mass and space-time. But this is an inadequate solution: to posit, I will urge, is not to explain. To bolster this view, I borrow from an account of explanation by which it must provide "epistemic satisfaction" to be considered successful (Rowlands, 2001; Campbell, 2009), and show that Chalmers' proposal fails on this account. I conclude that research in the science of consciousness cannot move forward without greater conceptual clarity in the field.
According to David Chalmers, the hard problem of consciousness consists of explaining how and why qualitative experience arises from physical states. Moreover, Chalmers argues that materialist and reductive explanations of mentality are incapable of addressing the hard problem. In this chapter, I suggest that Chalmers’ hard problem can be usefully distinguished into a ‘how question’ and ‘why question,’ and I argue that evolutionary biology has the resources to address the question of why qualitative experience arises from brain states. From this perspective, I discuss the different kinds of evolutionary explanations (e.g., adaptationist, exaptationist, spandrel) that can explain the origins of the qualitative aspects of various conscious states. This argument is intended to clarify which parts of Chalmers’ hard problem are amenable to scientific analysis.
In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and Bio-AI (Boltuc et al., "Replication of the Hard Problem of Consciousness in AI and Bio-AI: An Early Conceptual Framework," 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers's hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is an important claim because if true it would demystify the privileged access problem of first-person consciousness and cast it as an empirical problem of science and not a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument that precludes the implementation of H-consciousness in an organic or inorganic machine provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process. That is, in the same way that we do not preclude a machine from implementing photosynthesis, we also do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, and no longer a philosophical question. I propose that Boltuc's conclusion, while plausible and convincing, comes at a very high price; the argument given for his conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine.
The purpose of this paper is to comment on the argument for his conclusion and offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.
As I type these words, cognitive systems in my brain engage in visual and auditory information processing. This processing is accompanied by subjective states of consciousness, such as the auditory experience of hearing the tap-tap-tap of the keyboard and the visual experience of seeing the letters appear on the screen. How does the brain's activity generate such experiences? Why should it be accompanied by conscious experience in the first place? This is the hard problem of consciousness.
I show that the recursive structure of Leibniz's Law requires agents to perform infinitely many operations to psychologically identify the referents of phenomenal and physical concepts, even though the referents of ordinary concepts (e.g. Hesperus and Phosphorus) can be identified in a finite number of steps. The resulting problem resembles the hard problem of consciousness in the fact that it appears (and indeed is) unsolvable by anyone for whom it arises, and in the fact that it invites dualist and eliminativist responses. Moreover, if this is the hard problem then we can predict that regardless of the strength of the argument for physicalism, and regardless of physicalism's truth, an ineliminable dissatisfaction is bound to accompany any physicalist theory of consciousness. Accordingly, I suggest that this is the hard problem of consciousness, and therefore that the hard problem arises from a recursively degenerate application of Leibniz's Law.
The paper begins with a restatement of Chalmers's "hard problem of consciousness". It is suggested that an interactionist approach is one of the possible solutions of this problem. Some fresh arguments against the identity theory and epiphenomenalism as main rivals of interactionism are developed. One of these arguments has among its corollaries a denial of local supervenience, although not of the causal closure principle. As a result of these considerations a version of "local interactionism" (compatible with causal closure) is proposed.
David Chalmers argues that consciousness -- authentic, first-person, conscious consciousness -- cannot be reduced to brain events or to any physical event, and that efforts to find a workable mind-body identity theory are, therefore, doomed in principle. But for Chalmers and non-reductionists in general, consciousness consists exclusively, or at least paradigmatically, of phenomenal or qualia-consciousness. This results in a seriously inadequate understanding both of consciousness and of the "hard problem." I describe other, higher-order cognitional events which must be conscious if the "hard problem" is to be solved -- in any sense of 'solve' which would make us any the wiser about it -- but whose consciousness is quite different from the qualia and phenomena usually inventoried. Events of this kind are both part of the hard problem and the means by which we will solve it, if we ever do.
This paper argues that the form of explanation at issue in the hard problem of consciousness is scientifically irrelevant, despite appearances to the contrary. In particular, it is argued that the 'sense of understanding' that plays a critical role in the form of explanation implicated in the hard problem provides neither a necessary nor a sufficient condition on satisfactory scientific explanation. Considerations of the actual tools and methods available to scientists are used to make the case against it being a necessary condition, and work by J.D. Trout that exploits psychological research on the hindsight and overconfidence biases is used to show that it is not a sufficient condition. It is argued, however, that certain intellectual and moral concerns give us good reason to still try to meet the hard problem's explanatory challenge, despite its extrascientific nature.
The philosophical mind-body problem, which Chalmers has named the 'Hard Problem', concerns the nature of the mind and the body. Physicalist approaches have been explored intensively in recent years but have brought us no consensual solution. Dualistic approaches have also been scrutinised since Descartes, but without consensual success. Mentalism has received little attention, yet it offers an elegantly simple solution to the hard problem.
This article was written as a commentary on a target article by Peter W. Ross entitled "The Location Problem for Color Subjectivism" [Consciousness and Cognition 10(1), 42-58 (2001)], and is published together with it, and with other commentaries and Ross's reply. If you or your library have the necessary subscription you can get PDF versions of the target article, all the commentaries, and Ross's reply to the commentaries here. However, I do not think that it is by any means essential for you to have read Ross's piece in order to understand this one. Ross defends a view called "color physicalism" or color realism that holds (simplifying somewhat) that colors are real physical properties (in typical cases, spectral reflectances of object surfaces). This is in opposition to what is probably a more widely held "subjectivist" view of color, holding that color qualities really exist only in the mind. In my commentary I suggest that a realist view of qualitative properties, such as Ross's, together with a direct, active view of perception, and a concept of "extended mind" (Clark & Chalmers, 1998) may provide the materials for a real solution to the notorious hard problem of consciousness. I sketch this solution in outline. - N.J.T.T.
Although far from unanimous, there seems to be a general consensus that neither mind nor brain can be reduced without remainder to the other. This essay argues that indeed both mind and brain need to be included in a nonreductionistic way in any genuinely integral theory of consciousness. In order to facilitate such integration, this essay presents the results of an extensive cross-cultural literature search on the "mind" side of the equation, suggesting that the mental phenomena that need to be considered in any integral theory include developmental levels or waves of consciousness, developmental lines or streams of consciousness, states of consciousness, and the self (or self-system). A "master template" of these various phenomena, culled from over one hundred psychological systems East and West, is presented. It is suggested that this master template represents a general summary of the "mind" side of the brain-mind integration. The essay concludes with reflections on the "hard problem," or how the mind-side can be integrated with the brain-side to generate a more integral theory of consciousness.
In his book The Conscious Mind David Chalmers introduced a by now familiar distinction between the hard problem and the easy problems of consciousness. The easy problems are those concerned with the question of how the mind can process information, react to environmental stimuli, and exhibit such capacities as discrimination, categorization, and introspection (Chalmers, 1996, 4, 1995, 200). All of these abilities are impressive, but they are, according to Chalmers, not metaphysically baffling, since they can all be tackled by means of the standard repertoire of cognitive science and explained in terms of computational or neural mechanisms. This task might still be difficult, but it is within reach. In contrast, the hard problem—also known as the problem of consciousness (Chalmers, 1995, 201)—is the problem of explaining why mental states have phenomenal or experiential qualities. Why is it like something to ‘taste coffee’, to ‘touch an ice cube’, to ‘look at a sunset’ etc.? Why does it feel the way it does? Why does it at all feel like anything? Chalmers’s distinction confronts us with a version of the so-called ‘explanatory gap’. On the one hand, we have certain cognitive functions, which can apparently be explained reductively, and on the other hand, we have a number of experiential qualities, which seem to resist this reductive explanation. We can establish that a certain function is accompanied by a certain experience, but we have no idea why that happens, and regardless of how closely we scrutinize the neural mechanisms we don’t seem to be getting any closer to an answer. In his book, Chalmers also distinguished two concepts of mind: a phenomenal concept and a psychological concept. The first captures the conscious aspect of mind: mind is understood in terms of conscious experience. The second concept understands mind in functional terms as the causal or explanatory basis for behavior.
The constructivist notion that features are purely functional is incompatible with the classical computational metaphor of mind. I suggest that the discontent expressed by Schyns, Goldstone and Thibaut about fixed-features theories of categorization reflects the growing impact of connectionism, and show how their perspective is similar to recent research on implicit learning, consciousness, and development. A hard problem remains, however: how to bridge the gap between subsymbolic and symbolic cognition.
Owen Flanagan's The Really Hard Problem provides a rich source of reflection on the question of meaning and ethics within the context of philosophical naturalism. I affirm the title's claim that the quest to find meaning in a purely physical universe is indeed a hard problem by addressing three issues: Flanagan's claim that there can be a scientific/empirical theory of ethics (eudaimonics), that ethics requires moral glue, and whether, in the end, Flanagan solves the hard problem. I suggest that he does not, although he provides much that is of importance and useful for further reflection along the way.
Philosophical (p-) zombies are constructs that possess all of the behavioral features and responses of a sentient human being, yet are not conscious. P-zombies are intimately linked to the hard problem of consciousness and have been invoked as arguments against physicalist approaches. But what if we were to invert the characteristics of p-zombies? Such an inverse (i-) zombie would possess all of the behavioral features and responses of an insensate being yet would nonetheless be conscious. While p-zombies are logically possible but naturally improbable, an approximation of i-zombies actually exists: individuals experiencing what is referred to as “anesthesia awareness.” Patients under general anesthesia may be intubated (preventing speech), paralyzed (preventing movement), and narcotized (minimizing response to nociceptive stimuli). Thus, they appear—and typically are—unconscious. In 1-2 cases/1000, however, patients may be aware of intraoperative events, sometimes without any objective indices. Furthermore, a much higher percentage of patients (22% in a recent study) may have the subjective experience of dreaming during general anesthesia. P-zombies confront us with the hard problem of consciousness—how do we explain the presence of qualia? I-zombies present a more practical problem—how do we detect the presence of qualia? The current investigation compares p-zombies to i-zombies and explores the “hard problem” of unconsciousness with a focus on anesthesia awareness.
I take the `hard problem' of consciousness to be to understand the relation between our subjective experience and the brain processes that cause it; that is, to reconcile our everyday feeling of consciousness with the scientific worldview (MacLennan, 1995). This problem is hard because consciousness has unique epistemological characteristics, which must be accommodated by any attempted solution. I will summarize these characteristics; more detail can be found in Searle (1992, chs. 4, 5) and Chalmers (1995, 1996), whose positions, if I have understood them correctly, are consistent with mine. First, science is a public enterprise; it attains knowledge that is independent of the individual investigator by limiting itself to public phenomena. Ultimately it is grounded in shared experiences, for example, when we both look at a thermometer and read the same temperature. Traditionally science has accomplished its ends by focusing on the more public, objective aspects of phenomena (e.g. temperature as measured by a thermometer), and by ignoring the more private, subjective aspects (how warm it feels to me). In other words, science has restricted itself to facts about which it is easy to reach agreement among a consensus of trained observers. Although this restriction has aided scientific progress, it prevents the scientific study of consciousness, which is essentially private and subjective.
Second, science's neglect of the subjective is also apparent in its reductive methods.
The paper begins with a restatement of Chalmers’s “hard problem of consciousness.” It is suggested that an interactionist approach is one of the possible solutions of this problem. Some fresh arguments against the identity theory and epiphenomenalism as main rivals of interactionism are developed. One of these arguments has among its corollaries a denial of local supervenience, although not of the causal closure principle. As a result of these considerations a version of “local interactionism” (compatible with causal closure) is proposed. It is argued that local interactionism may offer a fruitful path for resolving the “hard problem.”
Keith DeRose’s solution to the skeptical problem is based on his indirect sensitivity account. Sensitivity is not a necessary condition for any kind of knowledge, as direct sensitivity accounts claim, but the insensitivity of our beliefs that the skeptical hypotheses are false explains why we tend to judge that we do not know them. The orthodox objection line against any kind of sensitivity account of knowledge is to present instances of insensitive beliefs that we still judge to constitute knowledge. This objection line offers counter-examples against the claim of direct sensitivity accounts that sensitivity is necessary for any kind of knowledge. These examples raise an easy problem for indirect sensitivity accounts that claim that there is only a tendency to judge that insensitive beliefs do not constitute knowledge, which still applies to our beliefs that the skeptical hypotheses are false. However, a careful analysis reveals that some of our beliefs that the skeptical hypotheses are false are sensitive; nevertheless, we still judge that we do not know them. Therefore, the fact that some of our beliefs that the skeptical hypotheses are false are insensitive cannot explain why we tend to judge that we do not know them. Hence, indirect sensitivity accounts cannot fulfill their purpose of explaining our intuitions about skepticism. This is the hard problem for indirect sensitivity accounts.
I show how a robot with what looks like a hard problem of consciousness might emerge from the earnest attempt to make a robot that is smart and self-reflective. This problem arises independently of any assumption to the effect that the robot is conscious, but deserves to be thought of as related to the human problem in virtue of the fact that (1) the problem is one the robot encounters when it tries to naturalistically reduce its own subjective states, (2) it seems incredibly difficult from the robot’s own naturalist perspective and, most importantly, (3) it invites the robot to engage in the exact same metaphysical responses as humans offer to the problem of consciousness. Despite the fact that it invites the robot to consider extravagant metaphysical solutions, the problem I explore is purely algorithmic. The robot cannot complete its naturalist physicalist reduction as a matter of algorithmic fact, whether or not the naturalist physicalist reduction would be correct as a matter of metaphysical fact. It is hoped that by reproducing the familiar seeming problem in an artificial context, a greater understanding of the human problem of consciousness can be achieved.
In this paper, a cognate of the problem of divine foreknowledge is introduced: the problem of the prophet’s foreknowledge. The latter cannot be solved referring to Ockhamism—the doctrine that divine foreknowledge could, at least in principle, be compatible with human freedom because God’s beliefs about future actions are merely soft facts, rather than hard facts about the past. Under the assumption that if Ockhamism can solve the problem of divine foreknowledge then it should also yield a solution to the problem of the prophet’s foreknowledge, it is concluded that Ockhamism fails.
Two kinds of problem are distinguished: the first of finding processes which produce complex outcomes from the interaction of simple parts, and the second of finding which process resulted in an observed complex outcome. The former I call the easy complexity problem and the latter the hard complexity problem. It is often assumed that progress with the easy problem will aid progress with the hard problem. However, this assumes that the “reverse engineering” problem, of determining the process from the outcomes, is feasible. Taking a couple of simple models of reverse engineering, I show that this task is infeasible in the general case. Hence it cannot be assumed that reverse engineering is possible, and hence most of the time progress on the easy problem will not help with the hard problem unless there are special properties of a particular set of processes that make it feasible. Assuming that complexity science is not merely an academic “game” and given the analysis of this paper, some criteria for the kinds of paper that have a reasonable chance of being eventually useful for understanding observed complex systems are outlined. Many complexity papers do not fare well against these criteria.
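The infeasibility claim above turns on underdetermination: many distinct generating processes can fit the same finite observed outcome. A toy illustration of this point (my own sketch, not drawn from the paper; it uses the classic Moser circle sequence, where two different rules agree on the first four terms and then diverge):

```python
from math import comb

def powers_of_two(n):
    # Process A: repeated doubling; term n is 2**(n-1).
    return 2 ** (n - 1)

def circle_regions(n):
    # Process B (Moser's circle problem): regions cut from a disc by
    # all chords joining n boundary points in general position.
    return comb(n, 4) + comb(n, 2) + 1

# Both processes reproduce the observed outcome 1, 2, 4, 8 exactly...
observed = [powers_of_two(n) for n in range(1, 5)]
fitted = [circle_regions(n) for n in range(1, 5)]
assert observed == fitted == [1, 2, 4, 8]

# ...yet they are different processes and diverge on unseen cases,
# so the outcome alone cannot identify which process produced it.
print(powers_of_two(6), circle_regions(6))  # 32 31
```

The same finite data are consistent with both rules, so "reverse engineering" the process from the outcome fails here even before any complexity-theoretic obstacles arise.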
Recently some philosophers interested in consciousness have begun to turn their attention to the question of what evolutionary advantages, if any, being conscious might confer on an organism. The issue has been pressed in recent discussions involving David Chalmers, Todd Moody, Owen Flanagan and Thomas Polger, Daniel Dennett, and others. The purpose of this essay is to consider some of the problems that face anyone who wants to give an evolutionary explanation of consciousness. We begin by framing the problem in the context of some current debates. Then we.
The Mind/Body Problem (M/BP) is about causation not correlation. And its solution (if there is one) will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the hard part.
Recently, a number of philosophers have turned to folk intuitions about mental states for data about qualia and phenomenal consciousness. In this paper I argue that current research along these lines does not tell us about these subjects. I focus on a series of studies, performed by Justin Sytsma and Edouard Machery, to make my argument. Folk judgments studied by these researchers are most likely generated by a certain cognitive system – System One – that will generate the same data whether or not we experience phenomenal consciousness. This is a problem for a range of current experimental philosophy research into consciousness or our concept of it. If experimental philosophy is to shed light on phenomenal consciousness, it needs to be better founded in an understanding of how we make judgments.
I have assumed that consciousness exists, and that to redefine the problem as that of explaining how certain cognitive and behavioral functions are performed is unacceptable. . . . Like many people (materialists and dualists alike), I find this premise obvious, although I can no more "prove" it than I can prove that I am conscious. . . . there is no denying that such arguments - on either side - ultimately come down to a bedrock of intuition at some point. (Chalmers, undated).
In the seventh paragraph of the post, you say "This question [which machine, if any or both, is conscious?] seems to be in principle unfalsifiable, and yet genuinely meaningful." (I'm assuming that you mean that any answer to it is unfalsifiable.) My neo-Carnapian intuitions diagnose the problem right at this point. Forget about attributions of meaninglessness and all that stuff. Replace it in your statement with more pragmatically oriented evaluative notions: theoretically fruitless, arbitrary without even being helpful for any theoretical, experimental, or practical purpose, and so on. Any answer to the question will be those. Thus the question is not worth pursuing, especially since the thought experiment is science fiction right now. A much more useful way to spend one's time is addressing fruitful questions, like the ones involved in constructing your postulated robots, or investigating neural mechanisms, and so on. So acknowledge the connection between unfalsifiability/verifiability/confirmability and theoretical and practical worthlessness (rather than "meaninglessness"). Then get on with the theoretically and empirically worthwhile questions. Many of the latter are quite abstract and "philosophical," anyway (about the scope and limits of various methodologies, existing theories, and so on). Aren't those enough to occupy even the most abstract theorist's attention? Why puzzle about questions whose answers can't be rationally justified?
This paper critically examines the forays into metaphysics of The Dual Nature of Technical Artifacts Program (henceforth, DNP). I argue that the work of DNP is a valuable contribution to the epistemology of certain aspects of artifact design and use, but that it fails to advance a persuasive metaphysic. A central problem is that DNP approaches ontology from within a functionalist framework that is mainly concerned with ascriptions and justified beliefs. Thus, the materiality of artifacts emerges only as the external conditions of realizability of function ascription. The work of DNP has a strong programmatic aspect and much of its foray into metaphysics is tentative, so the intent of my argument is partly synthetic: to sum up these issues as they are presented in the literature and highlight some recognized problems. But I also go beyond that, suggesting that these problems are foundational, arising from the very way in which DNP poses the question of artifact metaphysics. Although it sets out to incorporate objective aspects of technology, DNP places a strong focus on the intentional side of the purported matter-mind duality, bracketing off materiality in an irretrievable manner. Thus, some of the advantages of dualism are lost. I claim that DNP is dualistic, not merely based on “duality”, but that its version of dualism does not appropriately account for the material “nature” of artifacts. The paper ends by suggesting some correctives and alternatives to Dual Nature theory.
This paper presents a challenge problem for decision-theoretic planners. State-space planners reason globally, building a map of the parts of the world relevant to the planning problem, and then attempt to distill a plan out of the map. A planning problem is constructed that humans find trivial, but no state-space planner can solve. Existing POCL planners cannot solve the problem either, but for a less fundamental reason.
We report two experiments which tested whether cognitive capacities are limited to those functions that are computationally tractable (PTIME-Cognition Hypothesis). In particular, we investigated the semantic processing of reciprocal sentences with generalized quantifiers, i.e., sentences of the form Q dots are directly connected to each other, where Q stands for a generalized quantifier, e.g. all or most. Sentences of this type are notoriously ambiguous and it has been claimed in the semantic literature that the logically strongest reading is preferred (Strongest Meaning Hypothesis). Depending on the quantifier, the verification of their strongest interpretations is computationally intractable whereas the verification of the weaker readings is tractable. We conducted a picture completion experiment and a picture verification experiment to investigate whether comprehenders shift from an intractable reading to a tractable reading which should be dispreferred according to the Strongest Meaning Hypothesis. The results from the picture completion experiment suggest that intractable readings occur in language comprehension. Their verification, however, rapidly exceeds cognitive capacities in case the verification problem cannot be solved using simple heuristics. In particular, we argue that during verification, guessing strategies are used to reduce computational complexity.
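To make the tractability contrast concrete, here is a minimal sketch (my own illustration under plausible assumptions, not the authors' materials or code) of two readings of "Most dots are directly connected to each other": a weak reading checkable in linear time, and a strong reading that amounts to detecting a clique on a majority of the dots, a problem that is NP-hard in general:

```python
from itertools import combinations

def weak_reading(dots, edges):
    # Weak (tractable) reading: most dots are directly connected to
    # at least one other dot.  Linear in |dots| + |edges|.
    touched = {v for e in edges for v in e}
    return sum(1 for d in dots if d in touched) > len(dots) / 2

def strong_reading(dots, edges):
    # Strong (intractable) reading: some majority of the dots is
    # pairwise directly connected, i.e. the graph contains a clique
    # on more than half the dots.  Brute force here; clique detection
    # is NP-hard, so this blows up as the number of dots grows.
    edge_set = {frozenset(e) for e in edges}
    n = len(dots)
    for k in range(n, n // 2, -1):
        for subset in combinations(dots, k):
            if all(frozenset(p) in edge_set for p in combinations(subset, 2)):
                return True
    return False

dots = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(weak_reading(dots, edges))    # True: every dot touches some edge
print(strong_reading(dots, edges))  # True: {a, b, c} is a majority clique
```

On this sketch, a comprehender shifting from `strong_reading` to `weak_reading` trades logical strength for a verification procedure that stays within realistic resource bounds, which is the shift the experiments probe.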
"The subjective features of conscious mental processes--as opposed to their physical causes and effects--cannot be captured by the purified form of thought suitable for dealing with the physical world that underlies appearances." (Nagel, in Dennett, 1991, p. 372).
Realism about cognitive or semantic phenomenology, the view that certain conscious states are intrinsically such as to ground thought or understanding, is increasingly being taken seriously in analytic philosophy. The principal aim of this paper is to argue that it is extremely difficult to be a physicalist about cognitive phenomenology. The general trend in later 20th century/early 21st century philosophy of mind has been to account for the content of thought in terms of facts outside the head of the thinker at the time of thought, e.g. in terms of causal relations between thinker and world, or in terms of the natural purposes for which mental representations have developed. However, on the assumption that consciousness is constitutively realised by what is going on inside the head of a thinker at the time of experience, the content of cognitive phenomenology cannot be accounted for in this way. Furthermore, any internalist account of content is particularly susceptible to Kripkensteinian rule following worries. It seems that if someone knew all the physical facts about what is going on in my head at the time I was having a given experience with cognitive phenomenology, they would not thereby know whether that state had ‘straight’ rather than ‘quus-like’ content, e.g. whether the experience was intrinsically such as to ground the thought that two plus two equals four or intrinsically such as to ground the thought that two quus two equals four. The project of naturalising consciousness is much harder for realists about cognitive phenomenology.
In his 1996 paper Neurophenomenology: A methodological remedy for the hard problem, Francisco Varela called for a union of Husserlian phenomenology and cognitive science. Varela's call hasn't gone unanswered, and recent years have seen the development of a small but growing literature intent on exploring the interface between phenomenology and cognitive science. But despite these developments, there is still some obscurity about what exactly neurophenomenology is. What are neurophenomenologists trying to do, and how are they trying to do it? To what extent is neurophenomenology a distinctive and unified research programme? In this paper I attempt to shed some light on these questions.
Quantum theory can be regarded as a rationally coherent theory of the interaction of mind and matter, and it allows our conscious thoughts to play a causally efficacious and necessary role in brain dynamics. It therefore provides a natural basis, created by scientists, for the science of consciousness. As an illustration, it is explained how the interaction of brain and consciousness can speed up brain processing and thereby enhance the survival prospects of conscious organisms as compared to similar organisms that lack consciousness. As a second illustration, it is explained how, within the quantum framework, the consciously experienced "I" directs the actions of a human being. It is concluded that contemporary science already has an adequate framework for incorporating causally efficacious experiential events into the physical universe in a manner that puts the neural correlates of consciousness into the theory in a well-defined way, explains in principle how the effects of consciousness per se can enhance the survival prospects of organisms that possess it, allows this survival effect to feed into phylogenetic development, and explains how the consciously experienced "I" can direct human behaviour.
Some time ago, in an article for the Journal of Consciousness Studies, David Chalmers challenged his peers to identify the ingredient missing from our current theories of consciousness, the absence of which prevents us from solving the 'hard' problem and forces us to make do with nonreductive theories. Here I respond to this challenge. I suggest that consciousness is a metaphysical problem and as such can be solved only within a global metaphysical theory. Such a theory would look very like the information theory proposed by Chalmers, but with the addition of an extra phenomenon that would allow it to become fundamental.
Do philosophers and ordinary people conceive of subjective experience in the same way? In this article, we argue that they do not and that the philosophical concept of phenomenal consciousness does not coincide with the folk conception. We first offer experimental support for the hypothesis that philosophers and ordinary people conceive of subjective experience in markedly different ways. We then explore experimentally the folk conception, proposing that for the folk, subjective experience is closely linked to valence. We conclude by considering the implications of our findings for a central issue in the philosophy of mind, the hard problem of consciousness.
Dualists believe that experiences have neither location nor extension, while reductive and 'non-reductive' physicalists (biological naturalists) believe that experiences are really in the brain, producing an apparent impasse in current theories of mind. Enactive and reflexive models of perception try to resolve this impasse with a form of "externalism" that challenges the assumption that experiences must either be nowhere or in the brain. However, they are externalist in very different ways. Insofar as they locate experiences anywhere, enactive models locate conscious phenomenology in the dynamic interaction of organisms with the external world, and in some versions, they reduce conscious phenomenology to such interactions, in the hope that this will resolve the hard problem of consciousness. The reflexive model accepts that experiences of the world result from dynamic organism–environment interactions, but argues that such interactions are preconscious. While the resulting phenomenal world is a consequence of such interactions, it cannot be reduced to them. The reflexive model is externalist in its claim that this external phenomenal world, which we normally think of as the "physical world," is literally outside the brain. Furthermore, there are no added conscious experiences of the external world inside the brain. In the present paper I present the case for the enactive and reflexive alternatives to more classical views and evaluate their consequences. I argue that, in closing the gap between the phenomenal world and what we normally think of as the physical world, the reflexive model resolves one facet of the hard problem of consciousness. Conversely, while enactive models have useful things to say about percept formation and representation, they fail to address the hard problem of consciousness.
Daniel Dennett has claimed that if Chalmers' argument for the irreducibility of consciousness were to succeed, an analogous argument would establish the truth of Vitalism. Chalmers denies that there is such an analogy. I argue that the analogy does have merit and that skepticism is called for.