There is not a uniform kind of consciousness common to all conscious mental states: beliefs, emotions, perceptual experiences, pains, moods, verbal thoughts, and so on. Instead, we need a distinction between phenomenal and nonphenomenal consciousness. As if consciousness simpliciter were not mysterious enough, philosophers have recently focused their worries on phenomenal consciousness, the kind that explains or constitutes there being "something it is like" to have a mental state.
Is there an explanatory gap between raw feels and raw material? Some philosophers argue, and many other people believe, that scientific explanations of conscious experience cannot, even in our wildest dreams, be as satisfying as typical scientific explanations elsewhere. The underlying philosophical claims are …
Meaning holists hold, roughly, that each representation in a linguistic or mental system depends semantically on every other representation in the system. The main difficulty for holism is the threat it poses to meaning stability: shared meaning between representations in two systems. If meanings are holistically dependent, then semantic differences anywhere seem to balloon into semantic differences everywhere. My positive aim is to show how holism, even at its most extreme, can accommodate and also increase meaning stability. My negative aim is to provide reasons for rejecting various nonholist proposals, at least for systems of mental representations.
The frame problem is widely reputed among philosophers to be one of the deepest and most difficult problems of cognitive science. This paper discusses three recent attempts to display this problem: Dennett's problem of ignoring obviously irrelevant knowledge, Haugeland's problem of efficiently keeping track of salient side effects, and Fodor's problem of avoiding the use of kooky concepts. In a negative vein, it is argued that these problems bear nothing but a superficial similarity to the frame problem of AI, so that they do not provide reasons to disparage standard attempts to solve it. More positively, it is argued that these problems are easily solved by slight variations on familiar AI themes. Finally, some discussion is devoted to more difficult problems confronting AI.
Can one sense one’s own mind, as one senses nonmental entities in one’s environment and body? According to many contemporary philosophers of mind, the fraudulent commonsense idea of a "mind’s eye" obstructs clearheaded attempts to explain introspection and consciousness. I concede that inner sense cannot directly explain consciousness and introspection in all their forms, but I do think a carefully specified kind of inner sense can account for one very special kind of introspective consciousness. It is special because it is the key to explaining the most puzzling kind of consciousness, phenomenal consciousness: there being "something it is like" to have certain mental states. My aim in this paper is to defend this view against accusations (twenty-two in all!) rather than to argue positively for the view. However, I begin by indicating some of the motivation for the account I defend.
Philosophers have used the term ‘consciousness’ for four main topics: knowledge in general, intentionality, introspection and phenomenal experience. This entry discusses the last two uses. Something within one’s mind is ‘introspectively conscious’ just in case one introspects it. Introspection is often thought to deliver one’s primary knowledge of one’s mental life. An experience or other mental entity is ‘phenomenally conscious’ just in case there is ‘something it is like’ for one to have it. The clearest examples are: perceptual experiences, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one’s own actions or perceptions; and streams of thought, as in the experience of thinking ‘in words’ or ‘in images’. Introspection and phenomenality seem independent, or dissociable, although this is controversial.
Much research in AI (and cognitive science, more broadly) proceeds on the assumption that there is a difference between being well-informed and being smart. Being well-informed has to do, roughly, with the content of one’s representations--with their truth and the range of subjects they cover. Being smart, on the other hand, has to do with one’s ability to process these representations and with packaging them in a form that allows them to be processed efficiently. The main theoretical concern of artificial intelligence research is to solve "process-and-form" problems: problems with finding processes and representational formats that enable us to understand how a computer could be smart.
In the last of his three Royce Lectures, called "Self-Knowledge and 'Inner Sense'", Sydney Shoemaker attempts to reconcile two commitments: (1) that experiences have "qualia", nonrepresentational features that constitute what it is like to have the experiences, and (2) that perceptual experiences seem "diaphanous", yielding to introspection only the way they represent the environment, not intrinsic or otherwise nonrepresentational qualia. On the idea that we internally sense qualia (that is, that we sense what our experiences are like), one way to explain apparent diaphanousness is to maintain that these qualia are mistakenly "projected" onto the environment, that in perception we erroneously sense qualia as belonging to environmental objects. Shoemaker rejects both the projection view and the existence of inner sense, and develops an alternative reconciliation. I will describe reasons to doubt his positive proposal, and ways to save projection and inner sense from his criticisms.
Besides coming up with something interesting to think about and to say, there is one primary secret to writing a good philosophy paper. But it wouldn’t be much of a secret if I told you, would it? No … wait … it’s …
Much of the philosophical interest of cognitive science stems from its potential relevance to the mind/body problem. The mind/body problem concerns whether both mental and physical phenomena exist, and if so, whether they are distinct. In this chapter I want to portray the classical and connectionist frameworks in cognitive science as potential sources of evidence for or against a particular strategy for solving the mind/body problem. It is not my aim to offer a full assessment of these two frameworks in this capacity. Instead, in this thesis I will deal with three philosophical issues which are (at best) preliminaries to such an assessment: issues about the syntax, the semantics, and the processing of the mental representations countenanced by classical and connectionist models. I will characterize these three issues in more detail at the end of the chapter.
Fodor and Pylyshyn (1988) have presented an influential argument to the effect that any viable connectionist account of human cognition must implement a language of thought. Their basic strategy is to argue that connectionist models that do not implement a language of thought fail to account for the systematic relations among propositional attitudes. Several critics of the LOT hypothesis have tried to pinpoint flaws in Fodor and Pylyshyn’s argument (Smolensky, 1989; Clark, 1989; Chalmers, 1990; Braddon-Mitchell and Fitzpatrick, 1990). One thing I will try to show is that the argument can be rescued from these criticisms. (Score: LOT 1, Visitors 0.) However, I agree that the argument fails, and I will provide a new account of how it goes wrong. (The score becomes tied.) Of course, the failure of Fodor and Pylyshyn’s argument does not mean that their conclusion is false. Consequently, some connectionist criticisms of Fodor and Pylyshyn’s article take the form of direct counterexamples to their conclusion (Smolensky, 1989; van Gelder, 1990; Chalmers, 1990). I will argue, however, that Fodor and Pylyshyn’s conclusion survives confrontation with the alleged counterexamples. Finally, I provide an alternative argument that may succeed where Fodor and Pylyshyn’s fails. (Final Score: LOT 3, Visitors 1.)
"Beats the heck out of me! I have some prejudices, but no idea of how to begin to look for a defensible answer. And neither does anyone else." That’s the discussion of conscious experience offered by one of our most brilliant and readable psychologists, in his new 650-page book, modestly titled How the Mind Works. There is no widely accepted scientific program for researching consciousness. Speculation on the subject has been considered safe, careerwise, mainly for moonlighting physicists or physiologists whose (...) Nobel Prizes and similar credentials are long since safely stored away. This essay describes some recent efforts of philosophers of mind who have stepped into the breach. Some argue that the puzzle of consciousness is impossible to solve, and some argue that with certain confusions removed there’s no distinctive puzzle at all. I write from the standpoint of a third group who think the puzzle is difficult but tractable, and who get involved under the pretext that "philosophy is what you do to a problem until it’s clear enough to do science to". (shrink)
On September 11th, an apparent gang of nineteen people set to work, equipped with the little tools you use to unseal the tape on cardboard boxes. About an hour later, they destroyed several giant buildings and four jumbo airplanes, murdering several thousand people from all over the world and from all walks of life.
If the arguments of chapter 1 are correct, associationist connectionist models (such as ultralocal ones) yield the clearest alternatives to the LOT hypothesis. While it may be that such models cannot provide a general account of cognition, they may account for important aspects of cognition, such as low-level perception (e.g., with the interactive activation model of reading) or the mechanisms which distinguish experts from novices at a given skill (e.g., with dependency-network models). Since these models stand a fighting chance of being applicable to some aspects of cognition, it is important from a philosophical standpoint that we have appropriate tools for understanding such models. In particular, we want to have a theory of the semantic content of representations in certain connectionist models. In this chapter, I want to consider the prospects for applying a specific sort of "fine-grained" theory of content to such models.
Meet longtime Tarot reader and renowned occultist Renée O’Cards. Wracked with guilt over her epistemic irresponsibility, seized with fear of being deceived by a malignant demon, and prone to escape into sleep and dreams for unknown time periods, she turns to the consolation of First Philosophy.
Since my proposed framework for meaning (in "Holist" and "Atomist") is neither simply a psychosemantic holism nor simply a psychosemantic atomism, but a marriage in which the two have become one, we might call it a psychosemantic holism-atomism wedlock (PSHAW). In this paper I want to …
Most Americans believe what our media tell them, that Israel is a nation under attack by Palestinians. That is a lie. The truth is that Israel is a nation bent on driving Palestinians from their land through economic hardship, confiscation, humiliation, intimidation, and by killing them. Israel has maintained a brutal and illegal occupation of the West Bank and Gaza Strip for decades, not unlike the German occupation of Europe during World War II.
From its humble origins labeling a technical annoyance for a particular AI formalism, the term "frame problem" has grown to cover issues confronting broader research programs in AI. In philosophy, the term has come to encompass allegedly fundamental, but merely superficially related, objections to computational models of mind in AI and beyond.