Perhaps because both explanation and prediction are key components of understanding, philosophers and psychologists often portray these two abilities as though they arise from the same competence, and sometimes they are taken to be the same competence. When explanation and prediction are associated in this way, they are taken to be two expressions of a single cognitive capacity that differ from one another only pragmatically. If the difference between prediction and explanation of human behavior is merely pragmatic, then any time I predict someone’s future behavior, I would at that moment also have an explanation of the behavior. I argue that advocates of both the theory theory and the simulation theory accept the symmetry of psychological prediction and explanation. However, there is very good reason to believe that this hypothesis is false. Just as we can predict the occurrence of some physical phenomena for which we have no explanation, we are also able to make accurate predictions of intentional behavior without having an explanation. I argue that the prediction of human behavior is most often accomplished by statistical induction rather than by mental state attribution. However, explanations are not given in these terms.
By what types of properties do we specify twinges, toothaches, and other kinds of mental states? Wittgenstein considers two methods. Procedure one, direct, private acquaintance: a person connects a word to the sensation it specifies through noticing what that sensation is like in his own experience. Procedure two, outward signs: a person pins his use of a word to outward, pre-verbal signs of the sensation. I identify and explain a third procedure and show that we in fact specify many kinds of mental states in this way.
Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
Functionalists think an event's causes and effects, its 'causal role', determines whether it is a mental state and, if so, which kind. Functionalists see this causal role principle as supporting their orthodox materialism, their commitment to the neuroscientist's ontology. I examine and refute the functionalist's causal principle and the orthodox materialism that attends that principle.
You are asked to call out the letters on a chart during an eye examination: you see and then read out the letters ‘U’, ‘R’, and ‘X’. Common sense says that your perceptual experiences causally control your calling out the letters. Or suppose you are playing a game of chess, intent on winning: you plan your strategy and move your chess pieces accordingly. Again, common sense says that your intentions and plans causally control your moving the chess pieces. These causal judgements are as plain and evident as any can be.
Jerry Fodor now holds (1990) that the content of mental state types opaquely taxonomized (de dicto content: DDC) is determined by the 'orthographical' syntax + the computational/functional role of such states. Mental states whose tokens are both orthographically and truth-conditionally identical may be different with regard to the computational/functional role played by their respective representational cores. This makes them tantamount to different contentful states, i.e. states with different DDCs, insofar as they are opaquely taxonomized. Indeed they cannot both be truthfully ascribed to a single subject at the same time. Some years ago (1987), Fodor postulated a notion of mental content which also went beyond that of a mental state's truth-conditions. States whose tokens differ in their truth-conditions, or broad content, might, he claimed, still share a narrow content (NC), which was causally responsible for the shared behavior of the subjects of these states. For instance, two molecularly identical individuals, living in environments in all respects the same, except for the chemical substance of the phenomenally indistinguishable liquids filling their respective lakes and rivers, would behave similarly when having truth-conditionally different thoughts regarding those liquids. According to Fodor, this sameness of behavior was causally dependent on the sameness of the NC of the two individuals' truth-conditionally different thoughts. Now, this way of individuating mental states is still of interest for semantics. Indeed, NC allows one contextually to fix the broad content of a mental state token. Echoing Kaplan's notion of character, Fodor explained NC as a function that mapped contexts (of thought) onto broad contents. NC was thus invoked by Fodor mainly in order to account for sameness of intentional behavior.
But DDC also plays a role in explaining intentional behavior, precisely by explaining why a subject whose thought-tokens have identical truth-conditions may behave differently.
HOST is the theory that to be conscious of a mental state is to target it with a higher-order state (a `HOS'), either an inner perception or a higher-order thought. Some champions of HOST maintain that the phenomenological character of a sensory state is induced in it by representing it with a HOS. I argue that this thesis is vulnerable to overwhelming objections that flow largely from HOST itself. In the process I answer two questions: `What is a plausible sufficient condition for a quale's belonging to a particular mental state?' and `What is the propositional content of HOSs that target sensory states?'.
Philosophers and psychologists have often maintained that in order to attribute mental states to other people one must have a ‘theory of mind’. This theory facilitates our grasp of other people’s mental states. Debate has then focussed on the form this theory should take. Recently a new approach has been suggested, which I call the ‘Direct Perception approach to social cognition’. This approach maintains that we can directly perceive other people’s mental states. It opposes traditional views on two counts: by claiming that mental states are observable and by claiming that we can attribute them to others without the need for a theory of mind. This paper argues that there are two readings of the direct perception claims: a strong and a weak one. The Theory-theory is compatible with the weak version but not the strong one. The paper argues that the strong version of direct perception is untenable, drawing on evidence from the mirror neuron literature and arguments from the philosophy of science and perception to support this claim. It suggests that one traditional ‘theory of mind’ view, the ‘Theory-theory’ view, is compatible with the claim that mental states are observable, and concludes that direct perception views do not offer a viable alternative to theory of mind approaches to social cognition.
Richard Scheer has recently argued against what he calls the 'mental state' theory of intentions. He argues that versions of this theory fail to account for various characteristics of intention. In this essay we reply to Scheer's criticisms and argue that intentions are mental states.
It is not unusual to consider linguistic communication as a type of action performed by an individual —the speaker— intended to influence the mental state of another individual —the addressee. It seems more unusual to reach an agreement on what should be the effect of such influence for the communication to be successful. According to the well-known Gricean view, the success of a communicative action depends precisely on the recognition by the addressee of the mental state of the speaker. In this essay, we want to analyse these mental states; however our main concern is not with the mental states of the agents in an isolated communicative action, but the mental states of the agents in a broader linguistic action, namely, conversation.
The emergence of mental states from neural states by partitioning the neural phase space is analyzed in terms of symbolic dynamics. Well-defined mental states provide contexts inducing a criterion of structural stability for the neurodynamics that can be implemented by particular partitions. This leads to distinguished subshifts of finite type that are either cyclic or irreducible. Cyclic shifts correspond to asymptotically stable fixed points or limit tori whereas irreducible shifts are obtained from generating partitions of mixing hyperbolic systems. These stability criteria are applied to the discussion of neural correlates of consciousness, to the definition of macroscopic neural states, and to aspects of the symbol grounding problem. In particular, it is shown that compatible mental descriptions, topologically equivalent to the neurodynamical description, emerge if the partition of the neural phase space is generating. If this is not the case, mental descriptions are incompatible or complementary. Consequences of this result for an integration or unification of cognitive science or psychology, respectively, will be indicated.
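For readers unfamiliar with the symbolic-dynamics machinery invoked in this abstract, the following sketch (my own illustration, not taken from the paper) shows the kind of object at issue: a subshift of finite type is fixed by a 0/1 transition matrix over partition cells, and its irreducibility amounts to strong connectivity of the transition graph.

```python
# Illustrative sketch (assumed example, not from the paper): a subshift
# of finite type is the set of symbol sequences whose adjacent pairs
# are allowed by a 0/1 transition matrix A. The shift is irreducible
# iff every cell can reach every other cell in the transition graph.

def is_irreducible(A):
    """Check strong connectivity of the transition graph encoded by A."""
    n = len(A)

    def reachable(start):
        seen, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for j in range(n):
                if A[i][j] and j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    return all(len(reachable(i)) == n for i in range(n))

# The "golden mean" shift (no two consecutive 1s) is irreducible:
golden_mean = [[1, 1],
               [1, 0]]
print(is_irreducible(golden_mean))  # True

# A reducible example: from cell 1 the dynamics can never return to cell 0.
reducible = [[1, 1],
             [0, 1]]
print(is_irreducible(reducible))  # False
```

The distinction matters for the abstract's dichotomy: irreducible (mixing) shifts are the ones obtainable from generating partitions of hyperbolic systems, whereas cyclic shifts correspond to the asymptotically stable attractors.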
A comprehensive theory of implicit and explicit knowledge must explain phenomenal knowledge (e.g., knowledge regarding one's affective and motivational states), as well as propositional (i.e., “fact”-based) knowledge. Findings from several research areas (i.e., the subliminal mere exposure effect, artificial grammar learning, implicit and self-attributed dependency needs) are used to illustrate the importance of both phenomenal and propositional knowledge for a unified theory of implicit and explicit mental states.
Dienes & Perner's target article constitutes a significant advance in thinking about implicit knowledge. However, it largely neglects processing details and thus the time scale of mental states realizing propositional attitudes. Considering real-time processing raises questions about the possible brevity of implicit representation, the nature of processes that generate explicit knowledge, and the points of view from which knowledge may be represented. Understanding the propositional attitude analysis in terms of momentary mental states points the way toward answering these questions.
In the first section of the paper I present Alan Turing's notion of effective memory, as it appears in his 1936 paper 'On Computable Numbers, with an Application to the Entscheidungsproblem'. This notion stands in surprising contrast with the way memory is usually thought of in the context of contemporary computer science. Turing's view (in 1936) is that for a computing machine to remember a previously scanned string of symbols is not to store an internal symbolic image of this string. Rather, memory consists in the fact that the past scanning of the string affects the behavior of the computer in the face of potential future inputs. In the second, central section of the paper I begin exploring how this view of Turing's bears upon contemporary discussions in the philosophy of mind. In particular, I argue that Turing's approach can be used to lend support to dispositional conceptions of the propositional attitudes, like the one recently presented by Matthews (2007), and that his effective memory manifests some of the characteristics of Millikan's (1996) pushmi-pullyu mental states.
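Turing's contrast between storing a symbolic image and memory as behavior-shaping can be made vivid with a toy machine (my own illustrative example, not drawn from the paper): after scanning a string, the machine below retains no copy of it, yet its future responses are systematically affected by what was scanned.

```python
# Illustrative sketch (an assumed toy example, not Turing's own): the
# machine "remembers" the parity of 1s it has scanned purely through
# its current state. No transcript of the input string is stored, yet
# past scanning shapes future behavior -- effective memory in the
# dispositional sense the abstract attributes to Turing.

class ParityMachine:
    def __init__(self):
        self.state = "even"   # the machine's entire memory: one state label

    def scan(self, symbol):
        # Each scanned '1' flips the state; the symbol itself is discarded.
        if symbol == "1":
            self.state = "odd" if self.state == "even" else "even"

    def respond(self):
        # Future behavior depends on the scanned past only via the state.
        return self.state

m = ParityMachine()
for s in "10110":
    m.scan(s)
print(m.respond())  # "odd" -- three 1s were scanned, none stored
```

The point of the sketch is that asking the machine to reproduce the string it scanned is impossible, while asking it questions whose answers depend on that string can still succeed: memory as disposition rather than stored image.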
This paper is concerned with the mental processes involved in intentional communication. I describe an agent's cognitive architecture as the set of cognitive dynamics (i.e., sequences of mental states with contents) she may entertain. I then describe intentional communication as one such specific dynamics, arguing against the prevailing view that communication consists in playing a role in a socially shared script. The cognitive capabilities needed for such dynamics are mindreading (i.e., the ability to reason upon another individual's mental states), and communicative planning (i.e., the ability to dynamically represent and act in a communicative situation).
Claims regarding collective or group mental states are fairly commonplace: we speak of things like the belief of the Church, the will of the faculty, and the opinion of the Supreme Court, often without considering what such claims really mean and whether they are true in any interesting sense. In this paper I take a threefold approach: first, I articulate several ways in which a group might be said to have beliefs and other mental states. Second, I explore the implications, positive and negative, of these accounts of collective mental states. Third, I give a brief defense of my own view despite its somewhat disturbing implications for our membership in Church, State, and other groups.
In this paper I distinguish two types of mental causation, called 'higher-level causation' and 'exploitation'. These notions superficially resemble the traditional problematic notions of supervenient causation and downward causation, but they are different in crucial respects. My new distinction is supported by a radically externalist competitor of the so-called Standard View of mental states, i.e. the view that mental states are brain states. I argue that on the Alternative View, the notions of 'higher-level causation' and 'exploitation' can in combination dissolve the problem of mental causation as standardly discussed.
This paper engages the extended cognition controversy by advancing a theory which fits nicely into an attractive and surprisingly unoccupied conceptual niche situated comfortably between traditional individualism and the radical externalism espoused by the majority of supporters of the extended mind hypothesis. I call this theory moderate active externalism, or MAE. In alliance with other externalist theories of cognition, MAE is committed to the view that certain cognitive processes extend across brain, body, and world—a conclusion which follows from a theory I develop in “Synergic Coordination: an argument for cognitive process externalism.” Yet, in contradistinction with radical externalism, and in agreement with the internalist orthodoxy, MAE defends the view that mental states are situated invariably inside our heads. This is done, inter alia, by developing a novel hypothesis regarding the vehicles of content (in “Extended cognition without externalized mental states”), and by criticizing arguments in support of mental state externalism (in “Reflections and objections”). The result, I believe, is a coherent theoretical alternative worthy of serious consideration.
A ‘Radical Simulationist’ account of how folk psychology functions has been developed by Robert Gordon. I argue that Radical Simulationism is false. In its simplest form it is not sufficient to explain our attribution of mental states to subjects whose desires and preferences differ from our own. Modifying the theory to capture these attributions invariably generates innumerable other false attributions. Further, the theory predicts that deficits in mentalizing ought to co-occur with certain deficits in imagining perceptually-based scenarios. I present evidence suggesting that this prediction is false, and outline further possible empirical tests of the theory.
Knowledge is standardly taken to be belief that is both true and justified (and perhaps meets other conditions as well). Timothy Williamson rejects the standard epistemology for its inability to solve the Gettier problem. The moral of this failure, he argues, is that knowledge does not factor into a combination that includes a mental state (belief) and an external condition (truth), but is itself a type of mental state. Knowledge is, according to his preferred account, the most general factive mental state. I argue, however, that Gettier cases pose a serious problem for Williamson’s epistemology: in these cases, the subject may have a factive mental state that fails to be cognitive. Hence, knowledge cannot be the most general factive mental state.
In AI, consciousness of self consists in a program having certain kinds of facts about its own mental processes and state of mind. We discuss what consciousness of its own mental structures a robot will need in order to operate in the common sense world and accomplish the tasks humans will give it. It's quite a lot. Many features of human consciousness will be wanted, some will not, and some abilities not possessed by humans have already been found feasible and useful in limited contexts. We give preliminary fragments of a logical language a robot can use to represent information about its own state of mind. A robot will often have to conclude that it cannot decide a question on the basis of the information in memory and therefore must seek information externally. Gödel's idea of relative consistency is used to formalize non-knowledge. Programs with the kind of consciousness discussed in this article do not yet exist, although programs with some components of it exist. Thinking about consciousness with a view to designing it provides a new approach to some of the problems of consciousness studied by philosophers. One advantage is that it focusses on the aspects of consciousness important for intelligent behavior.
We propose a distinction between precategorial, acategorial and categorial states within a scientifically oriented understanding of mental processes. This distinction can be specified by approaches developed in cognitive neuroscience and the analytical philosophy of mind. On the basis of a representational theory of mental processes, acategoriality refers to a form of knowledge that presumes fully developed categorial mental representations, yet refers to nonconceptual experiences in mental states beyond categorial states. It relies on a simultaneous experience of potential individual representations and their actual “representational ground”, an undifferentiated precategorial state. This simultaneity is possible if the mental state does not reside in a representation but in between representations. Acategoriality can be formally modeled as an unstable state of a dynamical mental system that is subject to particular stability criteria.
The debate between the theory-theory and simulation theory has largely ignored issues of cognitive architecture. In the philosophy of psychology, cognition as symbol manipulation is the orthodoxy. The challenge from connectionism, however, has attracted vigorous and renewed interest. In this paper I adopt connectionism as the antecedent of a conditional: If connectionism is the correct account of cognitive architecture, then the simulation theory should be preferred over the theory-theory. I use both developmental evidence and constraints on explanation in psychology to support this claim.
Some materialists argue that we can eliminate mental entities such as sensations because, like electrons, they are theoretical entities postulated as parts of scientific explanations, but, unlike electrons, they are unnecessary for such explanations. As Quine says, any explanatory role of mental entities can be played by "correlative physiological states and events instead." But sensations are not postulated theoretical entities. This is shown by proposing definitions of the related terms 'observation term' and 'theoretical term,' and then classifying the term 'sensation.' The result is that although 'sensation' is a theoretical term, it is also a reporting term because it is used to refer to phenomena we are aware of. Consequently sensations are not postulated and cannot be eliminated merely because they are unnecessary for explanation.
This paper argues that contemporary philosophy of mind and action could learn much from the structure of action explanation manifested in ancient Greek tragedy, which is less deterministic than typically supposed and which does not conflate the motivation of action with its causal production.
In the philosophical literature on mental states, the paradigmatic examples of mental states are beliefs, desires, intentions, and phenomenal states such as being in pain. The corresponding list in the psychological literature on mental state attribution includes one further member: the state of knowledge. This article examines the reasons why developmental, comparative and social psychologists have classified knowledge as a mental state, while most recent philosophers--with the notable exception of Timothy Williamson--have not. The disagreement is traced back to a difference in how each side understands the relationship between the concepts of knowledge and belief, concepts which are understood in both disciplines to be closely linked. Psychologists and philosophers other than Williamson have generally disagreed about which of the pair is prior and which is derivative. The rival claims of priority are examined both in the light of philosophical arguments by Williamson and others, and in the light of empirical work on mental state attribution.
It is argued that Nozick's experience machine thought experiment does not pose a particular difficulty for mental state theories of well-being. While the example shows that we value many things beyond our mental states, this simply reflects the fact that we value more than our own well-being. Nor is a mental state theorist forced to make the dubious claim that we maintain these other values simply as a means to desirable mental states. Valuing more than our mental states is compatible with maintaining that the impact of such values upon our well-being lies in their impact upon our mental lives.
We argue that the causal account offered by analytic functionalism provides the best account of the folk psychological theory of mind, and that people ordinarily define mental states relative to the causal roles these states occupy in relation to environmental impingements, external behaviors, and other mental states. We present new empirical evidence, as well as review several key studies on mental state ascription to diverse types of entities such as robots, cyborgs, corporations and God, and explain how this evidence supports a functional account. We also respond to two challenges to this view based on the embodiment hypothesis, or the claim that physical realizers matter over and above functional role, and qualia. In both cases we conclude that research to date best supports a functional account of ordinary mental state concepts.
It is widely held that there is an important distinction between the notion of consciousness as it is applied to creatures and, on the other hand, the notion of consciousness as it applies to mental states. McBride has recently argued in this journal that whilst there may be a grammatical distinction between state consciousness and creature consciousness, there is no parallel ontological distinction. It is argued here that whilst state consciousness and creature consciousness are indeed related, they are distinct properties. Conscious creatures can have, at one time, both conscious and unconscious mental states. This raises the question of what distinguishes the conscious from unconscious mental states of a subject: a question about what state consciousness consists in. Whilst the state/creature distinction may not be of use in explaining every aspect of a subject's consciousness, it does provide a key part of the explanandum for theories of consciousness and mind. The state/creature consciousness distinction is a real one and should not be dropped from our psychological taxonomy.
Fred Dretske's (1988) account of the causal role of intentional mental states was widely criticized for missing the target: he explained why a type of intentional state causes the type of bodily motion it does rather than some other type, when what we wanted was an account of how the intentional properties of these states play a causal role in each singular causal relation with a token bodily motion. I argue that the non-reductive metaphysics that Dretske defends for his account of behavior can be extended to the case of intentional states, and that this extension provides a way to show how intentional properties can play the causal role that we wanted explained.
Psychogenic depersonalization is an altered mental state consisting of an unusual discontinuity in the phenomenological perception of personal being; the individual is engulfed by feelings of unreality, self-detachment and unfamiliarity in which the self is felt to lack subjective perspective and the intuitive feeling of personal embodiment. A new sub-feature of depersonalization is delineated. 'Prosthesis' consists in the thought that the thinker is a 'mere thing'. It is a subjectively realized sense of the specific and objective 'thingness' of the particular object thought about. I show that prosthesis is an important cognitive feature of depersonalization, and may be psychologically connected with the tendency of depersonalized individuals to report 'philosophical' types of thinking. Indeed, several philosophical issues concerning the identity of the self appear to have been enhanced by prosthesis experiences. Thus, far more efficient than William James's experimental attempts to uncover philosophical truths under the influence of nitrous oxide intoxication, prosthesis may be a safe and recommended experience for philosophers. The history of depersonalization theories is presented from Krishaber to Freud, and the main approaches to prosthesis criticized. Finally, a fresh approach to psychogenic depersonalization is outlined on the basis of certain cognitive similarities with visual agnosia. This paper may be understood as continuing the Jamesian tradition of 'experimental abnormal psychology', that is, of examining extraordinary mental states with an eye to their philosophical implications.
Relatively poor memory for dreams is important evidence for Hobson et al.'s model of conscious states. We describe the time-gap experience as evidence that everyday memory for waking states may not be as good as they assume. As well as being surprisingly sparse, everyday memories may themselves be systematically distorted in the same manner that Revonsuo attributes uniquely to dreams. [Hobson et al.; Revonsuo].
The causal theory of action had suffered from inattention or linguistically motivated rejection until it was revived in 1963 by Donald Davidson. Since then the causal theory has had a continuing acceptance without having had an inspection of its assumptions. There are reasons to suspect that the theory is as unfounded as it is undoubted. The reasons reviewed here have to do with the definitive moment when states such as beliefs and desires must change character to become causal events.
Consciousness and intentionality are perhaps the two central phenomena in the philosophy of mind. Human beings are conscious beings: there is something it is like to be us. Human beings are intentional beings: we represent what is going on in the world. Correspondingly, our specific mental states, such as perceptions and thoughts, very often have a phenomenal character: there is something it is like to be in them. And these mental states very often have intentional content: they serve to represent the world. On the face of it, consciousness and intentionality are intimately connected. Our most important conscious mental states are intentional states: conscious experiences often inform us about the state of the world. And our most important intentional mental states are conscious states: there is often something it is like to represent the external world. It is natural to think that a satisfactory account of consciousness must respect its intentional structure, and that a satisfactory account of intentionality must respect its phenomenological character. With this in mind, it is surprising that in the last few decades, the philosophical study of consciousness and intentionality has often proceeded in two independent streams. This was not always the case. In the work of philosophers from Descartes and Locke to Brentano and Husserl, consciousness and intentionality were typically analyzed in a single package. But in the second half of the twentieth century, the dominant tendency was to concentrate on one topic or the other, and to offer quite separate analyses of the two. On this approach, the connections between consciousness and intentionality receded into the background. In the last few years, this has begun to change. The interface between consciousness and intentionality has received increasing attention on a number of fronts.
This attention has focused on such topics as the representational content of perceptual experience, the higher-order representation of conscious states, and the phenomenology of thinking. Two distinct philosophical groups have begun to emerge. One group focuses on ways in which consciousness might be grounded in intentionality. The other group focuses on ways in which intentionality might be grounded in consciousness.
Proponents of non-conceptual content have recruited it for various philosophical jobs. Some epistemologists have suggested that it may play the role of “the given” that Sellars is supposed to have exorcised from philosophy. Some philosophers of mind (e.g., Dretske) have suggested that it plays an important role in the project of naturalizing semantics as a kind of halfway between merely information bearing and possessing conceptual content. Here I will focus on a recent proposal by Jerry Fodor. In a recent paper he characterizes non-conceptual content in a particular way and argues that it is plausible that it plays an explanatory role in accounting for certain auditory and visual phenomena. So he thinks that there is reason to believe that there is non-conceptual content. On the other hand, Fodor thinks that non-conceptual content has a limited role. It occurs only in the very early stages of perceptual processing prior to conscious awareness. My paper examines Fodor’s characterization of non-conceptual content and his claims for its explanatory importance. I also discuss whether Fodor has made a case for limiting non-conceptual content to non-conscious, sub-personal mental states.
The paper contains an argument against functionalist theories of consciousness. The argument exploits an intuition to the effect that parts of an individual's brain (or of whatever else might realize the individual's mental states, processes, etc.) that are not in use at a time t, can have no bearing on whether that individual is conscious at t. After presenting the argument, I defend it against two possible objections, and then distinguish it from two arguments to which it appears, on the surface, to be similar.
In this paper I distinguish three alternatives to the functionalist account of qualitative states such as pain. These alternatives can be distinguished by their attitudes toward three claims: (1) there could be subjects functionally equivalent to us whose mental states differed in their qualitative character from ours, (2) there could be subjects functionally equivalent to us whose mental states lacked qualitative character altogether and (3) there could not be subjects like us in all objective respects whose qualitative states differed from ours. The physicalist-functionalist holds (1) and (3) but denies (2). The transcendentalist holds (1) and (2) and denies (3). I argue that both versions of physicalist-functionalism inherit the problem of property dualism which originally helped to motivate functionalist theories of mind. I also argue that neither version of physicalist-functionalism can distinguish in a principled way between those neurophysiological properties of a subject which are relevant to the qualitative character of that subject's mental states and those which are not. I conclude that the only alternative to a functionalist account of qualitative states is a transcendentalist account and that this alternative is not likely to appeal to the critics of functionalism.
People are minded creatures; we have thoughts, feelings, and emotions. More intriguingly, we grasp our own mental states, and conduct the business of ascribing them to ourselves and others without instruction in formal psychology. How do we do this? And what are the dimensions of our grasp of the mental realm? In this book, Alvin I. Goldman explores these questions with the tools of philosophy, developmental psychology, social psychology, and cognitive neuroscience. He refines an approach called simulation theory, which starts from the familiar idea that we understand others by putting ourselves in their mental shoes. Can this intuitive idea be rendered precise in a philosophically respectable manner, without allowing simulation to collapse into theorizing? Given a suitable definition, do empirical results support the notion that minds literally create (or attempt to create) surrogates of other people's mental states in the process of mindreading? Goldman amasses a surprising array of evidence from psychology and neuroscience that supports this hypothesis.
There is not a uniform kind of consciousness common to all conscious mental states: beliefs, emotions, perceptual experiences, pains, moods, verbal thoughts, and so on. Instead, we need a distinction between phenomenal and nonphenomenal consciousness. As if consciousness simpliciter were not mysterious enough, philosophers have recently focused their worries on phenomenal (or qualitative) consciousness, the kind that explains or constitutes there being "something it is like" to be in a given state.
The so-called unity of consciousness consists in the compelling sense we have that all our conscious mental states belong to a single conscious subject. Elsewhere I have argued that a mental state's being conscious is a matter of our being conscious of that state by having a higher-order thought (HOT) about it. Contrary to what is sometimes argued, this HOT model affords a natural explanation of our sense that our conscious states all belong to a single conscious subject. HOTs often group states together, so that each HOT is about a cluster of target states; single HOTs represent qualitative states as spatially unified and intentional states as unified inferentially. More important, each HOT makes one conscious of oneself in a seemingly immediate way, encouraging a sense of unity across HOTs. And the same considerations that make us assume that our first-person thoughts all refer to the same self apply also to HOTs; becoming conscious of our HOTs in introspection thus leads to a sense that our conscious states are unified in a single self. I argue that neither essential-indexical reference to oneself nor the alleged immunity to error through misidentification conflicts with this account. I close by discussing the apparent connection of unity with free agency.
I respond to an argument presented by Daniel Povinelli and Jennifer Vonk that the current generation of experiments on chimpanzee theory of mind cannot decide whether chimpanzees have the ability to reason about mental states. I argue that Povinelli and Vonk’s proposed experiment is subject to their own criticisms and that there should be a more radical shift away from experiments that ask subjects to predict behavior. Further, I argue that Povinelli and Vonk’s theoretical commitments should lead them to accept this new approach, and that experiments which offer subjects the opportunity to look for explanations for anomalous behavior should be explored.
Following Quine, Davidson, and Dennett, I take mental states and linguistic meaning to be individuated with reference to interpretation. The regulative principle of ideal interpretation is to maximize rationality, and this accounts for the distinctiveness and autonomy of the vocabulary of agency. This rationality-maxim can accommodate empirical cognitive-psychological investigation into the nature and limitations of human mental processing. Interpretivism is explicitly anti-reductionist, but in the context of Rorty's neo-pragmatism it provides a naturalized view of agents. The interpretivist strategy affords a less despondent view of constructive philosophical activity than Rorty's own.
The possibility that what looks red to me may look green to you has traditionally been known as "spectrum inversion." This possibility is thought to create difficulties for any attempt to define mental states in terms of behavioral dispositions or functional roles. If spectrum inversion is possible, then it seems that two perceptual states may have identical functional antecedents and effects yet differ in their qualitative content. In that case the qualitative character of the states could not be functionally defined.
Exploring intentionality from an externalist perspective, I distinguish three kinds of intentionality in the case of seeing, which I call transparent, translucent, and opaque respectively. I then extend the distinction from seeing to knowing, and then to believing. Having explicated the three-fold distinction, I then critically explore some important consequences that follow from granting that (i) there are transparent and translucent intentional states and (ii) these intentional states are mental states. These consequences include: first, that existential opacity is neither the mark of intentionality nor of the mental; second, that Sellars has not shown that all intentionality is non-relational; third, that a key Quinean argument for semantic indeterminacy rests on a false premise; fourth, that perceptual experience is intentional on Alston's account.
Philosophers, especially in recent years, have engaged in reflection upon the nature of experience. Such reflections have led them to draw a distinction between conscious and unconscious mentality in terms of whether or not it is like something to have a mental state. Reflection upon the history of psychology and upon contemporary cognitive science, however, identifies the distinction between conscious and unconscious mental states to be primarily one which is drawn in epistemic terms. Consciousness is an epistemic notion marking the special kind of first-person knowledge we have of our own mental states. Psychologists have found it expedient, for explanatory reasons, to ignore or reject the assumption that we have exhaustive first-person knowledge of our mental states and, in doing so, use the term 'unconscious' to indicate the peculiar epistemic status of certain mental states. It is argued that epistemic consciousness is distinct from the subjective-experiential notion of consciousness, from 'access-consciousness', and from higher-order thought conceptions of mental state consciousness, and that epistemic consciousness has an important role to play in philosophy of mind and in the history of psychology.
In philosophy the term intentionality refers to the feature possessed by mental states of being about things other than themselves. A serious question has been how to explain the intentionality of mental states. This paper starts with linguistic representations, and explores how an organism might use linguistic symbols to represent other things. Two research projects of Sue Savage-Rumbaugh, one explicitly teaching two Pan troglodytes to use lexigrams intentionally, and the other exploring the ability of several members of Pan paniscus to learn lexigram use and comprehension of English speech spontaneously when raised in an appropriate environment, are examined to explore the acquisition process. Although it is controversial whether the intentionality of mental states or that of linguistic symbols is primary, it is argued that the intentionality of linguistic symbols is primary and that studying how organisms learn to use linguistic symbols provides an avenue to understanding how intentionality is acquired by cognitive systems.
In this paper I highlight certain logical and metaphysical issues which arise in the characterisation of functionalism: in particular, its ready coherence with a physicalist ontology, its structuralism, and the impredicativity of functionalist specifications. I then utilise these points in an attempt to demonstrate fatal flaws in the functionalist programme. I argue that the brand of functionalism inspired by David Lewis fails to accommodate multiple realisability, though such accommodation was vaunted as a key improvement over the identity theory. More standard accounts of functionalism allow, by contrast, for far too much multiple realisability. Specifically, functionalist structures will be massively reduplicated in the human brain; so functionalism yields the absurd consequence that each human harbours large numbers of minds and exemplifies virtually all mental states.
This is an amended version of material that first appeared in A. Clark, Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing (MIT Press, Cambridge, MA, 1989), Chs. 1, 2, and 6. It appears in German translation in T. Metzinger (Ed.), Das Leib-Seele-Problem in der zweiten Hälfte des 20. Jahrhunderts (Frankfurt am Main: Suhrkamp, 1999).
In this essay I defend both the individual plausibility and conjoint consistency of two theses. One is the Intentionality Thesis: that all mental states are intentional (object-directed, exhibit ‘aboutness’). The other is the Self-Awareness Thesis: that if a subject is aware of an object, then the subject is also aware of being aware of that object. I begin by arguing for the individual prima facie plausibility of both theses. I then go on to consider a regress argument to the effect that the two theses are incompatible. I discuss three responses to that argument, and defend one of them.
Jerry Fodor has recently proposed a new entry into the list of information-based approaches to semantic content, aimed at explicating the general notion of representation for both mental states and linguistic tokens. The basic idea is that a token means what causes its production. The burden of the theory is to select the proper cause from the sea of causal influences which aid in generating any token, while at the same time avoiding the absurdity of everything's being literally meaningful (since everything has a cause). I argue that a detailed examination of the theory reveals that neither burden can be successfully shouldered.
This paper argues that Twin Earth twins belong to the same psychological natural kind, but that the reason for this is not that the causal powers of mental states supervene on local neural structure. Fodor’s argument for this latter thesis is criticized and found to rest on a confusion between it and the claim that Putnamian and Burgean type relational psychological properties do not affect the causal powers of the mental states that have them. While it is true that Putnamian and Burgean type relational psychological properties do not affect causal powers, it is false that no relational psychological properties do. Examples of relational psychological properties that do affect causal powers are given, and psychological laws are sketched that subsume twins in virtue of their instantiating these relational properties rather than their sharing the narrow contents of their thoughts.
Functionalism, the philosophical theory that defines mental states in terms of their causal relations to stimuli, overt behaviour, and other inner mental states, has often been accused of being unable to account for the qualitative character of our experiential states. Many times such objections to functionalism take the form of conceivability arguments. One is asked to imagine situations where organisms who are in a functional state that is claimed to be a particular experience either have the qualitative character of that experience altered or absent altogether. Many of these arguments are surprisingly advanced by materialist philosophers. I argue that if the conceivability arguments were successful against functionalism, then they would be successful against the alternative materialist views as well. So the conceivability arguments alone do not provide a good reason for materialists to abandon functionalism. I further argue that functionalism is best understood to be an empirical theory, and if it is so understood then the conceivability arguments have no force against it at all. A further consequence that emerges is that on an empirical functionalist view, qualia, if real, are properties in the domain of psychology.
In ‘Of Sensory Systems and the “Aboutness” of Mental States’, Kathleen Akins (1996) argues against what she calls ‘the traditional view’ about sensory systems, according to which they are detectors of features in the environment outside the organism. As an antidote, she considers the case of thermoreception, a system whose sensors send signals about how things stand with themselves and their immediate dermal surround (a ‘narcissistic’ sensory system); and she closes by suggesting that the signals from many sensory systems may not in any familiar sense be about anything at all. Her presentation of the issues, however, overlooks resources available to ‘the traditional view’—or so I shall argue. Akins’s own thumbnail sketch of what is wrong with the traditional view is that it asks, concerning a given sensory system, ‘what is it detecting?’, when we should instead be asking ‘what is it doing?’ (352). Her point is that on the traditional view the function of a sensory system—what it's ‘for’—is to detect or indicate (values of) features of the outside environment. But at least on one version of the traditional view—namely Ruth Millikan’s—this would never be the sole or main proper function of a sensory system. (Akins does not list Millikan as a traditionalist, but Millikan fits squarely Akins’s description of them, since she believes in a naturalistic theory of aboutness and thinks it should begin with the senses.) For Millikan (1989, 1993), the proper function of a sensory system is in the first instance enabling behavioural systems—in the simplest case, motor routines—to perform their proper function. This they do, roughly, by switching on and steering the behavioural routines. Where features of the outside environment come in is as Normal (= assumed-by-the-design) conditions for the successful performance of the sensory system's proper function.
That is, the only strategy for switching on and steering that is simple enough for evolution to have hit upon it, and reliable enough for evolution to have liked it, is a strategy which gears the steering to (values of) features of the outside environment. But as soon as one starts fleshing out the details of this story, one notices that they are probably quite different in the case of thermoreception from how they are with ‘distance’ senses such as vision and olfaction, a point which Akins overlooks.
According to functionalism, mental state types consist solely in relations to inputs, outputs, and other mental states. I argue that two central claims of a prominent and plausible type of scientific realism conflict with the functionalist position. These claims are that natural kinds in a mature science are not reducible to natural kinds in any other, and that all dispositional features of natural kinds can be explained at the type-level. These claims, when applied to psychology, have the consequence that at least some mental state types consist not merely in relations to inputs, outputs, and other mental states, but also in nonrelational properties that play a role in explaining functional relations. Consequently, a scientific realist of the sort I describe must reject functionalism.
In this paper it is argued that functional role semantics can be saved from criticisms, such as those raised by Putnam and by Fodor and Lepore, by indicating which beliefs and inferences are more constitutive in determining mental content. The Scylla is not to use vague expressions; the Charybdis is not to endorse the analytic/synthetic distinction. The core idea is to use reflective equilibrium as a strategy to pinpoint which are the beliefs and the inferences that constitute the content of a mental state. The beliefs and the inferences that are constitutive are those that are in reflective equilibrium in the process of attributing mental states to others.
Theories of what it is for a mental state to be conscious must answer two questions. We must say how we're conscious of our conscious mental states. And we must explain why we seem to be conscious of them in a way that's immediate. Thomas Natsoulas (1993) distinguishes three strategies for explaining what it is for mental states to be conscious. I show that the differences among those strategies are due to the divergent answers they give to the foregoing questions. Natsoulas finds most promising the strategy that amounts to the higher-order-thought hypothesis that I've defended elsewhere. But he raises a difficulty for it, which he thinks probably can be met only by modifying that strategy. I argue that this is unnecessary. The difficulty is a special case of a general question, the answer to which is independent of any issues about consciousness. So it's no part of a theory of consciousness to address the problem, much less solve it. Moreover, the difficulty seems to have intuitive force only given the picture that underlies the other two explanatory strategies, which both Natsoulas and I reject.
Abstract: Chomsky claims that consciousness has the structure of a grammatical translation apparatus, whereas Freud regards it as an unconscious mental state. It is shown how these theories can be reconciled within a metaphysics of consciousness that treats only conscious mental states as fundamental; sense perceptions, images, emotions, and the like as secondary; and dispositional (natural) mental states as tertiary. It should be emphasized that grammatical translation apparatuses and unconscious mental states, like all human dispositions, are to be analyzed as properties of the body, which is subject to certain laws and principles.
Simulation has emerged as an increasingly popular account of folk psychological (FP) talents at mind-reading: predicting and explaining human mental states. Where its rival (the theory-theory) postulates that these abilities are explained by mastery of laws describing the connections between beliefs, desires, and action, simulation theory proposes that we mind-read by "putting ourselves in another's shoes." This paper concerns connectionist architecture and the debate between simulation theory (ST) and the theory-theory (TT). It is only natural to associate TT with classical architectures, where rule-governed operations apply to explicit propositional representations. On the other hand, ST would seem better tuned to the procedurally oriented non-symbolic structures found in connectionist models. This paper explores the possible alignment between ST and connectionist architecture. Joe Cruz argues that connectionist models with distributed non-symbolic representations are particularly well suited to simulation theory. The purported linkage between connectionist architecture and simulation theory is criticized in this paper. The conclusion is that there are reasons for thinking that connectionist forms of representation are the enemy of both TT and ST. So the contribution of connectionism may be to suggest the need for an alternative to both views.
The aim of this paper is to give a new argument for naturalized action theory. The sketch of the argument is the following: the immediate mental antecedents of actions, that is, the mental states that make actions actions, are not normally accessible to introspection. But then we have no option but to turn to the empirical sciences if we want to characterize and analyze them.
The argument from multiple realizability is that, because quite diverse physical systems are capable of giving rise to identical psychological phenomena, mental states cannot be reduced to physical states. This influential argument depends upon a theory of reduction that has been defunct in the philosophy of science for at least fifteen years. Better theories are now available.
If psychology requires a taxonomy that categorizes mental states according to their causal powers, the common sense method of individuating mental states (a taxonomy by intentional content) is unacceptable, because mental states can have different intentional content but identical causal powers. This difference threatens both the vindication of belief/desire psychology and the viability of scientific theories whose posits include intentional states. To resolve this conflict, Fodor has proposed that for scientific purposes mental states should be classified by their narrow content. Such a classification is supposed to correspond to a classification by causal powers. Yet a state's narrow content is also supposed to determine its (broad) intentional content whenever that state is 'anchored' to a context. I examine the two most plausible accounts of narrow content implicit in Fodor's work, arguing that neither account can accomplish both goals.
The apparent incompatibility of mental states with physical explanations has long been a concern of philosophers of psychology. This incompatibility is thought to arise from the intentionality of mental states. But, Brentano notwithstanding, intentionality is an ordinary feature of higher-order behavior patterns in the classical literature of ethology.
Alfred R. Mele (2005). Action. In Frank Jackson & Michael Smith (eds.), The Oxford Handbook of Contemporary Philosophy. Oxford University Press.
What are actions? And how are actions to be explained? These two central questions of the philosophy of action call, respectively, for a theory of the nature of action and a theory of the explanation of actions. Many ordinary explanations of actions are offered in terms of such mental states as beliefs, desires, and intentions, and some also appeal to traits of character and emotions. Traditionally, philosophers have used and refined this vocabulary in producing theories of the explanation of intentional actions. An underlying presupposition is that common-sense explanations expressed in these terms have proved very useful. People understand their own and others' actions well enough to coordinate and sustain complicated, cooperative activities integral to normal human life, and that understanding is expressed largely in a common-sense psychological vocabulary. This article focuses on these issues.