By what types of properties do we specify twinges, toothaches, and other kinds of mental states? Wittgenstein considers two methods. Procedure one, direct, private acquaintance: A person connects a word to the sensation it specifies by noticing what that sensation is like in his own experience. Procedure two, outward signs: A person pins his use of a word to outward, pre-verbal signs of the sensation. I identify and explain a third procedure and show that we in fact specify many kinds of mental states in this way.
Perhaps because both explanation and prediction are key components of understanding, philosophers and psychologists often portray these two abilities as though they arise from the same competence, and sometimes they are taken to be the same competence. When explanation and prediction are associated in this way, they are taken to be two expressions of a single cognitive capacity that differ from one another only pragmatically. If the difference between prediction and explanation of human behavior is merely pragmatic, then any time I predict someone’s future behavior, I would at that moment also have an explanation of the behavior. I argue that advocates of both the theory theory and the simulation theory accept the symmetry of psychological prediction and explanation. However, there is very good reason to believe that this hypothesis is false. Just as we can predict the occurrence of some physical phenomena for which we have no explanation, we are also able to make accurate predictions of intentional behavior without having an explanation. I argue that the prediction of human behavior is most often accomplished by statistical induction rather than by an appeal to mental states. Explanations, however, are not given in these terms.
Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
Functionalists think an event's causes and effects, its 'causal role', determine whether it is a mental state and, if so, which kind. Functionalists see this causal role principle as supporting their orthodox materialism, their commitment to the neuroscientist's ontology. I examine and refute the functionalist's causal principle and the orthodox materialism that attends it.
You are asked to call out the letters on a chart during an eye examination: you see and then read out the letters ‘U’, ‘R’, and ‘X’. Common sense says that your perceptual experiences causally control your calling out the letters. Or suppose you are playing a game of chess intent on winning: you plan your strategy and move your chess pieces accordingly. Again, common sense says that your intentions and plans causally control your moving the chess pieces. These causal judgements are as plain and evident as any can be.
Jerry Fodor now holds (1990) that the content of mental state types opaquely taxonomized (de dicto content: DDC) is determined by the 'orthographical' syntax plus the computational/functional role of such states. Mental states whose tokens are both orthographically and truth-conditionally identical may differ with regard to the computational/functional role played by their respective representational cores. This makes them tantamount to different contentful states, i.e. states with different DDCs, insofar as they are opaquely taxonomized. Indeed, they cannot both be truthfully ascribed to a single subject at the same time. Some years ago (1987), Fodor postulated a notion of mental content which also went beyond that of a mental state's truth-conditions. States whose tokens differ in their truth-conditions, or broad content, might, he claimed, still share a narrow content (NC), which was causally responsible for the shared behavior of the subjects of these states. For instance, two molecularly identical individuals, living in environments alike in all respects except for the chemical substance of the phenomenally indistinguishable liquids filling their respective lakes and rivers, would behave similarly when having truth-conditionally different thoughts regarding those liquids. According to Fodor, this sameness of behavior was causally dependent on the sameness of the NC of the two individuals' truth-conditionally different thoughts. Now, this way of individuating mental states is still of interest for semantics. Indeed, NC allows one contextually to fix the broad content of a mental state token. Echoing Kaplan's notion of character, Fodor explained NC as a function that mapped contexts (of thought) onto broad contents. NC was thus invoked by Fodor mainly in order to account for sameness of intentional behavior.
But DDC also plays a role in explaining intentional behavior, precisely by explaining why a subject whose thought-tokens have identical truth-conditions may behave differently.
HOST is the theory that to be conscious of a mental state is to target it with a higher-order state (a 'HOS'), either an inner perception or a higher-order thought. Some champions of HOST maintain that the phenomenological character of a sensory state is induced in it by representing it with a HOS. I argue that this thesis is vulnerable to overwhelming objections that flow largely from HOST itself. In the process I answer two questions: 'What is a plausible sufficient condition for a quale's belonging to a particular mental state?' and 'What is the propositional content of HOSs that target sensory states?'.
Philosophers and psychologists have often maintained that in order to attribute mental states to other people one must have a ‘theory of mind’. This theory facilitates our grasp of other people’s mental states. Debate has then focussed on the form this theory should take. Recently a new approach has been suggested, which I call the ‘Direct Perception approach to social cognition’. This approach maintains that we can directly perceive other people’s mental states. It opposes traditional views on two counts: by claiming that mental states are observable and by claiming that we can attribute them to others without the need for a theory of mind. This paper argues that there are two readings of the direct perception claims: a strong one and a weak one. The Theory-theory is compatible with the weak version but not the strong one. The paper argues that the strong version of direct perception is untenable, drawing on evidence from the mirror neuron literature and arguments from the philosophy of science and perception to support this claim. It suggests that one traditional ‘theory of mind’ view, the ‘Theory-theory’ view, is compatible with the claim that mental states are observable, and concludes that direct perception views do not offer a viable alternative to theory of mind approaches to social cognition.
Richard Scheer has recently argued against what he calls the 'mental state' theory of intentions. He argues that versions of this theory fail to account for various characteristics of intention. In this essay we reply to Scheer's criticisms and argue that intentions are mental states.
It is not unusual to consider linguistic communication as a type of action performed by an individual —the speaker— intended to influence the mental state of another individual —the addressee. It seems more unusual to reach agreement on what the effect of such influence should be for the communication to be successful. According to the well-known Gricean view, the success of a communicative action depends precisely on the recognition by the addressee of the mental state of the speaker. In this essay, we want to analyse these mental states; however, our main concern is not with the mental states of the agents in an isolated communicative action, but with the mental states of the agents in a broader linguistic action, namely conversation.
The emergence of mental states from neural states by partitioning the neural phase space is analyzed in terms of symbolic dynamics. Well-defined mental states provide contexts inducing a criterion of structural stability for the neurodynamics that can be implemented by particular partitions. This leads to distinguished subshifts of finite type that are either cyclic or irreducible. Cyclic shifts correspond to asymptotically stable fixed points or limit tori, whereas irreducible shifts are obtained from generating partitions of mixing hyperbolic systems. These stability criteria are applied to the discussion of neural correlates of consciousness, to the definition of macroscopic neural states, and to aspects of the symbol grounding problem. In particular, it is shown that compatible mental descriptions, topologically equivalent to the neurodynamical description, emerge if the partition of the neural phase space is generating. If this is not the case, mental descriptions are incompatible or complementary. Consequences of this result for an integration or unification of cognitive science or psychology, respectively, will be indicated.
A comprehensive theory of implicit and explicit knowledge must explain phenomenal knowledge (e.g., knowledge regarding one's affective and motivational states), as well as propositional (i.e., “fact”-based) knowledge. Findings from several research areas (i.e., the subliminal mere exposure effect, artificial grammar learning, implicit and self-attributed dependency needs) are used to illustrate the importance of both phenomenal and propositional knowledge for a unified theory of implicit and explicit mental states.
Dienes & Perner's target article constitutes a significant advance in thinking about implicit knowledge. However, it largely neglects processing details and thus the time scale of mental states realizing propositional attitudes. Considering real-time processing raises questions about the possible brevity of implicit representation, the nature of processes that generate explicit knowledge, and the points of view from which knowledge may be represented. Understanding the propositional attitude analysis in terms of momentary mental states points the way toward answering these questions.
Abstract. In the first section of the paper I present Alan Turing's notion of effective memory, as it appears in his 1936 paper 'On Computable Numbers, With an Application to the Entscheidungsproblem'. This notion stands in surprising contrast with the way memory is usually thought of in the context of contemporary computer science. Turing's view (in 1936) is that for a computing machine to remember a previously scanned string of symbols is not to store an internal symbolic image of this string. Rather, memory consists in the fact that the past scanning of the string affects the behavior of the computer in the face of potential future inputs. In the second, central section of the paper I begin exploring how this view of Turing's bears upon contemporary discussions in the philosophy of mind. In particular, I argue that Turing's approach can be used to lend support to dispositional conceptions of the propositional attitudes, like the one recently presented by Matthews (2007), and that his effective memory manifests some of the characteristics of Millikan's (1996) pushmi-pullyu mental states.
The title of the target article suggests an agenda for research on cognitive evolution that is doubly flawed. It implies that we can learn directly about animals' mental states, and its focus on human uniqueness impels a search for an existence proof rather than for understanding what components of given cognitive processes are shared among species and why.
Abstract. This paper is concerned with the mental processes involved in intentional communication. I describe an agent's cognitive architecture as the set of cognitive dynamics (i.e., sequences of mental states with contents) she may entertain. I then describe intentional communication as one such specific dynamics, arguing against the prevailing view that communication consists in playing a role in a socially shared script. The cognitive capabilities needed for such dynamics are mindreading (i.e., the ability to reason upon another individual's mental states) and communicative planning (i.e., the ability to dynamically represent and act in a communicative situation).
Claims regarding collective or group mental states are fairly commonplace: we speak of things like the belief of the Church, the will of the faculty, and the opinion of the Supreme Court, often without considering what such claims really mean and whether they are true in any interesting sense. In this paper I take a threefold approach: first, I articulate several ways in which a group might be said to have beliefs and other mental states. Second, I explore the implications, positive and negative, of these accounts of collective mental states. Third, I give a brief defense of my own view despite its somewhat disturbing implications for our membership in Church, State, and other groups.
The opposition between behaviour- and mind-reading accounts of data on infants and non-human primates could be less dramatic than has been thought up to now. In this paper, I argue for this thesis by analysing a possible neuro-computational explanation of early mind-reading, based on a mechanism of associative generalization which is apt to implement the notion of mental states as intervening variables proposed by Andrew Whiten. This account captures important continuities between behaviour-reading and mind-reading, insofar as both are supposed to be just different kinds of generalization from perceptual experience. Specifically, I will argue that the projection of inner experiences to others which is involved in early mind-reading does not imply a computational leap beyond associative generalization from perceptual experience.
The meaning and significance of Benjamin Libet’s studies on the timing of conscious will have been widely discussed, especially by those wishing to draw sceptical conclusions about conscious agency and free will. However, certain important correctives for thinking about mental states and processes undermine the apparent simplicity and logic of Libet’s data. The appropriateness, relevance and ecological validity of Libet’s methods are further undermined by considerations of how we ought to characterise intentional actions, conscious intention, and what it means to act with conscious intent. Recent extensions of Libet’s paradigm using fMRI and decision-based tasks suffer from similar limitations. The result is that these sorts of laboratory studies of isolated, trivial, decontextualized bodily movements, in a context of extended (conscious) intentional experimental participation and cooperation, are of dubious and potentially misleading relevance to the study of agency.
This paper relates intentionality, a central feature of human consciousness, to brain functions controlling adaptive action. Mental intentionality, understood as the “aboutness” of mental states, includes two modalities: semantic intentionality, the attribution of meaning to mental states, and projective intentionality, the projection of conscious content into the world. We claim that both modalities are the evolutionary product of self-organized action, and discuss examples of animal behavior that illustrate some stages of this evolution. The adaptive advantages of self-organized action impacted on brain organization, leading to the formation of mammalian brain circuits that incorporate semantic intentionality in their modus operandi. Following the same line of reasoning, we suggest that projective intentionality could be explained as a result of habituation processes referenced to the dynamical interface of the body with the environment.
In this paper I distinguish two types of mental causation, called 'higher-level causation' and 'exploitation'. These notions superficially resemble the traditional problematic notions of supervenient causation and downward causation, but they are different in crucial respects. My new distinction is supported by a radically externalist competitor of the so-called Standard View of mental states, i.e. the view that mental states are brain states. I argue that on the Alternative View, the notions of 'higher-level causation' and 'exploitation' can in combination dissolve the problem of mental causation as standardly discussed.
A ‘Radical Simulationist’ account of how folk psychology functions has been developed by Robert Gordon. I argue that Radical Simulationism is false. In its simplest form it is not sufficient to explain our attribution of mental states to subjects whose desires and preferences differ from our own. Modifying the theory to capture these attributions invariably generates innumerable other false attributions. Further, the theory predicts that deficits in mentalizing ought to co-occur with certain deficits in imagining perceptually-based scenarios. I present evidence suggesting that this prediction is false, and outline further possible empirical tests of the theory.
This paper engages the extended cognition controversy by advancing a theory which fits nicely into an attractive and surprisingly unoccupied conceptual niche situated comfortably between traditional individualism and the radical externalism espoused by the majority of supporters of the extended mind hypothesis. I call this theory moderate active externalism, or MAE. In alliance with other externalist theories of cognition, MAE is committed to the view that certain cognitive processes extend across brain, body, and world—a conclusion which follows from a theory I develop in “Synergic Coordination: an argument for cognitive process externalism.” Yet, in contradistinction with radical externalism, and in agreement with the internalist orthodoxy, MAE defends the view that mental states are situated invariably inside our heads. This is done, inter alia, by developing a novel hypothesis regarding the vehicles of content (in “Extended cognition without externalized mental states”) and by criticizing arguments in support of mental state externalism (in “Reflections and objections”). The result, I believe, is a coherent theoretical alternative worthy of serious consideration.
Knowledge is standardly taken to be belief that is both true and justified (and perhaps meets other conditions as well). Timothy Williamson rejects the standard epistemology for its inability to solve the Gettier problem. The moral of this failure, he argues, is that knowledge does not factor into a combination that includes a mental state (belief) and an external condition (truth), but is itself a type of mental state. Knowledge is, according to his preferred account, the most general factive mental state. I argue, however, that Gettier cases pose a serious problem for Williamson’s epistemology: in these cases, the subject may have a factive mental state that fails to be cognitive. Hence, knowledge cannot be the most general factive mental state.
In AI, consciousness of self consists in a program having certain kinds of facts about its own mental processes and state of mind. We discuss what consciousness of its own mental structures a robot will need in order to operate in the common sense world and accomplish the tasks humans will give it. It's quite a lot. Many features of human consciousness will be wanted, some will not, and some abilities not possessed by humans have already been found feasible and useful in limited contexts. We give preliminary fragments of a logical language a robot can use to represent information about its own state of mind. A robot will often have to conclude that it cannot decide a question on the basis of the information in memory and therefore must seek information externally. Gödel's idea of relative consistency is used to formalize non-knowledge. Programs with the kind of consciousness discussed in this article do not yet exist, although programs with some components of it exist. Thinking about consciousness with a view to designing it provides a new approach to some of the problems of consciousness studied by philosophers. One advantage is that it focusses on the aspects of consciousness important for intelligent behavior.
The debate between the theory-theory and simulation theory has largely ignored issues of cognitive architecture. In the philosophy of psychology, cognition as symbol manipulation is the orthodoxy. The challenge from connectionism, however, has attracted vigorous and renewed interest. In this paper I adopt connectionism as the antecedent of a conditional: If connectionism is the correct account of cognitive architecture, then the simulation theory should be preferred over the theory-theory. I use both developmental evidence and constraints on explanation in psychology to support this claim.
We propose a distinction between precategorial, acategorial and categorial states within a scientifically oriented understanding of mental processes. This distinction can be specified by approaches developed in cognitive neuroscience and the analytical philosophy of mind. On the basis of a representational theory of mental processes, acategoriality refers to a form of knowledge that presumes fully developed categorial mental representations, yet refers to nonconceptual experiences in mental states beyond categorial states. It relies on a simultaneous experience of potential individual representations and their actual “representational ground”, an undifferentiated precategorial state. This simultaneity is possible if the mental state does not reside in a representation but in between representations. Acategoriality can be formally modeled as an unstable state of a dynamical mental system that is subject to particular stability criteria.