The approaches in question here are exhibited in examinations of specific problems, rather than surveyed or generally summarized. Most of the volume should interest philosophers. Recent linguistic theory has been torn between the generative semanticists, who fuse syntax and semantics in maintaining that "the rules of grammar are identical to the rules relating surface forms to their corresponding logical forms", and the interpretive semanticists, who find syntactic deep structure a well-defined notion and who believe that the semantic interpretation of sentences derives from inputs from several levels of linguistic structure. J. Bresnan, in "Sentence Stress and Syntactic Transformations," gives a clear and elegant version of her defense of one aspect of the interpretivist position. She argues that aspects of the stress pattern of sentences can be easily and compactly explained only if lexical items are inserted at the level of syntactic deep structure, before the application of syntactical transformations. W. C. Watt's "Late Lexicalizations" argues the generativist position that the lexical peculiarities of natural languages tend to be introduced at various stages in the application of syntactical transformations. Bresnan's paper is particularly helpful to philosophers who want to make sense of linguists' current arguments: her evidential appeals, reasoning, and terminology can be grasped by someone with little background in technical linguistics. The volume also includes three papers, two by Hamburger and Wexler and one by Peters and Ritchie, on the abstract theory of grammar, which has come some distance since Chomsky's contributions. These papers follow out various aspects of the realization that transformational rules, and even somewhat less powerful ones, are, when abstractly considered, too powerful to allow nonarbitrary solutions to the problem of identifying the grammars of particular languages.
Since its introduction, multivariate pattern analysis (MVPA), or 'neural decoding', has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the decoder's dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the dictum can be improved on, in order to better justify inferences about neural representation using MVPA.
1 Introduction
2 A Brief Primer on Neural Decoding: Methods, Application, and Interpretation
2.1 What is multivariate pattern analysis?
2.2 The informational benefits of multivariate pattern analysis
3 Why the Decoder's Dictum Is False
3.1 We don't know what information is decoded
3.2 The theoretical basis for the dictum
3.3 Undermining the theoretical basis
4 Objections and Replies
4.1 Does anyone really believe the dictum?
4.2 Good decoding is not enough
4.3 Predicting behaviour is not enough
5 Moving beyond the Dictum
6 Conclusion
This chapter situates the dispute over the metacognitive capacities of non-human animals in the context of wider debates about the phylogeny of metarepresentational abilities. It clarifies the nature of the dispute before contrasting two different accounts of the evolution of metarepresentation. One is first-person-based, claiming that metarepresentation emerged initially for purposes of metacognitive monitoring and control. The other is social in nature, claiming that metarepresentation evolved initially to monitor the mental states of others. These accounts make differing predictions about what we should expect to find in non-human animals: the former predicts that metacognitive capacities should be found in creatures incapable of equivalent forms of mindreading, whereas the latter predicts that they should not. The chapter elaborates and defends the latter form of account, drawing especially on what is known about decision-making and metacognition in humans. In doing so, the chapter shows that so-called 'uncertainty-monitoring' data from monkeys can just as well be explained in non-metarepresentational affective terms, as might be predicted by the social-evolutionary account.
ABSTRACT: The seminal work of David Marr, popularized in his classic work Vision, continues to exert a major influence on both cognitive science and philosophy. The interpretation of his work also co...
Humans have the capacity for awareness of many aspects of their own mental lives—their own experiences, feelings, judgments, desires, and decisions. We can often know what it is that we see, hear, feel, judge, want, or decide. This article examines the evolutionary origins of this form of self-knowledge. Two alternatives are contrasted and compared with the available evidence. One is first-person based: self-knowledge is an adaptation designed initially for metacognitive monitoring and control. The other is third-person based: self-knowledge depends on the prior evolution of a mindreading system which can then be directed toward the self. It is shown that the latter account is currently the better supported of the two.
We examined the extent to which perceived changes in visual imagery colorfulness impact the affect intensity associated with ordinary autobiographical events across time. We garnered support for the hypothesis that recent events become memorial phenomena via an emotion regulation process, such that positive events retained their affective pleasantness longer than negative events retained their affective unpleasantness because, in part, across two weeks the former retained their imagery colorfulness longer than the latter did. A similar but distinct model was unsupported. We discuss the significance of imagery colorfulness and affect intensity in the context of memory for everyday autobiographical events.
Questions about the nature of the relationship between language and extralinguistic cognition are old, but only recently has a new view emerged that allows for the systematic investigation of claims about linguistic structure, based on how it is understood or utilized outside of the language system. Our paper represents a case study for this interaction in the domain of event semantics. We adopt a transparency thesis about the relationship between linguistic structure and extralinguistic cognition, investigating whether different lexico-syntactic structures can differentially recruit the visual causal percept. A prominent analysis of causative verbs like move suggests reference to two distinct events and a causal relationship between them, whereas non-causative verbs like push do not so refer. In our study, we present English speakers with simple scenes that either do or do not support the perception of a causal link, and manipulate a one-sentence instruction for the evaluation of the scene. Preliminary results suggest that competent speakers of English are more likely to judge causative constructions than non-causative constructions as true of a scene where causal features are present. Implications for a new approach to the investigation of linguistic meanings and future directions are discussed.
Chalmers argues for the following two principles: computational sufficiency and computational explanation. In this commentary I present two criticisms of Chalmers' argument for the principle of computational sufficiency, which states that implementing the appropriate kind of computational structure suffices for possessing mentality. First, Chalmers only establishes that a system has its mental properties in virtue of the computations it performs in the trivial sense that any physical system can be described computationally to some arbitrary level of detail; further argumentation is required to show that the causal topology relevant to possessing a mind actually implements computations. Second, Chalmers' account rules out plausible cases of implementation due to its requirement of an isomorphism between the state-types of a computation and the physical system implementing the computation.
The "hard problem" of consciousness is a challenge for explanations of the nature of our phenomenal experiences. Chalmers has claimed that physicalist solutions to the challenge are ill-suited, due in part to the zombie argument against physicalism. Perry has suggested that the zombie argument begs the question against the physicalist, and presents no relevant threat to the view. Although seldom discussed in the literature, I show there is defensive merit to Perry's "parry" of the zombie attack. The success of the maneuver suggests a slight softening of the hard problem of consciousness for physicalists.
Anderson claims that the hypothesis of massive neural reuse is inconsistent with massive mental modularity. But much depends upon how each thesis is understood. We suggest that the thesis of massive modularity presented in Carruthers (2006) is consistent with the forms of neural reuse that are actually supported by the data cited, while being inconsistent with a stronger version of reuse that Anderson seems to support.
Using survey data collected from chief executives of nonprofit organizations and financial performance information, the current study examined the influence of individual chief executive characteristics on their perceptions of organization performance. The study found that executives with an internal locus of control, high collectivism values, and analytical decision styles have greater convergence between their perceptions of performance and a financial measure. The study findings also offer support for existing theories that suggest executive cognitions play a significant role in filtering information, ultimately influencing the accuracy of perceptions and the effectiveness of strategic choices.
Preparing the Next Generation of Oral Historians is an invaluable resource to educators seeking to bring history alive for students at all levels. Filled with insightful reflections on teaching oral history, it offers practical suggestions for educators seeking to create curricula, engage students, gather community support, and meet educational standards. By the close of the book, readers will be able to successfully incorporate oral history projects in their own classrooms.