We present the fundamentals of the quantum theoretical approach we have developed in the last decade to model cognitive phenomena that resisted modeling by means of classical logical and probabilistic structures, like Boolean, Kolmogorovian and, more generally, set theoretical structures. We first sketch the operational-realistic foundations of conceptual entities, i.e. concepts, conceptual combinations, propositions, decision-making entities, etc. Then, we briefly illustrate the application of the quantum formalism in Hilbert space to represent combinations of natural concepts, discussing its success in modeling a wide range of empirical data on concepts and their conjunction, disjunction and negation. Next, we naturally extend the quantum theoretical approach to model some long-standing ‘fallacies of human reasoning’, namely, the ‘conjunction fallacy’ and the ‘disjunction effect’. Finally, we put forward an explanatory hypothesis according to which human reasoning is a well-defined superposition of ‘emergent reasoning’ and ‘logical reasoning’, where the former generally prevails over the latter. The quantum theoretical approach explains human fallacies as the consequence of genuine quantum structures in human reasoning, i.e. ‘contextuality’, ‘emergence’, ‘entanglement’, ‘interference’ and ‘superposition’. As such, it is an alternative to the Kahneman–Tversky research programme, which instead aims to explain human fallacies in terms of ‘individual heuristics and biases’.
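The interference mechanism this kind of modeling relies on can be sketched numerically. The following is only a toy illustration in a two-dimensional Hilbert space with hypothetical amplitudes and a hypothetical relative phase, not the authors' fitted model: membership weights are squared amplitudes, and the conjunction is represented by an equal-weight superposition of the two concept states.

```python
import numpy as np

# Hypothetical states for concepts A and B; each component's squared
# modulus is the membership weight for that concept alone.
psi_A = np.array([0.85, np.sqrt(1 - 0.85**2)], dtype=complex)
psi_B = np.array([0.45, np.sqrt(1 - 0.45**2)], dtype=complex) * np.exp(1.2j)  # relative phase

mu_A = abs(psi_A[0])**2  # weight assigned to A alone (0.7225)
mu_B = abs(psi_B[0])**2  # weight assigned to B alone (0.2025)

# Superposed state representing the combination "A and B"
psi_AB = (psi_A + psi_B) / np.linalg.norm(psi_A + psi_B)
mu_AB = abs(psi_AB[0])**2

# A classical (Kolmogorovian) conjunction could never exceed
# min(mu_A, mu_B); the interference term lets mu_AB do exactly that,
# reproducing the overextension seen in conjunction-fallacy data.
```

With these numbers the combined weight exceeds the smaller single weight, which a classical probability model forbids; that gap is what the interference term accounts for.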
A growing consensus in the philosophy and psychology of concepts is that while theories such as the prototype, exemplar, and theory theories successfully account for some instances of concept formation and application, none of them successfully accounts for all such instances. I argue against this ‘new consensus’ and show that the problem is, in fact, more severe: the explanatory force of each of these theories is limited even with respect to the phenomena often cited to support it, as each fails to satisfy an important explanatory desideratum with respect to these phenomena. I argue that these explanatory shortcomings arise from a shared assumption on the part of these theories, namely, that they take similarity judgements and application of causal knowledge to be discrete elements in a theory of concepts. I further propose that the same assumption carries over into alternative theories offered by proponents of the new consensus: pluralism, eliminativism, and hybrid theories. I put forth a sketch of an integrated model of concept formation and application, which rejects this shared assumption and satisfies the explanatory desiderata I discuss. I suggest that this model undermines the motivation for hybrid, pluralist, and eliminativist accounts of concepts.
1 Introduction
2 The Similarity-Based Approach and the Importance of Theory
2.1 The similarity-based approach
2.2 The selection desideratum
2.3 Causal knowledge as satisfying the selection desideratum
3 The Theory-Based Approach and the Importance of Similarity
3.1 The theory-based approach
3.2 The range desideratum
3.3 Similarity as satisfying the range desideratum
4 An Integrated Approach to Concepts
4.1 An integrated model
4.2 The integrated theory versus hybrid theories of concepts
5 Conclusion
I take up three puzzles about our emotional and evaluative responses to fiction. First, how can we even have emotional responses to characters and events that we know not to exist, if emotions are as intimately connected to belief and action as they seem to be? One solution to this puzzle claims that we merely imagine having such emotional responses. But this raises the puzzle of why we would ever refuse to follow an author’s instructions to imagine such responses, since we happily imagine many other implausible things. A natural response to this second puzzle is that our responses to fiction are real, and so can’t just be conjured up in response to an author’s demands. However, this simple response is inadequate, because we often respond differently to people and events in fiction than we would if we encountered them in real life. Solving these three puzzles in a consistent way requires the notion of a “perspective” on a fictional world. I sketch an account of this intuitive but frustratingly amorphous notion: perspectives are tools for organizing our thinking, which can in turn alter our emotional and evaluative responses. Cultivating a perspective can be illuminating, entertaining, or corrupting, or all three at once.
The reconciliation of theories of concepts based on prototypes, exemplars, and theory-like structures is a longstanding problem in cognitive science. In response to this problem, researchers have recently tended to adopt either hybrid theories that combine various kinds of representational structure, or eliminative theories that replace concepts with a more finely grained taxonomy of mental representations. In this paper, we describe an alternative approach involving a single class of mental representations called “semantic pointers.” Semantic pointers are symbol-like representations that result from the compression and recursive binding of perceptual, lexical, and motor representations, effectively integrating traditional connectionist and symbolic approaches. We present a computational model using semantic pointers that replicates experimental data from categorization studies involving each prior paradigm. We argue that a framework involving semantic pointers can provide a unified account of conceptual phenomena, and we compare our framework to existing alternatives in accounting for the scope, content, recursive combination, and neural implementation of concepts.
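The recursive binding behind representations of this kind can be illustrated with circular convolution, the operation used in holographic reduced representations, on which the semantic pointer framework builds. The vocabulary items, roles, and dimensionality below are hypothetical, chosen only to show compression and approximate unbinding:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # vector dimensionality (hypothetical)

def unit(v):
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution, computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Approximate inverse: convolve with the involution of a
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

dog, agent, verb, chase = (unit(rng.standard_normal(D)) for _ in range(4))

# Compress a role-filler structure into a single fixed-width vector
sentence = unit(bind(agent, dog) + bind(verb, chase))

# Unbinding a role recovers a noisy version of its filler, which can be
# cleaned up by comparing against the known vocabulary
recovered = unbind(sentence, agent)
similarity = recovered @ dog / np.linalg.norm(recovered)
```

The recovered vector is much closer to `dog` than to any unrelated vocabulary item, which is what makes the compressed representation decodable despite being the same size as its parts.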
Does expertise within a domain of knowledge predict accurate self-assessment of the ability to explain topics in that domain? We find that expertise increases confidence in the ability to explain a wide variety of phenomena. However, this confidence is unwarranted; after actually offering full explanations, people are surprised by the limitations in their understanding. For passive expertise, miscalibration is moderated by education; those with more education are accurate in their self-assessments. But when those with more education consider topics related to their area of concentrated study, they also display an illusion of understanding. This “curse of expertise” is explained by a failure to recognize the amount of detailed information that had been forgotten. While expertise can sometimes lead to accurate self-knowledge, it can also create illusions of competence.
Most cognitive psychology experiments evaluate models of human cognition using a relatively small, well-controlled set of stimuli. This approach stands in contrast to current work in neuroscience, perception, and computer vision, which have begun to focus on using large databases of natural images. We argue that natural images provide a powerful tool for characterizing the statistical environment in which people operate, for better evaluating psychological theories, and for bringing the insights of cognitive science closer to real applications. We discuss how some of the challenges of using natural images as stimuli in experiments can be addressed through increased sample sizes, using representations from computer vision, and developing new experimental methods. Finally, we illustrate these points by summarizing recent work using large image databases to explore questions about human cognition in four different domains: modeling subjective randomness, defining a quantitative measure of representativeness, identifying prior knowledge used in word learning, and determining the structure of natural categories.
This paper presents a new response to the colour similarity argument, an argument that many people take to pose the greatest threat to colour physicalism. The colour similarity argument assumes that if colour physicalism is true, then colour similarities should be scrutable under standard physical descriptions of surface reflectance properties such as their spectral reflectance curves. Given this assumption, our evident failure to find such similarities at the reducing level seemingly proves fatal to colour physicalism. I argue that we should dispense with this assumption, and thus endorse the inscrutability of colour similarity. This strategy is inspired by parallels between the colour similarity argument and the explanatory gap between mind and body made vivid by Jackson’s (1986) knowledge argument, and in particular by type-B physicalist responses to that argument. This inscrutability response is further motivated by cases in chemistry and biochemistry in which analogous scrutability theses fail to hold. Along the way, I present a challenge to standard formulations of the colour similarity argument based on the extreme context sensitivity of the similarity relation. Most presentations of the argument fail to control for such contextual variation, which raises the distinct possibility that the argument equivocates on the similarity relation across its premises. Although ultimately inconclusive, this context challenge forces a significant reformulation of the colour similarity argument, and highlights the need for much greater care in handling claims about colour similarity.
The paper begins by drawing a number of ‘levels’ distinctions in epistemology. It notes that a theory of knowledge must be an attempt to obtain knowledge. It is suggested that we can make sense of much of the work found in analytic theory of knowledge by seeing three framework assumptions as underpinning this work. First, that to have philosophical knowledge of knowledge requires us to have an analysis. Second, that much of what we require from a theory of knowledge may be obtained by developing such analyses of first-order, concrete, empirical, propositional knowledge. Third, that the final arbiter of the correctness of such analyses is to be the carefully examined intuitions of the epistemologist. The paper attacks each aspect of this framework on the ground that this methodology will precisely not give us knowledge. In passing, comparisons are drawn with arguments that led to the demise of phenomenalism. The paper concludes with remarks about realism/anti-realism and consensus/disagreement in analytic epistemology. The paper recommends that we seek to develop theories of knowledge rather than analyses, and defends the position that such theories will precisely not be analyses.
Critics of the target article objected to our account of art appreciators' sensitivity to art-historical contexts and functions, the relations among the modes of artistic appreciation, and the weaknesses of aesthetic science. To rebut these objections and justify our program, we argue that the current neglect of sensitivity to art-historical contexts persists as a result of a pervasive aesthetic–artistic confound; we further specify our claim that basic exposure and the design stance are necessary conditions of artistic understanding; and we explain why many experimental studies do not belong to a psycho-historical science of art.
Research seeking a scientific foundation for the theory of art appreciation has raised controversies at the intersection of the social and cognitive sciences. Though equally relevant to a scientific inquiry into art appreciation, psychological and historical approaches to art developed independently and lack a common core of theoretical principles. Historicists argue that psychological and brain sciences ignore the fact that artworks are artifacts produced and appreciated in the context of unique historical situations and artistic intentions. After revealing flaws in the psychological approach, we introduce a psycho-historical framework for the science of art appreciation. This framework demonstrates that a science of art appreciation must investigate how appreciators process causal and historical information to classify and explain their psychological responses to art. Expanding on research about the cognition of artifacts, we identify three modes of appreciation: basic exposure to an artwork, the artistic design stance, and artistic understanding. The artistic design stance, a requisite for artistic understanding, is an attitude whereby appreciators develop their sensitivity to art-historical contexts by means of inquiries into the making, authorship, and functions of artworks. We defend and illustrate the psycho-historical framework with an analysis of existing studies on art appreciation in empirical aesthetics. Finally, we argue that the fluency theory of aesthetic pleasure can be amended to meet the requirements of the framework. We conclude that scientists can tackle fundamental questions about the nature and appreciation of art within the psycho-historical framework.
According to intentionalism, phenomenal properties are identical to, supervenient on, or determined by representational properties. Intentionalism faces a special challenge when it comes to accounting for the phenomenal character of moods. First, it seems that no intentionalist treatment of moods can capture their apparently undirected phenomenology. Second, it seems that even if we can come up with a viable intentionalist account of moods, we would not be able to motivate it in some of the same kinds of ways that intentionalism about other kinds of states can be motivated. In this article, I respond to both challenges: First, I propose a novel intentionalist treatment of moods on which they represent unbound affective properties. Then, I argue that this view is indirectly supported by the same kinds of considerations that directly support intentionalism about other mental states.
It is timely to assess Bayesian models, but Bayesianism is not a religion. Bayesian modeling is typically used as a tool to explain human data. Bayesian models are sometimes equivalent to other models, but have the advantage of explicitly integrating prior hypotheses with new observations. Any lack of representational or neural assumptions may be an advantage rather than a disadvantage.
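The point that Bayesian models make the integration of prior hypotheses with new observations explicit can be shown with a toy posterior computation; the hypothesis space and all numbers below are hypothetical:

```python
# Two hypotheses about a coin, with a strong prior for fairness,
# updated on the observation of eight heads in a row:
# posterior ∝ likelihood × prior.

priors = {"h_fair": 0.9, "h_biased": 0.1}
likelihood = {"h_fair": 0.5**8, "h_biased": 0.9**8}  # P(data | hypothesis)

unnorm = {h: priors[h] * likelihood[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

# Even against a 9:1 prior, eight heads shift belief decisively
# toward the biased-coin hypothesis.
```

The prior, the data model, and the arithmetic combining them are all on the surface of the computation, which is the sense in which such models wear their assumptions explicitly rather than hiding them in representational or neural machinery.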
In his book Doing Without Concepts, Edouard Machery argues that cognitive scientists should reject the concept of “concept” as a natural, psychological kind. I review and critique several of Machery’s arguments, focusing on his definition of “concept” and on claims against the possibility and utility of a unified account of concepts. In particular, I suggest ways in which prototype, exemplar, and theory-theory approaches to concepts might be integrated.
Although regular polysemy [e.g. producer for product (John read Dickens) or container for contents (John drank the bottle)] has been extensively studied, there has been little work on why certain polysemy patterns are more acceptable than others. We take an empirical approach to the question, in particular evaluating an account based on rules against a gradient account of polysemy that is based on various radical pragmatic theories (Fauconnier 1985; Nunberg 1995). Under the gradient approach, possible senses become more acceptable as they become more closely related to a word’s default meaning, and the apparent regularity of polysemy is an artefact of having many similarly structured concepts. Using methods for measuring conceptual structure drawn from cognitive psychology, Study 1 demonstrates that a variety of metrics along which possible senses can be related to a default meaning, including conceptual centrality, cue validity and similarity, are surprisingly poor predictors of whether shifts to those senses are acceptable. Instead, sense acceptability was better explained by rule-based approaches to polysemy (e.g. Copestake & Briscoe 1995). Study 2 replicated this finding using novel word meanings in which the relatedness of possible senses was varied. However, while individual word senses were better predicted by polysemy rules than conceptual metrics, our data suggested that rules (like producer for product) had themselves arisen to mark senses that, aggregated over many similar words, were particularly closely related.
The emerging field of cognition and culture has had some success in explaining the spread of counterintuitive religious concepts around the world. However, researchers have been reluctant to extend its findings to explain the widespread occurrence of culturally counterintuitive ideas in general. This article develops a broader notion of social counterintuitiveness to include ideas that violate shared expectations of a group of people and argues that the notion of social counterintuitiveness is more crucial to explaining cultural success of surprising ideas than the traditional notion of individual counterintuitiveness. Building on the context-based account of individual counterintuitiveness, the article also outlines how the once unorthodox cultural ideas become conventionalized over time only to be swept under the next wave of cultural innovation. By helping us peel away the layers of tradition that weave together the multilayered tapestry of culture, this account can be useful for understanding the development of cultural scaffolding that is needed to support the spread of maximally counterintuitive concepts such as widespread religious concepts of God and ghosts.
I raise two issues for Machery's discussion and interpretation of the theory-theory. First, I raise an objection against Machery's claim that theory-theorists take theories to be default bodies of knowledge. Second, I argue that theory-theorists' experimental results do not support Machery's contention that default bodies of knowledge include theories used in their own proprietary kind of categorization process.
We critically review key lines of evidence and theoretical argument relevant to Machery's proposal. These include interactions between different kinds of concept representations, unified approaches to explaining contextual effects on concept retrieval, and a critique of empirical dissociations as evidence for concept heterogeneity. We suggest there are good grounds for retaining the concept construct in human cognition.
It has often been suggested that people’s ordinary capacities for understanding the world make use of much the same methods one might find in a formal scientific investigation. A series of recent experimental results offer a challenge to this widely held view, suggesting that people’s moral judgments can actually influence the intuitions they hold both in folk psychology and in causal cognition. The present target article distinguishes two basic approaches to explaining such effects. One approach would be to say that the relevant competencies are entirely non-moral but that some additional factor (conversational pragmatics, performance error, etc.) then interferes and allows people’s moral judgments to affect their intuitions. Another approach would be to say that moral considerations truly do figure in the workings of the competencies themselves. It is argued that the data available now favor the second of these approaches over the first.
Machery emphasizes the centrality of explanation for theory-based approaches to concepts. I endorse Machery's emphasis on explanation and consider recent advances in psychology that point to the heterogeneity of explanation, with consequences for Machery's heterogeneity hypothesis about concepts.
Although cognitive scientists have learned a lot about concepts, their findings have yet to be organized in a coherent theoretical framework. In addition, after twenty years of controversy, there is little sign that philosophers and psychologists are converging toward an agreement about the very nature of concepts. Doing without Concepts (Machery 2009) attempts to remedy this state of affairs. In this article, I review the main points and arguments developed at greater length in Doing without Concepts.
Machery (2009) has proposed that the notion of ‘concept’ ought to be eliminated from the theoretical vocabulary of psychology. I raise three questions about his argument: (1) Is there a meaningful distinction between concepts and background knowledge? (2) Do we need to discard the hybrid view? (3) Are there really categories of things in the world that are the basis for concepts? Although I argue that the answer to all three is ‘no’, I agree with Machery's conclusion that seeking a single characterization of concepts will not be fruitful for understanding cognitive representations and processes.
Analogical inferences are an important consequence of the way semantic knowledge is represented, that is, with relations as explicit structures that can take arguments. We review evidence that this feature of semantic cognition successfully predicts how quickly and broadly children's concepts change with experience and show that Rogers & McClelland's (R&M's) parallel distributed processing (PDP) model fails to simulate these cognitive changes due to its handling of relational information.
Over the last quarter century, the dominant tendency in comparative cognitive psychology has been to emphasize the similarities between human and nonhuman minds and to downplay the differences as “one of degree and not of kind” (Darwin 1871). In the present target article, we argue that Darwin was mistaken: the profound biological continuity between human and nonhuman animals masks an equally profound discontinuity between human and nonhuman minds. To wit, there is a significant discontinuity in the degree to which human and nonhuman animals are able to approximate the higher-order, systematic, relational capabilities of a physical symbol system (PSS) (Newell 1980). We show that this symbolic-relational discontinuity pervades nearly every domain of cognition and runs much deeper than even the spectacular scaffolding provided by language or culture alone can explain. We propose a representational-level specification as to where human and nonhuman animals' abilities to approximate a PSS are similar and where they differ. We conclude by suggesting that recent symbolic-connectionist models of cognition shed new light on the mechanisms that underlie the gap between human and nonhuman minds.
In this précis we focus on phenomena central to the reaction against similarity-based theories that arose in the 1980s and that subsequently motivated the theory-based approach to semantic knowledge. Specifically, we consider (1) how concepts differentiate in early development, (2) why some groupings of items seem to form good or coherent categories while others do not, (3) why different properties seem central or important to different concepts, (4) why children and adults sometimes attest to beliefs that seem to contradict their direct experience, (5) how concepts reorganize between the ages of 4 and 10, and (6) the relationship between causal knowledge and semantic knowledge. The explanations our theory offers for these phenomena are illustrated with reference to a simple feed-forward connectionist model. The relationships between this simple model, the broader theory, and more general issues in cognitive science are discussed.
Structural and functional descriptions of technical artefacts play an important role in engineering practice. A complete description of a technical artefact involves a description of both functional and structural features. Engineers, moreover, assume that there is an intimate relationship between the function and structure of technical artefacts and they reason from functional properties to structural ones and vice versa. This raises the question of how structural and functional descriptions are related. The kind of inference patterns that establish coherence between structural and functional descriptions are explored in this paper, using the analysis of coherence creating relations of Thagard et al. Explanatory, analogical and practical inference patterns are discussed and it is argued that of these three, practical inferences may be the most important. Practical inferences, however, cannot provide a full underpinning of the coherence of structural and functional descriptions of technical artefacts. The paper ends with the suggestion that any account of the coherence of the structural and functional descriptions of technical artefacts must involve reference to their intentional features.
Keywords: Technical artefact; Structural description; Functional description; Coherence relation; Practical inference.