We present experimental evidence that people's modes of social interaction influence their construal of truth. Participants who engaged in a cooperative interaction about a topic were less inclined to agree that there was an objective truth about that topic than were those who engaged in a competitive interaction. Follow-up experiments ruled out alternative explanations and indicated that the changes in objectivity judgments are explained by argumentative mindsets: When people are in cooperative arguments, they see the truth as more subjective. These findings can help inform research on moral objectivism and, more broadly, on the distinctive cognitive consequences of different types of social interaction.
Three studies provided evidence that syntax influences intentionality judgments. In Experiment 1, participants made either speeded or unspeeded intentionality judgments about ambiguously intentional subjects or objects. Participants were more likely to judge grammatical subjects as acting intentionally in the speeded condition than in the unspeeded, reflective condition (thus showing an intentionality bias), whereas grammatical objects revealed the opposite pattern of results (thus showing an unintentionality bias). In Experiment 2, participants made an intentionality judgment about one of the two actors in a partially symmetric sentence (e.g., “John exchanged products with Susan”). The results revealed a tendency to treat the grammatical subject as acting more intentionally than the grammatical object. In Experiment 3, participants were encouraged to think about the events that such sentences typically refer to, and the tendency was significantly reduced. These results suggest a privileged relationship between language and central theory-of-mind concepts. More specifically, there may be two ways of determining intentionality judgments: (1) an automatic verbal bias to treat grammatical subjects (but not objects) as intentional, and (2) a deeper, more careful consideration of the events typically described by a sentence.
If folk science means individuals having well-worked-out mechanistic theories of the workings of the world, then it is not feasible. Laypeople’s explanatory understandings are remarkably coarse, full of gaps, and often full of inconsistencies. Even worse, most people overestimate their own understandings. Yet recent views suggest that formal scientists may not be so different. In spite of these limitations, science somehow works, and its success offers hope for the feasibility of folk science as well. The success of science arises from the ways in which scientists learn to leverage understandings in other minds and to outsource explanatory work through sophisticated methods of deference and simplification of complex systems. Three studies ask whether analogous processes might be present not only in laypeople but also in young children and thereby form a foundation for supplementing explanatory understandings almost from the start of our first attempts to make sense of the world.
The rise of appeals to intuitive theories in many areas of cognitive science must cope with a powerful fact. People understand the workings of the world around them in far less detail than they think. This illusion of knowledge depth has been uncovered in a series of recent studies and is caused by several distinctive properties of explanatory understanding not found in other forms of knowledge. Other experimental work has shown that people do have skeletal frameworks of expectations that constrain richer ad hoc theory construction on the fly. These frameworks are supplemented by an ability to evaluate and rely on the division of cognitive labour in one's culture, an ability shown to be present even in young children.
Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a “Misplaced Meaning” (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much they rely on outside sources to determine precise reference. Studies 1 and 2 demonstrate that children and adults show a highly consistent MM effect, and that it is stronger in young children. Study 3 demonstrates that adults are explicitly aware of the availability of outside knowledge, and that this awareness may be related to the strength of the MM effect. Study 4 rules out general overconfidence effects by examining a metalinguistic task in which adults are well calibrated.
What would it be like to have never learned English, but instead only to know Hopi, Mandarin Chinese, or American Sign Language? Would that change the way you think? Imagine entirely losing your language, as the result of stroke or trauma. You are aphasic, unable to speak or listen, read or write. What would your thoughts now be like? As the most extreme case, imagine having been raised without any language at all, as a wild child. What—if anything—would it be like to be such a person? Could you be smart; could you reminisce about the past, plan the future?
The present studies investigated children’s and adults’ intuitive beliefs about the physical nature of essences. Adults and children (ages 6 to 10 years) were asked to reason about two different ways of determining an unknown object’s category: taking a tiny internal sample from any part of the object (distributed view of essence), or taking a sample from one specific region (localized view of essence). Results from three studies indicated that adults strongly endorsed the distributed view, and children showed a developmental shift from a localized to a distributed view with increasing age. These results suggest that even children go beyond mere placeholder notions of essence, committing to conceptual frameworks of how essences might be physically instantiated.
The ability to learn the direction of causal relations is critical for understanding and acting in the world. We investigated how children learn causal directionality in situations in which the states of variables are temporally dependent (i.e., autocorrelated). In Experiment 1, children learned about causal direction by comparing the states of one variable before versus after an intervention on another variable. In Experiment 2, children reliably inferred causal directionality merely from observing how two variables change over time; they interpreted Y changing without a change in X as evidence that Y does not influence X. Both of these strategies make sense if one believes the variables to be temporally dependent. We discuss the implications of these results for interpreting previous findings. More broadly, given that many real-world environments are characterized by temporal dependency, these results suggest strategies that children may use to learn the causal structure of their environments.
Machery rightly points out a diverse set of phenomena associated with concepts that create challenges for many traditional views of their nature. It may be premature, however, to give up such views completely. Here I defend the possibility of hybrid models of concept structure.
We introduce two notions, the shadows and the shallows of explanation, in opening up explanation to broader, interdisciplinary investigation. The shadows of explanation refer to past philosophical efforts to provide either a conceptual analysis of explanation or in some other way to pinpoint the essence of explanation. The shallows of explanation refer to the phenomenon of having surprisingly limited everyday, individual cognitive abilities when it comes to explanation. Explanations are ubiquitous, but they typically are not accompanied by the depth that we might, prima facie, expect. We explain the existence of the shadows and shallows of explanation in terms of there being a theoretical abyss between explanation and the richer, theoretical structures that are often attributed to people. We offer an account of the shallows, in particular, both in terms of shorn-down, internal, mental machinery, and in terms of an enriched, public symbolic environment, relative to the currently dominant ways of thinking about cognition and the world.
Philip E. Tetlock's finding that "hedgehog" experts are worse predictors than "foxes" offers fertile ground for future research. Are experts as likely to exhibit hedgehog- or fox-like tendencies in areas that call for explanatory, diagnostic, and skill-based expertise as they did when Tetlock called on experts to make predictions? Do particular domains of expertise curtail or encourage different styles of expertise? Can we trace these different styles to childhood? Finally, can we nudge hedgehogs to be more like foxes? Current research can only grope at the answers to these questions, but they are essential to gauging the health of expert political judgment.
Rogers & McClelland's (R&M's) précis represents an important effort to address key issues in concepts and categorization, but few of the simulations deliver what is promised. We argue that the models are seriously underconstrained, importantly incomplete, and psychologically implausible; more broadly, R&M dwell too heavily on the apparent successes without comparable concern for limitations already noted in the literature.
Two very different insights motivate characterizing the brain as a computer. One depends on mathematical theory that defines computability in a highly abstract sense. Here the foundational idea is that of a Turing machine. Not an actual machine, the Turing machine is really a conceptual way of making the point that any well-defined function could be executed, step by step, according to simple 'if-you-are-in-state-P-and-have-input-Q-then-do-R' rules, given enough time (maybe infinite time) [see COMPUTATION]. Insofar as the brain is a device whose input and output can be characterized in terms of some mathematical function -- however complicated -- then in that very abstract sense, it can be mimicked by a Turing machine. Given what is known so far, brains do seem to depend on cause-effect operations, and hence brains appear to be, in some formal sense, equivalent to a Turing machine [see CHURCH-TURING THESIS]. On its own, however, this reveals nothing at all of how the mind-brain actually works. The second insight depends on looking at the brain as a biological device that processes information from the environment to build complex representations that enable the brain to make predictions and select advantageous behaviors. Where necessary to avoid ambiguity, we will refer to the first notion of computation as algorithmic computation, and the second as information processing computation.
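The 'if-you-are-in-state-P-and-have-input-Q-then-do-R' rule format described above can be made concrete with a minimal sketch. The simulator and rule table below are illustrative assumptions (not drawn from the source): a tiny Python Turing-machine interpreter whose rules increment a binary number written least-significant-bit first.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Repeatedly apply 'if in state P reading symbol Q, then do R' rules
    until no rule matches (halt) or max_steps is exhausted."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        new_state, write, move = rules[(state, symbol)]
        cells[head] = write   # R: write a symbol...
        state = new_state     # ...change state...
        head += move          # ...and move the head
    return "".join(cells[i] for i in sorted(cells))

# Illustrative rule table: increment a binary number stored LSB-first.
# Trailing 1s become 0s (carry), then the first 0 or blank becomes 1.
INCREMENT = {
    ("start", "1"): ("start", "0", 1),  # propagate the carry rightward
    ("start", "0"): ("done", "1", 1),   # absorb the carry
    ("start", "_"): ("done", "1", 1),   # extend the tape with a new digit
}
```

For example, `run_turing_machine(INCREMENT, "110")` turns the LSB-first encoding of 3 into that of 4, returning `"001"`. The point of the sketch is only that each step is a purely mechanical state-plus-symbol lookup, which is all the abstract notion of algorithmic computation requires.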
The more carefully we look, the more impressive the repertoire of infant concepts seems to be. Across a wide range of tasks, infants seem to be using concepts corresponding to surprisingly high-level and abstract categories and relations. It is tempting to try to explain these abilities in terms of a core capacity in spatial cognition that emerges very early in development and then gets extended beyond reasoning about direct spatial arrays and events. Although such a spatial cognitive capacity may indeed form one valuable basis for later cognitive growth, it seems unlikely that it can be the sole or even primary explanation for either the impressive conceptual capacities of infants or the ways in which concepts develop.
The assumption of domain specificity has been invaluable to the study of the emergence of biological thought in young children. Yet, domains of thought must be understood within a broader context that explains how those domains relate to the surrounding cultures, to different kinds of cognitive constraints, to framing effects, to abilities to evaluate knowledge, and to the ways in which domain-specific knowledge in any individual mind is related to knowledge in other minds. All of these issues must come together to have a full account of conceptual development in biology.
Does expertise within a domain of knowledge predict accurate self-assessment of the ability to explain topics in that domain? We find that expertise increases confidence in the ability to explain a wide variety of phenomena. However, this confidence is unwarranted; after actually offering full explanations, people are surprised by the limitations in their understanding. For passive expertise, miscalibration is moderated by education; those with more education are accurate in their self-assessments. But when those with more education consider topics related to their area of concentrated study, they also display an illusion of understanding. This “curse of expertise” is explained by a failure to recognize the amount of detailed information that had been forgotten. While expertise can sometimes lead to accurate self-knowledge, it can also create illusions of competence.
The article examines how learning multiple tasks interacts with neural architectures and the flow of information through those architectures. It approaches the question by using the idealization of an artificial neural network, where it is possible to ask more precise questions about the effects of modular versus nonmodular architectures as well as the effects of sequential versus simultaneous learning of tasks. Prior work has demonstrated a clear advantage of modular architectures when the two tasks must be learned at the same time from the start, but this advantage may disappear when one task is first learned to a criterion before the second task is undertaken. Indeed, in some cases of sequential learning, nonmodular networks achieve success levels comparable to those of modular networks. In particular, if a nonmodular network is to learn two tasks of different difficulty and the more difficult task is presented first and learned to a criterion, then the network will learn the second, easier one without permanent degradation of the first. In contrast, if the easier task is learned first, a nonmodular network may perform significantly less well than a modular one. The reason for this difference seems to be that presenting the more difficult task first minimizes interference between the two tasks. More broadly, the studies summarized in this article imply that no single learning architecture is optimal for all situations.
Although recent work has emphasised the importance of naïve theories to categorisation, there has been little work examining the grain of analysis at which causal information normally influences categorisation. That level of analysis may often go unappreciated because of an “illusion of explanatory depth”, in which people think they mentally represent causal explanatory relations in far more detail than they really do. Naïve theories therefore might seem to be irrelevant to categorisation, or perhaps they only involve noting the presence of unknown essences. I argue instead that adults and children alike effectively track high-level causal patterns, often outside awareness, and that this ability is essential to categorisation. Three examples of such pattern-tracking are described. The shallowness of our explanatory understandings may be further supported by a reliance on the division of cognitive labour that occurs in all cultures, a reliance that arises from well-developed abilities to cluster knowledge in the minds of others.
In this book, Carey gives cognitive science a detailed account of the origins of concepts and an explanation of how origin stories are essential to understanding what concepts are and how we use them. At the same time, this book's details help highlight the challenge of explaining how conceptual change works with real-world concepts that often have heavily degraded internal content.
We investigated how people design interventions to affect the outcomes of causal systems. We propose that the abstract structural properties of a causal system, in addition to people's content and mechanism knowledge, influence decisions about how to intervene. In Experiment 1, participants preferred to intervene at specific locations in a causal chain regardless of which content variables occupied those positions. In Experiment 2, participants were more likely to intervene on root causes versus immediate causes when they were presented with a long-term goal versus a short-term goal. These results show that the structural properties of a causal system can guide the design of interventions.
H actually ran the program on a number of large pieces of English text, though from my point of view it is the ability and the willingness to do this that is the motivation for learning Perl. H's Perl code takes all periods '.' to mark sentence breaks, and of course not all periods really do mark sentence breaks: the previous one earlier in this sentence does not, nor does the period after an abbreviation, most of the time—though the next one does, e.g. The task of writing a program that can distinguish sentence-final periods from all other periods is quite an interesting and challenging one.
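The contrast described above, between naively splitting on every period and actually distinguishing sentence-final periods, can be sketched briefly. The source describes a Perl program; the following is a Python analogue written for illustration, and the abbreviation list and heuristic are my own assumptions, not H's code:

```python
import re

def naive_split(text):
    """The naive approach: treat every period as a sentence break."""
    return [s.strip() for s in text.split(".") if s.strip()]

# Illustrative, deliberately incomplete list of abbreviations.
ABBREVIATIONS = {"Dr", "Mr", "Mrs", "e.g", "i.e", "etc"}

def smarter_split(text):
    """A heuristic: break only at periods followed by whitespace and a
    capital letter, and not preceded by a known abbreviation."""
    sentences, start = [], 0
    for m in re.finditer(r"\.(?=\s+[A-Z])", text):
        before = text[start:m.start()].rsplit(None, 1)
        prev_word = before[-1] if before else ""
        if prev_word in ABBREVIATIONS:
            continue  # e.g. the period in "Dr." is not sentence-final
        sentences.append(text[start:m.end()].strip())
        start = m.end()
    tail = text[start:].strip()
    if tail:
        sentences.append(tail)
    return sentences
```

On `"Dr. Smith arrived. He sat down."`, the naive version yields three fragments, splitting inside "Dr.", while the heuristic version yields the two intended sentences. Even this improved sketch fails on plenty of real text (abbreviations it doesn't know, sentences starting with lowercase words), which is exactly why the task is interesting.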