Responding to recent concerns about the reliability of the published literature in psychology and other disciplines, we formed the X-Phi Replicability Project to estimate the reproducibility of experimental philosophy. Drawing on a representative sample of 40 x-phi studies published between 2003 and 2015, we enlisted 20 research teams across 8 countries to conduct a high-quality replication of each study in order to compare the results to the original published findings. We found that x-phi studies – as represented in our sample – successfully replicated about 70% of the time. We discuss possible reasons for this relatively high replication rate in the field of experimental philosophy and offer suggestions for best research practices going forward.
It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal–mechanical explanation.
1 Introduction
2 What a Great Many Phenomena Bayesian Decision Theory Can Model
3 The Case of Information Integration
4 How Do Bayesian Models Unify?
5 Bayesian Unification: What Constraints Are There on Mechanistic Explanation?
5.1 Unification constrains mechanism discovery
5.2 Unification constrains the identification of relevant mechanistic factors
5.3 Unification constrains confirmation of competitive mechanistic models
6 Conclusion
Appendix
Courtesy of its free energy formulation, the hierarchical predictive processing theory of the brain (PTB) is often claimed to be a grand unifying theory. To test this claim, we examine a central case: activity of mesocorticolimbic dopaminergic (DA) systems. After reviewing the three most prominent hypotheses of DA activity—the anhedonia, incentive salience, and reward prediction error hypotheses—we conclude that the evidence currently vindicates explanatory pluralism. This vindication implies that the grand unifying claims of advocates of PTB are unwarranted. More generally, we suggest that scientific progress in the cognitive sciences is unlikely to take the form of a single overarching grand unifying theory.
According to a growing trend in theoretical neuroscience, the human perceptual system is akin to a Bayesian machine. The aim of this article is to clearly articulate the claims that perception can be considered Bayesian inference and that the brain can be considered a Bayesian machine, some of the epistemological challenges to these claims, and some of the implications of these claims. We address two questions: (i) How are Bayesian models used in theoretical neuroscience? (ii) From the use of Bayesian models in theoretical neuroscience, have we learned, or can we hope to learn, that perception is Bayesian inference or that the brain is a Bayesian machine? From actual practice in theoretical neuroscience, we argue for three claims. First, currently Bayesian models do not provide mechanistic explanations; instead they are useful devices for predicting and systematizing observational statements about people's performances in a variety of perceptual tasks. That is, currently we should have an instrumentalist attitude towards Bayesian models in neuroscience. Second, the inference typically drawn from Bayesian behavioural performance in a variety of perceptual tasks to underlying Bayesian mechanisms should be understood within the three-level framework laid out by David Marr ([1982]). Third, we can hope to learn that perception is Bayesian inference or that the brain is a Bayesian machine to the extent that Bayesian models will prove successful in yielding secure and informative predictions of both subjects' perceptual performance and features of the underlying neural mechanisms.
The free-energy principle claims that biological systems behave adaptively, maintaining their physical integrity, only if they minimize the free energy of their sensory states. Originally proposed to account for perception, learning, and action, the free-energy principle has been applied to the evolution, development, morphology, and function of the brain, and has been called a “postulate,” a “mandatory principle,” and an “imperative.” While it might afford a theoretical foundation for understanding the complex relationship between physical environment, life, and mind, its epistemic status and scope are unclear. Also unclear is how the free-energy principle relates to prominent theoretical approaches to life science phenomena, such as organicism and mechanicism. This paper clarifies both issues, and identifies limits and prospects for the free-energy principle as a first principle in the life sciences.
Some naturalistic philosophers of mind subscribing to the predictive processing theory of mind have adopted a realist attitude towards the results of Bayesian cognitive science. In this paper, we argue that this realist attitude is unwarranted. The Bayesian research program in cognitive science does not possess special epistemic virtues over alternative approaches for explaining mental phenomena involving uncertainty. In particular, the Bayesian approach is not simpler, more unifying, or more rational than alternatives. It is also contentious that the Bayesian approach is overall better supported by the empirical evidence. So, to develop philosophical theories of mind on the basis of a realist interpretation of results from Bayesian cognitive science is unwarranted. Naturalistic philosophers of mind should instead adopt an anti-realist attitude towards these results and remain agnostic as to whether Bayesian models are true. For an exclusive focus on, and praise of, Bayes within debates about the predictive processing theory will impede progress in philosophical understanding of scientific practice in computational cognitive science, as well as of the architecture of the mind.
Modularity is one of the most important concepts used to articulate a theory of cognitive architecture. Over the last 30 years, the debate in many areas of the cognitive sciences and in philosophy of psychology about what modules are, and to what extent our cognitive architecture is modular, has made little progress. After providing a diagnosis of this lack of progress, this article suggests a remedy. It argues that the theoretical framework of network science can be brought to bear on the traditional modularity debate, facilitating our progress in articulating a good theory of the human cognitive architecture.
Life-science phenomena are often explained by specifying the mechanisms that bring them about. The new mechanistic philosophers have done much to substantiate this claim and to provide us with a better understanding of what mechanisms are and how they explain. Although there is disagreement among current mechanists on various issues, they share a common core position and a seeming commitment to some form of scientific realism. But is such a commitment necessary? Is it the best way to go about mechanistic explanation? In this article, we propose an alternative antirealist account that also fits explanatory practice in the life sciences. We pay special attention to mechanistic models, i.e. scientific models that involve a mechanism, and to the role of coherence considerations in building such models. To illustrate our points, we consider the mechanism for the action potential.
1 Introduction
2 Some Core Features of Mechanistic Explanation
3 Scientific Realism and Mechanistic Explanation
4 Antirealist Mechanistic Explanation: The Case of the Action Potential
5 Some Outstanding Issues for the Antirealist Mechanist
6 Two Problems for the Realist Mechanist
7 Conclusions
This paper brings together results from the philosophy and the psychology of explanation to argue that there are multiple concepts of explanation in human psychology. Specifically, it is shown that pluralism about explanation coheres with the multiplicity of models of explanation available in the philosophy of science, and it is supported by evidence from the psychology of explanatory judgment. Focusing on the case of a norm of explanatory power, the paper concludes by responding to the worry that if there is a plurality of concepts of explanation, one will not be able to normatively evaluate what counts as good explanation.
Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge facing this era is not a lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.
The rise of Bayesianism in cognitive science promises to shape the debate between nativists and empiricists into more productive forms—or so have claimed several philosophers and cognitive scientists. The present paper explicates this claim, distinguishing different ways of understanding it. After clarifying what is at stake in the controversy between nativists and empiricists, and what is involved in current Bayesian cognitive science, the paper argues that Bayesianism offers not a vindication of either nativism or empiricism, but one way to talk precisely and transparently about the kinds of mechanisms and representations underlying the acquisition of psychological traits without a commitment to an innate language of thought.
How should we understand the claim that people comply with social norms because they possess the right kinds of beliefs and preferences? I answer this question by considering two approaches to what it is to believe (and prefer), namely: representationalism and dispositionalism. I argue for a variety of representationalism, viz. neural representationalism. Neural representationalism is the conjunction of two claims. First, what is essential to having beliefs and preferences is having certain neural representations. Second, neural representations are often necessary to adequately explain behaviour. After having canvassed one promising way to understand what neural representations could be, I argue that the appeal to beliefs and preferences in explanations of paradigmatic cases of norm compliance should be understood as an appeal to neural representations.
Colombo’s (Phenomenology and the Cognitive Sciences, 2013) plea for neural representationalism is the focus of a recent contribution to Phenomenology and the Cognitive Sciences by Daniel D. Hutto and Erik Myin. In that paper, Hutto and Myin have tried to show that my arguments fail badly. Here, I want to respond to their critique by clarifying the type of neural representationalism put forward in my (Phenomenology and the Cognitive Sciences, 2013) piece, and to take the opportunity to make a few remarks of general interest concerning what Hutto and Myin have dubbed “the Hard Problem of Content.”
According to the reward-prediction error hypothesis (RPEH) of dopamine, the phasic activity of dopaminergic neurons in the midbrain signals a discrepancy between the predicted and currently experienced reward of a particular event. It can be claimed that this hypothesis is deep, elegant, and beautiful, representing one of the largest successes of computational neuroscience. This paper examines this claim, making two contributions to the existing literature. First, it provides a comprehensive historical account of the main steps that led to the formulation and subsequent success of the RPEH. Second, in light of this historical account, it explains in which sense the RPEH is explanatory and under which conditions it can be justifiably deemed deeper than the incentive salience hypothesis of dopamine, which is arguably the most prominent contemporary alternative to the RPEH.
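The quantitative core of the reward-prediction error hypothesis can be conveyed with a minimal temporal-difference learning sketch. This is an illustrative toy model, not code from the paper; the function name, learning rate, and reward values are all assumptions:

```python
# Minimal temporal-difference (TD) sketch of the reward-prediction
# error hypothesis: a "dopamine-like" signal delta is the difference
# between the reward received and the reward predicted.

def td_update(value, reward, alpha=0.1):
    """Return the updated reward prediction and the prediction error."""
    delta = reward - value          # reward-prediction error
    return value + alpha * delta, delta

# With repeated pairings of a cue and a constant reward, the
# prediction converges on the reward and the error shrinks toward
# zero, mirroring how phasic dopamine responses diminish for fully
# predicted rewards.
value = 0.0
for _ in range(100):
    value, delta = td_update(value, reward=1.0)
```

On this toy model, an unexpected omission of the reward after learning (calling `td_update(value, reward=0.0)`) yields a negative prediction error, paralleling the pause in dopaminergic firing observed when a predicted reward is omitted.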
A popular view in philosophy of science contends that scientific reasoning is objective to the extent that the appraisal of scientific hypotheses is not influenced by moral, political, economic, or social values, but only by the available evidence. A large body of results in the psychology of motivated reasoning has put pressure on the empirical adequacy of this view. The present study extends this body of results by providing direct evidence that the moral offensiveness of a scientific hypothesis biases explanatory judgment along several dimensions, even when prior credence in the hypothesis is controlled for. Furthermore, it is shown that this bias is insensitive to an economic incentive to be accurate in the evaluation of the evidence. These results help call into question the attainability of the ideal of a value-free science.
Can facts about subpersonal states and events be constitutively relevant to personal-level phenomena? And can knowledge of these facts inform explanations of personal-level phenomena? Some philosophers, like Jennifer Hornsby and John McDowell, argue for two negative answers, whereby questions about persons and their behavior cannot be answered by using information from subpersonal psychology: knowledge of subpersonal states and events cannot inform personal-level explanations in a way that casts light on what constitutes persons’ behaviors. In this paper I argue against this position. After having distinguished between enabling and constitutive relevance, I defend the claim that at least some facts about subpersonal states and events are constitutively relevant to some personal-level phenomena, and therefore can, and sometimes should, inform personal-level explanations. I draw some of the possible consequences of my claim for our understanding of personal-level behavior by focusing on the phenomenon of addiction.
A widely shared view in the cognitive sciences is that discovering and assessing explanations of cognitive phenomena whose production involves uncertainty should be done in a Bayesian framework. One assumption supporting this modelling choice is that Bayes provides the best approach for representing uncertainty. However, it is unclear that Bayes possesses special epistemic virtues over alternative modelling frameworks, since a systematic comparison has yet to be attempted. Currently, then, it is premature to assert that cognitive phenomena involving uncertainty are best explained within the Bayesian framework. As a forewarning, progress in cognitive science may be hindered if too many scientists continue to focus their efforts on Bayesian modelling, which risks monopolizing scientific resources that may be better allocated to alternative approaches.
Current explanatory frameworks for social norms pay little attention to why and how brains might carry out computational functions that generate norm compliance behavior. This paper expands on the existing literature by laying out the beginnings of a neurocomputational framework for social norms and social cognition, which can be the basis for advancing our understanding of the nature and mechanisms of social norms. Two neurocomputational building blocks are identified that might constitute the core of the mechanism of norm compliance. They consist of Bayesian and reinforcement learning systems. The paper sketches why and how the concerted activity of these systems can generate norm compliance by minimizing three specific kinds of prediction errors.
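As a toy illustration of how two such prediction-error-driven building blocks could work in concert (the function names, priors, and parameter values below are assumptions made for the sake of the sketch, not claims from the paper):

```python
# Toy sketch of two prediction-error-driven systems of the kind the
# paper identifies: a Bayesian update of the belief that a norm is in
# force, and a reinforcement-learning update of the value of complying.

def bayes_update(p_norm, observed_compliance, likelihood=0.8):
    """Update P(norm in force) after observing another agent
    comply (True) or not (False) with the putative norm."""
    p_obs_norm = likelihood if observed_compliance else 1 - likelihood
    p_obs_no_norm = 0.5   # without a norm, compliance is at chance
    numerator = p_obs_norm * p_norm
    return numerator / (numerator + p_obs_no_norm * (1 - p_norm))

def rl_update(value, reward, alpha=0.2):
    """Update the learned value of complying from a reward-prediction error."""
    return value + alpha * (reward - value)

# Repeated observations of compliance drive both quantities up: the
# agent grows confident that a norm is in force, and learns that
# complying pays.
p_norm, value = 0.5, 0.0
for _ in range(10):
    p_norm = bayes_update(p_norm, observed_compliance=True)
    value = rl_update(value, reward=1.0)
```

Both updates are driven by a mismatch between what is predicted and what is observed, which is the sense in which norm compliance can be described as prediction-error minimization.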
There is widespread recognition at universities that a proper understanding of science is needed for all undergraduates. Good jobs are increasingly found in fields related to Science, Technology, Engineering, and Medicine, and science now enters almost all aspects of our daily lives. For these reasons, scientific literacy and an understanding of scientific methodology are a foundational part of any undergraduate education. Recipes for Science provides an accessible introduction to the main concepts and methods of scientific reasoning. With the help of an array of contemporary and historical examples, definitions, visual aids, and exercises for active learning, the textbook helps to increase students’ scientific literacy. The first part of the book covers the definitive features of science: naturalism, experimentation, modeling, and the merits and shortcomings of these activities. The second part covers the main forms of inference in science: deductive, inductive, abductive, probabilistic, statistical, and causal. The book concludes with a discussion of explanation, theorizing and theory-change, and the relationship between science and society. The textbook is designed to be adaptable to a wide variety of different kinds of courses. In any of these different uses, the book helps students better navigate our scientific, 21st-century world, and it lays the foundation for more advanced undergraduate coursework in a wide variety of liberal arts and science courses.
Selling Points
- Helps students develop scientific literacy—an essential aspect of any undergraduate education in the 21st century, including a broad understanding of scientific reasoning, methods, and concepts
- Written for all beginning college students: preparing science majors for more focused work in a particular science; introducing the humanities’ investigations of science; and helping non-science majors become more sophisticated consumers of scientific information
- Provides an abundance of both contemporary and historical examples
- Covers reasoning strategies and norms applicable in all fields of the physical, life, and social sciences, as well as strategies and norms distinctive of specific sciences
- Includes visual aids to clarify and illustrate ideas
- Provides text boxes with related topics and helpful definitions of key terms, and includes a final Glossary with all key terms
- Includes Exercises for Active Learning at the end of each chapter, which will ensure full student engagement and mastery of the information included earlier in the chapter
- Provides annotated ‘For Further Reading’ sections at the end of each chapter, guiding students to the best primary and secondary sources available
- Offers a Companion Website with: for students, direct links to many of the primary sources discussed in the text, student self-check assessments, a bank of exam questions, and ideas for extended out-of-class projects; for instructors, a password-protected Teacher’s Manual, which provides student exam questions with answers, extensive lecture notes, classroom-ready PowerPoint presentations, and sample syllabi; and extensive Curricular Development materials, helping any instructor who needs to create a Scientific Reasoning course ex nihilo
There continues to be significant confusion about the goals, scope, and nature of modelling practice in neuroeconomics. This article aims to dispel some such confusion by using one of the most recent critiques of neuroeconomic modelling as a foil. The article argues for two claims. First, currently, for at least some economic models of choice behaviour, the benefits derivable from neurally informing an economic model do not involve special tractability costs. Second, modelling in neuroeconomics is best understood within Marr’s three-levels-of-analysis framework and in light of a co-evolutionary research ideology. The first claim is established by elucidating the relationship between the tractability of a model, its descriptive accuracy, and its number of variables. The second claim relies on an explanation of what it can take to neurally inform an economic model of choice behaviour.
1 Introduction
2 Neurally Informed Models of Choice: A Case Study
2.1 A case study on risk-sensitive choice
2.1.1 Target and modelling framework
2.1.2 Research question and hypotheses
2.1.3 Competitive models of risk-sensitive behaviour
2.1.4 Model-based fMRI: from economics to brains and back
2.2 Neurally informed modelling
3 Tractability: When Does Size Matter?
4 Neural Integration and the Co-evolutionary Research Ideology
5 Conclusion
Mechanist philosophers have examined several strategies scientists use for discovering causal mechanisms in neuroscience. Findings about the anatomical organization of the brain play a central role in several such strategies. Little attention has been paid, however, to the use of network analysis and causal modeling techniques for mechanism discovery. In particular, mechanist philosophers have not explored whether and how these strategies incorporate information about the anatomical organization of the brain. This paper clarifies these issues in the light of the distinction between structural, functional and effective connectivity. Specifically, we examine two quantitative strategies currently used for causal discovery from functional neuroimaging data: dynamic causal modeling and probabilistic graphical modeling. We show that dynamic causal modeling uses findings about the brain’s anatomical organization to improve the statistical estimation of parameters in an already specified causal model of the target brain mechanism. Probabilistic graphical modeling, in contrast, makes no appeal to the brain’s anatomical organization, but lays bare the conditions under which correlational data suffice to license reliable inferences about the causal organization of a target brain mechanism. The question of whether findings about the anatomical organization of the brain can and should constrain the inference of causal networks remains open, but we show how the tools supplied by graphical modeling methods help to address it.
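The idea that purely correlational data can license causal conclusions, which probabilistic graphical modeling exploits, can be illustrated with a hypothetical three-variable example (the data are simulated for this sketch, not taken from the paper):

```python
# Toy illustration of a graphical-modeling idea: in a causal chain
# X -> Y -> Z, X and Z are correlated, but become (approximately)
# uncorrelated once Y is conditioned on. Such conditional
# independence patterns constrain which causal structures are
# compatible with the data.

import random

random.seed(0)
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.8 * xi + random.gauss(0, 1) for xi in x]
z = [0.8 * yi + random.gauss(0, 1) for yi in y]

def corr(a, b):
    """Pearson correlation of two equal-length samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

def partial_corr(a, b, c):
    """Correlation of a and b controlling for c."""
    rab, rac, rbc = corr(a, b), corr(a, c), corr(b, c)
    return (rab - rac * rbc) / (((1 - rac**2) * (1 - rbc**2)) ** 0.5)

print("corr(X, Z):", round(corr(x, z), 3))                  # clearly nonzero
print("partial corr(X, Z | Y):", round(partial_corr(x, z, y), 3))  # near zero
```

A chain X -> Y -> Z and a common cause X <- Y -> Z imply the same independence pattern here, which is why such inferences license a class of causal structures rather than a unique one; this is the kind of condition the paper describes graphical modeling as laying bare.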
The question of how judgments of explanatory value inform probabilistic inference is well studied within psychology and philosophy. Less studied are the questions: How does probabilistic information affect judgments of explanatory value? Does probabilistic information take precedence over causal information in determining explanatory judgments? To answer these questions, we conducted two experimental studies. In Study 1, we found that probabilistic information had a negligible impact on explanatory judgments of event-types with a potentially unlimited number of available alternative explanations; causal credibility was the main determinant of explanatory value. In Study 2, we found that, for event-token explanations with a definite set of candidate alternatives, probabilistic information strongly affected judgments of explanatory value. In the light of these findings, we reassess under which circumstances explanatory inference is probabilistically sound.
Despite the impressive amount of financial resources invested in carrying out large-scale brain simulations, it is controversial what the payoffs are of pursuing this project. The present paper argues that in some cases, from designing, building, and running a large-scale neural simulation, scientists acquire useful knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. What this means, why it is not a trivial lesson, and how it advances the literature on the epistemology of computer simulation are the three preoccupations addressed by the paper.
This is the only book that examines the theory and data on the development of implicit and explicit memory. It first describes the characteristics of implicit and explicit memory (including conscious recollection) and tasks used with adults to measure them. Next, it reviews the brain mechanisms thought to underlie implicit and explicit memory and the studies with amnesics that initially prompted the search for different neuroanatomically-based memory systems. Two chapters review the Jacksonian (first in, last out) principle and empirical evidence for the hierarchical appearance and dissolution of two memory systems in animal models (rats, nonhuman primates), children, and normal/amnesic adults. Two chapters examine memory tasks used with human infants and evidence of implicit and explicit memory during early infancy. Three final chapters consider structural and processing accounts of adult memory dissociations, their applicability to infant memory dissociations, and implications of infant data for current concepts of implicit and explicit memory.
How do other people’s opinions affect judgments of norm transgressions? In our study, we used a modification of the famous Asch paradigm to examine conformity in the moral domain. The question we addressed was how peer group opinion alters normative judgments of scenarios involving violations of moral, social, and decency norms. The results indicate that even moral norms are subject to conformity, especially in situations with a high degree of social presence. Interestingly, the degree of conformity can distinguish between different types of norms.
Abductive reasoning assigns special status to the explanatory power of a hypothesis. But how do people make explanatory judgments? Our study clarifies this issue by asking: How does the explanatory power of a hypothesis cohere with other cognitive factors? How does probabilistic information affect explanatory judgments? In order to answer these questions, we conducted an experiment with 671 participants. Their task was to make judgments about a potentially explanatory hypothesis and its cognitive virtues. In the responses, we isolated three constructs: Explanatory Value, Rational Acceptability, and Entailment. Explanatory judgments strongly cohered with judgments of causal relevance and with a sense of understanding. Furthermore, we found that Explanatory Value was sensitive to manipulations of statistical relevance relations between hypothesis and evidence, but not to explicit information about the prior probability of the hypothesis. These results indicate that probabilistic information about statistical relevance is a strong determinant of Explanatory Value. More generally, our study suggests that abductive and probabilistic reasoning are two distinct modes of inference.
In a recent Analysis piece, John Shand (2014) argues that the Predictive Theory of Mind (PTM) provides a unique explanation for why one cannot play chess against oneself. On the basis of this purported explanatory power, Shand concludes that we have an extra reason to believe that PTM is correct. In this reply, we first correct the claim that one cannot play chess against oneself; then we move on to argue that even if this were the case, Shand’s argument does not give extra weight to the Predictive Theory of Mind.
According to John Haugeland, the capacity for “authentic intentionality” depends on a commitment to constitutive standards of objectivity. One of the consequences of Haugeland’s view is that a neurocomputational explanation cannot be adequate to understand “authentic intentionality”. This paper gives grounds to resist such a consequence. It provides the beginning of an account of authentic intentionality in terms of neurocomputational enabling conditions. It argues that the standards, which constitute the domain of objects that can be represented, reflect the statistical structure of the environments where brain sensory systems evolved and develop. The objection that I equivocate on what Haugeland means by “commitment to standards” is rebutted by introducing the notion of “florid, self-conscious representing”. Were the hypothesis presented plausible, computational neuroscience would offer a promising framework for a better understanding of the conditions for meaningful representation.
Aboitiz et al. suggest that the mammalian isocortex is derived from the dorsal cortex of reptiles and birds, and that there has been a major divergence in the connectivity patterns (and hence function) of the mammalian and reptilian/avian hippocampus. There is considerable evidence to suggest, however, that the avian hippocampus serves the exact same function as the mammalian hippocampus.
Bryce Huebner’s Macrocognition is a book with a double mission. The first and main mission is “to show that there are cases of collective mentality in our world”. Cases of collective mentality are cases where groups, teams, mobs, firms, colonies or some other collectivities possess cognitive capacities or mental states in the same sense that we individually do. To accomplish this mission, Huebner develops an account of macrocognition, where “the term ‘macrocognition’ is intended as shorthand for the claim that system-level cognition is implemented by an integrated network of specialized computational mechanisms”. The second mission of Huebner’s book is to elaborate an account of cognitive architecture that could set the groundwork for identifying under what conditions groups, and indeed individuals, are fruitfully and justifiably said to be minded. To this end, Huebner tackles several foundational issues in cognitive science, including traditional philosophical questions …
The subject matter of this thesis can be summarized by a triplet of questions and answers. Showing what these questions and answers mean is, in essence, the goal of my project. The triplet goes like this:
Q: How can we make progress in our understanding of social norms and norm compliance?
A: Adopting a neurocomputational framework is one effective way to make progress in our understanding of social norms and norm compliance.
Q: What could the neurocomputational mechanism of social norm compliance be?
A: The mechanism of norm compliance probably consists of Bayesian and reinforcement-learning algorithms implemented by activity in certain neural populations.
Q: What could information about this mechanism tell us about social norms and social norm compliance?
A: Information about this mechanism tells us that: (a1) social norms are uncertainty-minimizing devices; (a2) social norm compliance is one trick that agents employ to interact coadaptively and smoothly in their social environment.
Most of the existing treatments of norms and norm compliance consist in what Cristina Bicchieri refers to as “rational reconstructions.” A rational reconstruction of the concept of social norm “specifies in which sense one may say that norms are rational, or compliance with a norm is rational”. What sets my project apart from these types of treatments is that it aims, first and foremost, at providing a description of some core aspects of the mechanism of norm compliance. The single most original idea put forth in my project is to bring an alternative explanatory framework to bear on social norm compliance. This is the framework of computational cognitive neuroscience. The chapters of this thesis describe some ways in which central issues concerning social norms can be fruitfully addressed within a neurocomputational framework.
In order to qualify and articulate the triplet above, my strategy is first to lay down the beginnings of a model of the mechanism of norm-compliance behaviour, and then to zoom in on specific aspects of that model. Such a model, the chapters of this thesis argue, explains important features of the psychology and neuroscience of norm compliance, and helps us understand the nature of the social norms we live by.
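As a rough illustration of what a Bayesian reinforcement-learning mechanism of this broad kind might look like, here is a minimal sketch in which an agent maintains a Beta-distributed belief about the probability that an action will be socially sanctioned and updates it from feedback, so that posterior uncertainty shrinks with experience, in the spirit of the claim that norms are uncertainty-minimizing devices. All class and parameter names here are illustrative assumptions of mine, not the thesis’s actual model.

```python
class NormLearner:
    """Toy Bayesian learner over P(sanction | action)."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(alpha, beta) prior over the probability of being sanctioned
        self.alpha = alpha
        self.beta = beta

    def update(self, sanctioned: bool) -> None:
        # Conjugate Bayesian update: each observation shifts the posterior
        if sanctioned:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def p_sanction(self) -> float:
        # Posterior mean estimate of the sanction probability
        return self.alpha / (self.alpha + self.beta)

    @property
    def uncertainty(self) -> float:
        # Posterior variance: the quantity that, on the abstract's reading,
        # norm compliance helps to drive down
        a, b = self.alpha, self.beta
        return (a * b) / ((a + b) ** 2 * (a + b + 1.0))


learner = NormLearner()
for outcome in [True, True, False, True]:  # observed sanction feedback
    learner.update(outcome)
print(round(learner.p_sanction, 3))  # posterior mean after 4 observations
```

After four observations the posterior variance is strictly smaller than under the uniform prior, which is the uncertainty-reduction dynamic the sketch is meant to exhibit; a fuller model would of course couple such beliefs to action selection and neural implementation.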
Explanation is a central concept in human psychology. Drawing upon philosophical theories of explanation, psychologists have recently begun to examine the relationship between explanation, probability, and causality. Our study advances this growing literature at the intersection of psychology and philosophy of science by systematically investigating how judgments of explanatory power are affected by the prior credibility of a potential explanation, the causal framing used to describe the explanation, the generalizability of the explanation, and its statistical relevance for the evidence. Collectively, the results of our five experiments support the hypothesis that the prior credibility of a causal explanation plays a central role in explanatory reasoning: first, because of the presence of strong main effects on judgments of explanatory power, and second, because of the gate-keeping role it has for other factors. Highly credible explanations were not susceptible to causal framing effects. Instead, highly credible hypotheses were sensitive to the effects of factors which are usually considered relevant from a normative point of view: the generalizability of an explanation, and its statistical relevance for the evidence. These results advance the current literature in the philosophy and psychology of explanation in three ways. First, they yield a more nuanced understanding of the determinants of judgments of explanatory power, and of the interactions between these factors. Second, they illuminate the close relationship between prior beliefs and explanatory power. Third, they clarify the relationship between abductive and probabilistic reasoning.
The relation between probabilistic and explanatory reasoning is a classical topic in the philosophy of science. Most philosophical analyses are concerned with the compatibility of Inference to the Best Explanation with probabilistic, Bayesian inference, and with the impact of explanatory considerations on the assignment of subjective probabilities. This paper reverses the question and asks how causal and explanatory considerations are affected by probabilistic information. We investigate how probabilistic information determines the explanatory value of a hypothesis, and in which sense folk explanatory practice can be said to be rational. Our study identifies three main factors in reasoning about an explanatory hypothesis: cognitive salience, rational acceptability, and logical entailment. This corresponds well to the variety of philosophical accounts of explanation. Moreover, we show that these factors are highly sensitive to manipulations of probabilistic information. This finding suggests that probabilistic reasoning is a crucial part of explanatory inference, and it motivates new avenues of research in the debate about Inference to the Best Explanation and probabilistic measures of explanatory power.
Intellectual humility has attracted attention in both philosophy and psychology. Philosophers have clarified the nature of intellectual humility as an epistemic virtue, and psychologists have developed scales for measuring people’s intellectual humility. Much less attention has been paid to the potential effects of intellectual humility on people’s negative attitudes, or to its relationship with prejudice-based epistemic vices. Here we fill these gaps by focusing on the relationship between intellectual humility and prejudice. To clarify this relationship, we conducted four empirical studies. The results of these studies show three things. First, people are systematically prejudiced towards members of groups perceived as dissimilar. Second, intellectual humility weakens the association between perceived dissimilarity and prejudice. Third, more intellectual humility is associated with more prejudice overall. We show that this apparently paradoxical pattern of results is consistent with the idea that it is both psychologically and rationally plausible for one person to be at the same time intellectually humble, epistemically virtuous, and strongly prejudiced.