The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.
Probabilistic models have much to offer to philosophy. We continually receive information from a variety of sources: from our senses, from witnesses, from scientific instruments. When considering whether we should believe this information, we assess whether the sources are independent, how reliable they are, and how plausible and coherent the information is. Bovens and Hartmann provide a systematic Bayesian account of these features of reasoning. Simple Bayesian Networks allow us to model alternative assumptions about the nature of the information sources. Measurement of the coherence of information is a controversial matter: arguably, the more coherent a set of information is, the more confident we may be that its content is true, other things being equal. The authors offer a new treatment of coherence which respects this claim and shows its relevance to scientific theory choice. Bovens and Hartmann apply this methodology to a wide range of much discussed issues regarding evidence, testimony, scientific theories, and voting. Bayesian Epistemology is an essential tool for anyone working on probabilistic methods in philosophy, and has broad implications for many other disciplines.
If a group is modelled as a single Bayesian agent, what should its beliefs be? I propose an axiomatic model that connects group beliefs to beliefs of group members, who are themselves modelled as Bayesian agents, possibly with different priors and different information. Group beliefs are proven to take a simple multiplicative form if people’s information is independent, and a more complex form if information overlaps arbitrarily. This shows that group beliefs can incorporate all information spread over the individuals without the individuals having to communicate their (possibly complex and hard-to-describe) private information; communicating prior and posterior beliefs suffices. JEL classification: D70, D71.
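The multiplicative form for independent information can be sketched in a few lines. This is a toy illustration with our own numbers and a simple two-hypothesis setting, not the paper's formal model: each member's contribution is her posterior-to-prior ratio, and the group combines these ratios on the shared prior.

```python
def group_belief(prior, posteriors):
    """Multiplicative pooling for two exhaustive hypotheses H and not-H.

    Each agent starts from the shared prior and independently reaches a
    posterior; the group multiplies every agent's update factor
    (posterior / prior) onto the common prior and renormalizes.
    """
    weight_h = prior
    weight_not_h = 1.0 - prior
    for q in posteriors:
        weight_h *= q / prior
        weight_not_h *= (1.0 - q) / (1.0 - prior)
    return weight_h / (weight_h + weight_not_h)

# A single agent's report just reproduces her posterior:
solo = group_belief(0.5, [0.8])          # 0.8
# Two agents with independent information, each at 0.8, pool to more:
pooled = group_belief(0.5, [0.8, 0.8])   # ~0.941
```

Note how the pooled belief exceeds either individual's: independent pieces of evidence reinforce each other, exactly the effect the axiomatic result captures without any exchange of private information.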
Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly pragmatic in character and are often deployed unsystematically, Bayesian reverse-engineering avoids several important worries that have been raised about the explanatory credentials of Bayesian cognitive science: the worry that the lower levels of analysis are being ignored altogether; the challenge that the mathematical models being developed are unfalsifiable; and the charge that the terms ‘optimal’ and ‘rational’ have lost their customary normative force. But while Bayesian reverse-engineering is therefore a viable and productive research strategy, it is also no fool-proof recipe for explanatory success.
I develop a conception of expressivism according to which it is chiefly a pragmatic thesis about some fragment of discourse, one imposing certain constraints on semantics. The first half of the paper uses credal expressivism about the language of probability as a stalking-horse for this purpose. The second half turns to the question of how one might frame an analogous form of expressivism about the language of deontic modality. Here I offer a preliminary comparison of two expressivist lines. The first, expectation expressivism, looks again to Bayesian modelling for inspiration: it glosses deontically modal language as characteristically serving to express decision-theoretic expectation (expected utility). The second, plan expressivism, develops the idea (due to Gibbard 2003) that this language serves to express 'plan-laden' states of belief. In the process of comparing the views, I show how to incorporate Gibbard's modelling ideas into a compositional semantics for attitudes and modals, filling a lacuna in the account. I close with the question whether and how plan expressivism might be developed with expectation-like structure.
Are people rational? This question was central to Greek thought and has been at the heart of psychology and philosophy for millennia. This book provides a radical and controversial reappraisal of conventional wisdom in the psychology of reasoning, proposing that the Western conception of the mind as a logical system is flawed at the very outset. It argues that cognition should be understood in terms of probability theory, the calculus of uncertain reasoning, rather than in terms of logic, the calculus of certain reasoning.
Jan Sprenger and Stephan Hartmann offer a fresh approach to central topics in philosophy of science, including causation, explanation, evidence, and scientific models. Their Bayesian approach uses the concept of degrees of belief to explain and to elucidate manifold aspects of scientific reasoning.
Scientific reasoning is—and ought to be—conducted in accordance with the axioms of probability. This Bayesian view—so called because of the central role it accords to a theorem first proved by Thomas Bayes in the late eighteenth ...
Bayesian nets are widely used in artificial intelligence as a calculus for causal reasoning, enabling machines to make predictions, perform diagnoses, take decisions and even to discover causal relationships. This book, aimed at researchers and graduate students in computer science, mathematics and philosophy, brings together two important research topics: how to automate reasoning in artificial intelligence, and the nature of causality and probability in philosophy.
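The kind of calculation such networks automate can be sketched with a toy two-cause network and inference by enumeration. The structure and numbers below are our own illustrative choices, not from the book:

```python
from itertools import product

# A minimal Bayesian network: Rain and Sprinkler are independent
# causes of WetGrass (illustrative probabilities).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    """Joint probability of one full assignment, from the network factors."""
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1.0 - pw)

def posterior_rain_given_wet():
    """Inference by enumeration: sum the joint over the hidden variable."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

posterior = posterior_rain_given_wet()  # ~0.645: wet grass makes rain far more credible
```

Enumeration is exponential in the number of variables, which is why practical systems use the smarter propagation and sampling algorithms the book discusses; but the diagnostic pattern, reasoning from an observed effect back to its probable cause, is already visible here.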
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inference. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward a more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language’s no-U-turn (NUTS) sampler. The package combines the ability to construct Bayesian network models using directed acyclic graphs (DAGs), the Markov chain Monte Carlo (MCMC) simulation technique, and the graphic capability of the ggplot2 package. As a result, it can improve the user experience and intuitive understanding when constructing and analyzing Bayesian network models. A case example is offered to illustrate the usefulness of the package for Big Data analytics and cognitive computing.
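The MCMC idea behind samplers like Stan's NUTS can be illustrated with the simplest member of the family, a random-walk Metropolis sampler. This is a conceptual sketch only, with our own toy model; it is not the bayesvl or Stan API, and NUTS itself is a far more efficient gradient-based variant of the same idea:

```python
import math
import random

random.seed(0)

# Toy model: Bernoulli likelihood with a uniform prior on theta.
# The exact posterior is Beta(7, 3), whose mean is 0.7.
data = [1, 1, 0, 1, 1, 0, 1, 1]  # 6 successes, 2 failures

def log_posterior(theta):
    if not 0.0 < theta < 1.0:
        return float("-inf")  # zero prior density outside (0, 1)
    k, n = sum(data), len(data)
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

theta, samples = 0.5, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.1)
    # Accept with probability min(1, posterior ratio); otherwise stay put.
    if random.random() < math.exp(min(0.0, log_posterior(proposal) - log_posterior(theta))):
        theta = proposal
    samples.append(theta)

posterior_mean = sum(samples[2000:]) / len(samples[2000:])  # close to 0.7
```

The chain's long-run frequencies approximate the posterior, so summaries like the mean fall out of simple averages over the (post-burn-in) samples; packages such as bayesvl wrap this machinery and add the DAG construction and ggplot2 diagnostics the abstract describes.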
Bayesian epistemology addresses epistemological problems with the help of the mathematical theory of probability. It turns out that the probability calculus is especially suited to represent degrees of belief (credences) and to deal with questions of belief change, confirmation, evidence, justification, and coherence. Compared to the informal discussions in traditional epistemology, Bayesian epistemology allows for a more precise and fine-grained analysis which takes the gradual aspects of these central epistemological notions into account. Bayesian epistemology therefore complements traditional epistemology; it does not replace it or aim at replacing it.
Epistemologists and philosophers of science have often attempted to express formally the impact of a piece of evidence on the credibility of a hypothesis. In this paper we will focus on the Bayesian approach to evidential support. We will propose a new formal treatment of the notion of degree of confirmation and we will argue that it overcomes some limitations of the currently available approaches on two grounds: (i) a theoretical analysis of the confirmation relation seen as an extension of logical deduction and (ii) an empirical comparison of competing measures in an experimental inquiry concerning inductive reasoning in a probabilistic setting.
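Degree-of-confirmation measures are simple functions of a prior and two likelihoods. The following sketch computes three standard measures from the confirmation literature (Carnap/Eells difference, log-ratio, log-likelihood-ratio); these are illustrations of the genre, not the new measure this paper itself proposes, and the numbers are our own:

```python
import math

def confirmation_measures(p_h, p_e_given_h, p_e_given_not_h):
    """Three standard Bayesian measures of how much evidence E confirms H."""
    p_e = p_h * p_e_given_h + (1.0 - p_h) * p_e_given_not_h
    p_h_given_e = p_h * p_e_given_h / p_e
    return {
        "difference": p_h_given_e - p_h,                                  # Carnap/Eells
        "log_ratio": math.log(p_h_given_e / p_h),                         # Keynes/Milne
        "log_likelihood_ratio": math.log(p_e_given_h / p_e_given_not_h),  # Good
    }

# E is four times likelier under H than under not-H, so every measure
# registers positive confirmation (they differ in scale, not in sign):
m = confirmation_measures(0.3, 0.8, 0.2)
```

That the measures always agree in sign but diverge in ordering and magnitude is precisely what makes the choice among them substantive, and what both the theoretical and the experimental comparisons in the paper are about.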
Bayesian Epistemology is a general framework for thinking about agents who have beliefs that come in degrees. Theories in this framework give accounts of rational belief and rational belief change, which share two key features: (i) rational belief states are represented with probability functions, and (ii) rational belief change results from the acquisition of evidence. This dissertation focuses specifically on the second feature. I pose the Evidence Question: What is it to have evidence? Before addressing this question we must have an understanding of Bayesian Epistemology. The first chapter argues that we should understand Bayesian Epistemology as giving us theories that are evaluative and not action-guiding. I reach this verdict after considering the popular ‘ought’-implies-‘can’ objection to Bayesian Epistemology. The second chapter argues that it is important for theories in Bayesian Epistemology to answer the Evidence Question, and distinguishes between internalist and externalist answers. The third and fourth chapters present and defend a specific answer to the Evidence Question. The account is inspired by reliabilist accounts of justification, and attempts to understand what it is to have evidence by appealing solely to considerations of reliability. Chapter 3 explains how to understand reliability, and how the account fits with Bayesian Epistemology, in particular, the requirement that an agent’s evidence receive probability 1. Chapter 4 responds to objections, which maintain that the account gives the wrong verdict in a variety of situations including skeptical scenarios, lottery cases, scientific cases, and cases involving inference. After slight modifications, I argue that my account has the resources to answer the objections. The fifth chapter considers the possibility of losing evidence. I show how my account can model these cases. To do so, however, we require a modification to Conditionalization, the orthodox principle governing belief change.
I present such a modification. The sixth and seventh chapters propose a new understanding of Dutch Book Arguments, historically important arguments for Bayesian principles. The proposal shows that the Dutch Book Arguments for implausible principles are defective, while the ones for plausible principles are not. The final chapter is a conclusion.
‘Bayesian epistemology’ became an epistemological movement in the 20th century, though its two main features can be traced back to the eponymous Reverend Thomas Bayes (c. 1701-61). Those two features are: (1) the introduction of a formal apparatus for inductive logic; (2) the introduction of a pragmatic self-defeat test (as illustrated by Dutch Book Arguments) for epistemic rationality as a way of extending the justification of the laws of deductive logic to include a justification for the laws of inductive logic. The formal apparatus itself has two main elements: the use of the laws of probability as coherence constraints on rational degrees of belief (or degrees of confidence) and the introduction of a rule of probabilistic inference, a rule or principle of conditionalization.
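The principle of conditionalization mentioned here is easy to state computationally: upon learning E with certainty, the agent's new credence in H becomes her old conditional credence P(H|E), obtained via Bayes' theorem. A minimal sketch with illustrative numbers of our own:

```python
def conditionalize(prior_h, p_e_given_h, p_e_given_not_h):
    """Strict conditionalization: the new credence in H is P(H | E),
    computed from the prior and the two likelihoods via Bayes' theorem."""
    p_e = prior_h * p_e_given_h + (1.0 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# An agent with credence 1/2 in H learns E, which H makes twice as likely
# as not-H does; her new credence rises to 2/3:
updated = conditionalize(0.5, 0.6, 0.3)
```

The coherence constraints in element one guarantee the inputs form a genuine probability function; conditionalization, element two, then fixes how that function must evolve under new evidence.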
Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning in pain that inherently accommodate uncertainty and action, which, we shall propose, are essential in understanding learning in both acute and persistent cases of pain.
The Paradox of the Ravens (a.k.a., the Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
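The Bayesian treatments surveyed here turn on likelihood ratios. A toy finite world (our own illustrative numbers, not the paper's model) exhibits the canonical quantitative resolution: both a black raven and a non-black non-raven confirm "all ravens are black," but the raven does so far more strongly:

```python
# A toy finite world: 1000 objects, 10 of them ravens.
# H:     all 10 ravens are black.
# not-H: 9 black ravens and 1 white raven.
# In both worlds the 990 non-ravens comprise 100 black and 890 non-black objects.

def likelihood_ratio(p_obs_given_h, p_obs_given_not_h):
    """How much more probable the observation is under H than under not-H."""
    return p_obs_given_h / p_obs_given_not_h

# Draw a random object and observe a black raven:
lr_black_raven = likelihood_ratio(10 / 1000, 9 / 1000)  # 10/9: clear confirmation

# Draw a random non-black object and observe that it is a non-raven: under H
# all 890 non-black objects are non-ravens; under not-H the white raven
# makes it 890 out of 891.
lr_nonblack_nonraven = likelihood_ratio(890 / 890, 890 / 891)  # 891/890: barely above 1
```

A likelihood ratio above 1 means the observation confirms H, so the "paradoxical" shoe-like observation does confirm, just negligibly: this asymmetry is what the canonical Bayesian assumptions deliver, and what the paper's weaker assumptions aim to secure more robustly.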
A Bayesian account of the virtue of unification is given. On this account, the ability of a theory to unify disparate phenomena consists in the ability of the theory to render such phenomena informationally relevant to each other. It is shown that such ability contributes to the evidential support of the theory, and hence that preference for theories that unify the phenomena need not, on a Bayesian account, be built into the prior probabilities of theories.
It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal–mechanical explanation. Outline: 1. Introduction; 2. What a Great Many Phenomena Bayesian Decision Theory Can Model; 3. The Case of Information Integration; 4. How Do Bayesian Models Unify?; 5. Bayesian Unification: What Constraints Are There on Mechanistic Explanation? (5.1 Unification constrains mechanism discovery; 5.2 Unification constrains the identification of relevant mechanistic factors; 5.3 Unification constrains confirmation of competitive mechanistic models); 6. Conclusion; Appendix.
Sensorimotor psychology studies the mental processes that control goal-directed bodily motion. Recently, sensorimotor psychologists have provided empirically successful Bayesian models of motor control. These models describe how the motor system uses sensory input to select motor commands that promote goals set by high-level cognition. I highlight the impressive explanatory benefits offered by Bayesian models of motor control. I argue that our current best models assign explanatory centrality to a robust notion of mental representation. I deploy my analysis to defend intentional realism, to rebut eliminativism and instrumentalism regarding mental representation, and to explore the relation between intentionality and normativity.
According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that utilizes a new class of Bayesian learning methods that are better suited to modelling dynamic and conditional inferences than standard Bayesian conditionalization, is able to characterise the special value of logically valid argument schemes in uncertain reasoning contexts, greatly extends the range of inferences and argumentative phenomena that can be adequately described in a Bayesian framework, and undermines some influential theoretical motivations for dual function models of human cognition. We conclude that the probabilistic norms given by the Bayesian approach to rationality are not necessarily at odds with the norms given by classical logic. Rather, the Bayesian theory of argumentation can be seen as justifying and enriching the argumentative norms of classical logic.
Bayesian reasoning has been applied formally to statistical inference, machine learning and analysing scientific method. Here I apply it informally to more common forms of inference, namely natural language arguments. I analyse a variety of traditional fallacies, deductive, inductive and causal, and find more merit in them than is generally acknowledged. Bayesian principles provide a framework for understanding ordinary arguments which is well worth developing.
Even if our justified beliefs are closed under known entailment, there may still be instances of transmission failure. Transmission failure occurs when P entails Q, but a subject cannot acquire a justified belief that Q by deducing it from P. Paradigm cases of transmission failure involve inferences from mundane beliefs (e.g., that the wall in front of you is red) to the denials of skeptical hypotheses relative to those beliefs (e.g., that the wall in front of you is not white and lit by red lights). According to the Bayesian explanation, transmission failure occurs when (i) the subject’s belief that P is based on E, and (ii) P(Q|E) ≤ P(Q). No modifications of the Bayesian explanation are capable of accommodating such cases, so the explanation must be rejected as inadequate. Alternative explanations employing simple subjunctive conditionals are fully capable of capturing all of the paradigm cases, as well as those missed by the Bayesian explanation.
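Condition (ii) of the Bayesian explanation can be exhibited concretely. The following toy credal state (our own illustrative numbers) models the red-wall paradigm case: the evidence supports P, P entails Q, and yet conditioning on the evidence lowers the probability of Q:

```python
# Three coarse-grained worlds for the red-wall case.
# E = "the wall looks red", P = "the wall is red",
# Q = "the wall is not white-and-lit-by-red-lights". Note that P entails Q.
worlds = {
    "red_wall_normal_light": {"prob": 0.90, "E": True,  "P": True,  "Q": True},
    "white_wall_red_light":  {"prob": 0.02, "E": True,  "P": False, "Q": False},
    "white_wall_normal":     {"prob": 0.08, "E": False, "P": False, "Q": True},
}

def prob(pred):
    """Probability of the set of worlds satisfying the predicate."""
    return sum(w["prob"] for w in worlds.values() if pred(w))

p_p_given_e = prob(lambda w: w["P"] and w["E"]) / prob(lambda w: w["E"])  # 0.90/0.92
p_q = prob(lambda w: w["Q"])                                              # 0.98
p_q_given_e = prob(lambda w: w["Q"] and w["E"]) / prob(lambda w: w["E"])  # ~0.978
# E raises the credibility of P yet lowers that of Q: condition (ii) holds,
# so on the Bayesian explanation the deduction from P to Q cannot
# transmit E-based justification.
```

The intuition behind the numbers: looking red is exactly the experience the red-light skeptical scenario predicts, so that experience cannot count against the scenario even while it supports the mundane belief.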
There is a certain excitement in vision science concerning the idea of applying the tools of Bayesian decision theory to explain our perceptual capacities. Bayesian models are thought to be needed to explain how the inverse problem of perception is solved, and to rescue a certain constructivist and Kantian way of understanding the perceptual process. Anticlimactically, I argue both that Bayesian outlooks do not constitute good solutions to the inverse problem, and that they are not constructivist in nature. In explaining how visual systems derive a single percept from underdetermined stimulation, orthodox versions of Bayesian accounts encounter a problem. The problem shows that such accounts need to be grounded in Natural Scene Statistics, an approach that takes seriously the Gibsonian insight that studying perception involves studying the statistical regularities of the environment in which we are situated. Additionally, I argue that Bayesian frameworks postulate structures that hardly rescue a constructivist way of understanding perception. Except for percepts, the posits of Bayesian theory are not representational in nature. Bayesian perceptual inferences are not genuine inferences. They are biased processes that operate over nonrepresentational states.
This article proposes a formal model that integrates cognitive and psychodynamic psychotherapeutic models of psychopathy to show how two major psychopathic traits, lack of remorse and self-aggrandizement, can be understood as a form of abnormal Bayesian inference about the self. This model draws on the predictive coding (i.e., active inference) framework, a neurobiologically plausible explanatory framework for message passing in the brain that is formalized in terms of hierarchical Bayesian inference. In summary, this model proposes that these two cardinal psychopathic traits reflect entrenched maladaptive Bayesian inferences about the self, which defend against the experience of deep-seated, self-related negative emotions, specifically shame and worthlessness. Support for the model is provided by extant research on the neurobiology of psychopathy and by quantitative simulations. Finally, we offer a preliminary overview of a novel treatment for psychopathy that rests on our Bayesian formulation.
A piece of folklore enjoys some currency among philosophical Bayesians, according to which Bayesian agents that, intuitively speaking, spread their credence over the entire space of available hypotheses are certain to converge to the truth. The goals of the present discussion are to show that the kernel of truth in this folklore is in some ways fairly small and to argue that Bayesian convergence-to-the-truth results are a liability for Bayesianism as an account of rationality, since they render a certain sort of arrogance rationally mandatory.
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.
It is argued that the high degree of trust in the Higgs particle before its discovery raises the question of a Bayesian perspective on data analysis in high energy physics in an interesting way that differs from other suggestions regarding the deployment of Bayesian strategies in the field.
A widely shared view in the cognitive sciences is that discovering and assessing explanations of cognitive phenomena whose production involves uncertainty should be done in a Bayesian framework. One assumption supporting this modelling choice is that Bayes provides the best approach for representing uncertainty. However, it is unclear that Bayes possesses special epistemic virtues over alternative modelling frameworks, since a systematic comparison has yet to be attempted. It is currently premature to assert that cognitive phenomena involving uncertainty are best explained within the Bayesian framework. As a forewarning, progress in cognitive science may be hindered if too many scientists continue to focus their efforts on Bayesian modelling, which risks monopolizing scientific resources that may be better allocated to alternative approaches.
The detailed analysis of a particular quasi-historical numerical example is used to illustrate the way in which a Bayesian personalist approach to scientific inference resolves the Duhemian problem of which of a conjunction of hypotheses to reject when they jointly yield a prediction which is refuted. Numbers intended to be approximately historically accurate for my example show, in agreement with the views of Lakatos, that a refutation need have astonishingly little effect on a scientist's confidence in the ‘hard core’ of a successful research programme even when a comparable confirmation would greatly enhance that confidence. Timeo Danaos et dona ferentis.
Bayesianism is our leading theory of uncertainty. Epistemology is defined as the theory of knowledge. So “Bayesian Epistemology” may sound like an oxymoron. Bayesianism, after all, studies the properties and dynamics of degrees of belief, understood to be probabilities. Traditional epistemology, on the other hand, places the singularly non-probabilistic notion of knowledge at centre stage, and to the extent that it traffics in belief, that notion does not come in degrees. So how can there be a Bayesian epistemology?
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
It is unclear how children learn labels for multiple overlapping categories such as “Labrador,” “dog,” and “animal.” Xu and Tenenbaum suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and adults. Here, we report data testing a developmental prediction of the Bayesian model—that more knowledge should lead to narrower category inferences when presented with multiple subordinate exemplars. Two experiments did not support this prediction. Children with more category knowledge showed broader generalization when presented with multiple subordinate exemplars, compared to less knowledgeable children and adults. This implies a U-shaped developmental trend. The Bayesian model was not able to account for these data, even with inputs that reflected the similarity judgments of children. We discuss implications for the Bayesian model, including a combined Bayesian/morphological knowledge account that could explain the demonstrated U-shaped trend.
This paper examines the standard Bayesian solution to the Quine–Duhem problem, the problem of distributing blame between a theory and its auxiliary hypotheses in the aftermath of a failed prediction. The standard solution, I argue, begs the question against those who claim that the problem has no solution. I then provide an alternative Bayesian solution that is not question-begging and that turns out to have some interesting and desirable properties not possessed by the standard solution. This solution opens the way to a satisfying treatment of a problem concerning ad hoc auxiliary hypotheses.
We examine in detail three classic reasoning fallacies, that is, supposedly 'incorrect' forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to match structurally arguments which are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of those examples with which they are typically justified. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
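The Bayesian reanalysis of the argument from ignorance can be given a minimal quantitative form. In the sketch below (the drug-testing scenario and all numbers are our own illustration), the force of "no evidence of toxicity was found, so the drug is safe" depends on how likely the test would be to detect toxicity if it were present:

```python
def p_toxic_given_negative(prior, sensitivity, false_positive=0.0):
    """Posterior probability of toxicity after a test finds no toxic effects.

    `sensitivity` is the chance the test detects toxicity when it is present;
    a negative result from an insensitive test is weak evidence of safety.
    """
    p_negative = (prior * (1.0 - sensitivity)
                  + (1.0 - prior) * (1.0 - false_positive))
    return prior * (1.0 - sensitivity) / p_negative

weak = p_toxic_given_negative(0.5, sensitivity=0.20)    # ~0.44: belief barely moves
strong = p_toxic_given_negative(0.5, sensitivity=0.95)  # ~0.05: a forceful argument
```

On this reading the argument form is never fallacious as such: the same absence of evidence is negligible or compelling depending on the content, here the probability that evidence would have turned up if the claim were true, which is exactly the diagnosis the abstract describes.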
Viewing the brain as an organ of approximate Bayesian inference can help us understand how it represents the self. We suggest that inferred representations of the self have a normative function: to predict and optimise the likely outcomes of social interactions. Technically, we cast this predict-and-optimise as maximising the chance of favourable outcomes through active inference. Here the utility of outcomes can be conceptualised as prior beliefs about final states. Actions based on interpersonal representations can therefore be understood as minimising surprise – under the prior belief that one will end up in states with high utility. Interpersonal representations thus serve to render interactions more predictable, while the affective valence of interpersonal inference renders self-perception evaluative. Distortions of self-representation contribute to major psychiatric disorders such as depression, personality disorder and paranoia. The approach we review may therefore operationalise the study of interpersonal representations in pathological states.
In several papers, John Norton has argued that Bayesianism cannot handle ignorance adequately due to its inability to distinguish between neutral and disconfirming evidence. He argued that this inability sows confusion in, e.g., anthropic reasoning in cosmology or the Doomsday argument, by allowing one to draw unwarranted conclusions from a lack of knowledge. Norton has suggested criteria for a candidate for representation of neutral support. Imprecise credences (families of credal probability functions) constitute a Bayesian-friendly framework that allows us to avoid inadequate neutral priors and better handle ignorance. The imprecise model generally agrees with Norton's representation of ignorance but requires that his criterion of self-duality be reformulated or abandoned.
Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus deciding which experiments to carry out. Interestingly, Bayesian confirmation theorists rule out that confirmation be used for this purpose. We conclude that Bayesian confirmation is a means with no end. Outline: 1. Introduction; 2. Bayesian Confirmation Theory; 3. Bayesian Confirmation and Belief; 4. Confirmation and the Value of Experiments; 5. Conclusion.
Bayesian models are often criticized for postulating computations that are computationally intractable (e.g., NP-hard) and therefore implausible for our resource-bounded minds/brains to perform. Our letter is motivated by the observation that Bayesian modelers have been claiming that they can counter this charge of “intractability” by proposing that Bayesian computations can be tractably approximated. We would like to make the cognitive science community aware of the problematic nature of such claims. We cite mathematical proofs from the computer science literature showing that intractable Bayesian computations, such as those postulated in existing Bayesian models, cannot be tractably approximated. This does not mean that human brains do not (or cannot) implement the type of algorithms that Bayesian modelers are advancing, but it does mean that proposing that they do does nothing, by itself, to parry the charge of intractability, because the postulated algorithms are as intractable (i.e., require exponential time) as the computations they are meant to approximate. Besides this negative message for the community, our letter also makes a positive contribution by describing a methodology that Bayesian modelers can use to parry the charge of intractability in a mathematically sound way.
Say that an agent is "epistemically humble" if she is less than certain that her opinions will converge to the truth, given an appropriate stream of evidence. Is such humility rationally permissible? According to the orgulity argument, the answer is "yes," but long-run convergence-to-the-truth theorems force Bayesians to answer "no." That argument has no force against Bayesians who reject countable additivity as a requirement of rationality. Such Bayesians are free to count even extreme humility as rationally permissible.
Various sexist and racist beliefs ascribe certain negative qualities to people of a given sex or race. Epistemic allies are people who think that in normal circumstances rationality requires the rejection of such sexist and racist beliefs upon learning of many counter-instances, i.e., members of these groups who lack the target negative quality. Accordingly, epistemic allies think that those who give up their sexist or racist beliefs in such circumstances are rationally responding to their evidence, while those who do not are irrational in failing to respond to their evidence by giving up their belief. This is a common view among philosophers and non-philosophers. But epistemic allies face three problems. First, sexist and racist beliefs often involve generic propositions. These sorts of propositions are notoriously resilient in the face of counter-instances, since the truth of generic propositions is typically compatible with the existence of many counter-instances. Second, background beliefs can enable one to explain away counter-instances to one’s beliefs. So even when counter-instances might otherwise constitute strong evidence against the truth of the generic, the ability to explain the counter-instances away with relevant background beliefs can make it rational to retain one’s belief in the generic despite the existence of many counter-instances. The final problem is that the kinds of judgements epistemic allies want to make about the irrationality of sexist and racist beliefs upon encountering many counter-instances are at odds with the judgements we are inclined to make in seemingly parallel cases about the rationality of non-sexist and non-racist generic beliefs. Thus epistemic allies may end up having to give up on plausible normative supervenience principles. Altogether, these problems pose a significant prima facie challenge to epistemic allies. In what follows I explain how a Bayesian approach to the relation between evidence and belief can neatly untie these knots. The basic story is one of defeat: Bayesianism explains when one is required to become increasingly confident in chance propositions, and confidence in chance propositions can make belief in corresponding generics irrational.
According to the comparative Bayesian concept of confirmation, rationalized versions of creationism come out as empirically confirmed. From a scientific viewpoint, however, they are pseudo-explanations, because with their help all kinds of experiences can be explained in an ex-post fashion, by way of ad-hoc fitting of an empirically empty theoretical framework to the given evidence. An alternative concept of confirmation that attempts to capture this intuition is the use-novelty (UN) criterion of confirmation. Serious objections have been raised against this criterion. In this paper I suggest solutions to these objections. Building on these solutions, I develop an account of genuine confirmation that unifies the UN-criterion with a refined probabilistic confirmation concept, explicated in terms of the confirmation of evidence-transcending content parts of the hypothesis.
Occam's razor—the idea that, all else being equal, we should pick the simpler hypothesis—plays a prominent role in ordinary and scientific inference. But why are simpler hypotheses better? One attractive answer, known as the Bayesian Occam's razor (BOR), is that more complex hypotheses tend to be more flexible—they can accommodate a wider range of possible data—and that this flexibility is automatically penalized by Bayesian inference. In two experiments, we provide evidence that people's intuitive probabilistic and explanatory judgments follow the prescriptions of BOR. In particular, people's judgments are consistent with the two most distinctive characteristics of BOR: they penalize hypotheses as a function not only of their number of free parameters but also of the size of their parameter space, and they penalize those hypotheses even when their parameters can be “tuned” to fit the data better than comparatively simpler hypotheses.
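The automatic flexibility penalty that BOR describes can be sketched with a standard conjugate coin example; the example and its numbers are our own illustration, not the stimuli from the experiments reported in the paper. A zero-parameter "fair coin" hypothesis is compared with a maximally flexible "unknown bias" hypothesis via their marginal likelihoods.

```python
import math

def marginal_likelihood_fair(heads, tails):
    # H_simple: fair coin, no free parameters.
    # P(sequence | H_simple) = 0.5 ** n
    return 0.5 ** (heads + tails)

def marginal_likelihood_uniform(heads, tails):
    # H_complex: bias p unknown, uniform prior over the whole parameter space.
    # P(sequence | H_complex) = integral of p^h (1-p)^t dp = Beta(h+1, t+1)
    return math.gamma(heads + 1) * math.gamma(tails + 1) / math.gamma(heads + tails + 2)

# Balanced data (exactly what the fair coin predicts): the simpler
# hypothesis wins, because the flexible one spreads its probability
# mass over many possible data sets it could have accommodated.
print(marginal_likelihood_fair(5, 5))     # 1/1024 ≈ 0.000977
print(marginal_likelihood_uniform(5, 5))  # 1/2772 ≈ 0.000361

# Skewed data: now the flexible hypothesis can "tune" its parameter
# to the data, and its marginal likelihood overtakes the fair coin's.
print(marginal_likelihood_fair(9, 1))     # 1/1024 ≈ 0.000977
print(marginal_likelihood_uniform(9, 1))  # 1/110  ≈ 0.00909
```

The penalty falls out of averaging the likelihood over the prior rather than maximizing it, which is why BOR operates even when the complex hypothesis's best-tuned fit is better than the simple hypothesis's fit.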
We appeal to the theory of Bayesian networks to model different strategies for obtaining confirmation for a hypothesis from experimental test results provided by less than fully reliable instruments. In particular, we consider (i) repeated measurements of a single test consequence of the hypothesis, (ii) measurements of multiple test consequences of the hypothesis, (iii) theoretical support for the reliability of the instrument, and (iv) calibration procedures. We evaluate these strategies on their relative merits under idealized conditions and show some surprising repercussions for the variety-of-evidence thesis and the Duhem-Quine thesis.
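A minimal version of strategies (i) and (ii) can be sketched by exact enumeration in a small Bayesian network (hypothesis, two test consequences, noisy instrument reports). All conditional probabilities below are our own illustrative choices, not the paper's models, and nothing here reproduces the paper's actual results.

```python
from itertools import product

# Illustrative parameters (our own assumptions):
P_H = 0.5                         # prior on hypothesis H
P_C_GIVEN_H = {1: 0.9, 0: 0.2}    # P(consequence Ci = 1 | H)
RELIABILITY = 0.8                 # instrument reports the true value of Ci with this probability

def p_report(report, c):
    # Symmetric instrument noise: report equals c with probability RELIABILITY.
    return RELIABILITY if report == c else 1 - RELIABILITY

def posterior_H(reports_on_c1, reports_on_c2):
    # Exact inference by enumerating all joint states of (H, C1, C2).
    joint = {0: 0.0, 1: 0.0}
    for h, c1, c2 in product([0, 1], repeat=3):
        p = P_H if h else 1 - P_H
        p *= P_C_GIVEN_H[h] if c1 else 1 - P_C_GIVEN_H[h]
        p *= P_C_GIVEN_H[h] if c2 else 1 - P_C_GIVEN_H[h]
        for r in reports_on_c1:
            p *= p_report(r, c1)
        for r in reports_on_c2:
            p *= p_report(r, c2)
        joint[h] += p
    return joint[1] / (joint[0] + joint[1])

# Strategy (i): two positive reports on the SAME test consequence.
same = posterior_H([1, 1], [])
# Strategy (ii): one positive report on EACH of two test consequences.
varied = posterior_H([1], [1])
print(f"repeated measurement: P(H|e) = {same:.3f}")
print(f"varied evidence:      P(H|e) = {varied:.3f}")
```

Under these particular numbers the varied strategy confirms more strongly, in line with the variety-of-evidence thesis; the interest of the paper's analysis lies precisely in showing that such orderings are not robust across parameterizations.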