(Canadian Journal of Philosophy 37 (2007), pp. 111-127) A popular view about why death is bad for the one who dies is that death deprives its subject of the good things in life. This is the “deprivation account” of the evil of death. There is another view about death that seems incompatible with the deprivation account: the view that a person’s death is less bad if she has lived a good life. I give some arguments against this view and defend the deprivation account. Penultimate draft posted with kind permission of the Canadian Journal of Philosophy; please use published version for citations.
How ought we learn causal relationships? While Popper advocated a hypothetico-deductive logic of causal discovery, inductive accounts are currently in vogue. Many inductive approaches depend on the causal Markov condition as a fundamental assumption. This condition, I maintain, is not universally valid, though it is justifiable as a default assumption. In that case, the results of the inductive causal learning procedure must be tested before they can be accepted. This yields a synthesis of the hypothetico-deductive and inductive accounts, which forms the focus of this paper. I discuss the justification of this synthesis and draw an analogy between objective Bayesianism and the account of causal learning presented here.
Inductive logic admits a variety of semantics (Haenni et al., 2011, Part 1). This paper develops semantics based on the norms of Bayesian epistemology (Williamson, 2010, Chapter 7). §1 introduces the semantics and then, in §2, the paper explores methods for drawing inferences in the resulting logic and compares the methods of this paper with the methods of Barnett and Paris (2008). §3 then evaluates this Bayesian inductive logic in the light of four traditional critiques of inductive logic, arguing (i) that it is language independent in a key sense, (ii) that it admits connections with the Principle of Indifference but these connections do not lead to paradox, (iii) that it can capture the phenomenon of learning from experience, and (iv) that while the logic advocates scepticism with regard to some universal hypotheses, such scepticism is not problematic from the point of view of scientific theorising.
Bayesians hold that probability is a mental notion: saying that the probability of rain is 0.7 is just saying that you believe it will rain to degree 0.7. Degrees of belief are themselves cashed out in terms of bets—in this case you consider 7 : 3 to be fair odds for a bet on rain. There are two extreme Bayesian positions. Strict Subjectivists think that an agent can adopt whatever degrees of belief she likes, as long as they satisfy the axioms of probability. Thus your degree of belief in rain and degree of belief in no rain must sum to one but are otherwise unconstrained. At the other extreme, objectivists claim that an agent’s background knowledge considerably narrows down the choice of appropriate degrees of belief. In particular, if you know only that the frequency of rain is 0.7 then you should believe it will rain to degree 0.7; if you know absolutely nothing about the weather then you should set your degree of belief in rain to be 0.5; in neither of these cases is there room for subjective choice of degree of belief. In this book, Jeffrey advocates what is sometimes called empirically-based subjectivism, a position that lies between the two extremes of strict subjectivism and objectivism. According to this position, knowledge of frequencies constrains degree of belief, but lack of knowledge does not impose any constraints, so that if you know nothing about the weather you may adopt any degree of belief in rain you like. The aim of the book isn’t so much to justify this point of view as to provide a comprehensive exposition of probability theory from the perspective that it offers. The book succeeds admirably: Jeffrey presents a broad range of standard topics concerning Bayesianism, including the betting interpretation of degrees of belief, a discussion of objective chance, the application of Bayesianism to scientific reasoning, conditionalisation, expectation, exchangeability and decision theory. Naturally much of the discussion of these topics focuses on Jeffrey’s own multifarious contributions to the subject.
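The review above turns on the betting interpretation: a degree of belief p corresponds to fair odds of p : (1 − p). As a minimal illustrative sketch (my own, not drawn from the book under review), the arithmetic can be written out together with the weak coherence constraint that strict subjectivists impose:

```python
from fractions import Fraction

def fair_odds(p):
    """Fair odds for a bet on an event believed to degree p.

    Returns the ratio p : (1 - p) in lowest terms; a degree of
    belief of 0.7 corresponds to fair odds of 7 : 3."""
    frac = Fraction(p).limit_denominator(1000)
    ratio = frac / (1 - frac)
    return ratio.numerator, ratio.denominator

def coherent(p_event, p_negation):
    """The strict subjectivist's only constraint: degrees of belief
    in an event and in its negation must sum to one."""
    return abs(p_event + p_negation - 1.0) < 1e-9

print(fair_odds(0.7))      # (7, 3)
print(coherent(0.7, 0.3))  # True
print(coherent(0.7, 0.4))  # False: incoherent degrees of belief
```

On the empirically-based subjectivism Jeffrey favours, a known frequency of 0.7 would further pin the belief to 0.7; for the strict subjectivist, any coherent pair is admissible.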
Machamer, Darden and Craver: ‘Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions.’ (Machamer, Darden and Craver 2000, p. 3.) Glennan: ‘A mechanism for a behavior is a complex system that produces that behavior by the interaction of a number of parts, where the interactions between parts can be characterized by direct, invariant, change-relating generalizations.’ (Glennan 2002b, p. S344.) Bechtel and Abrahamsen: ‘A mechanism is a structure performing a function in virtue of its component parts, component operations, and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena.’ (Bechtel and Abrahamsen 2005, p. 423.)
Practical reasoning requires decision-making in the face of uncertainty. Xenelda has just left to go to work when she hears a burglar alarm. She doesn’t know whether it is hers but remembers that she left a window slightly open. Should she be worried? Her house may not be being burgled, since the wind or a power cut may have set the burglar alarm off, and even if it isn’t her alarm sounding she might conceivably be being burgled. Thus Xenelda cannot be certain that her house is being burgled, and the decision that she takes must be based on her degree of certainty, together with the possible outcomes of that decision.
The recursive Bayesian net formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of physical mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, a recursive Bayesian net can be applied to all these tasks. We show how a recursive Bayesian net can be used to model mechanisms in cancer science. The highest level of the proposed model will contain variables at the clinical level, while a middle level will map the structure of the DNA damage response mechanism and the lowest level will contain information about gene expression.
In this chapter I discuss connections between machine learning and the philosophy of science. First I consider the relationship between the two disciplines. There is a clear analogy between hypothesis choice in science and model selection in machine learning. While this analogy has been invoked to argue that the two disciplines are essentially doing the same thing and should merge, I maintain that the disciplines are distinct but related and that there is a dynamic interaction operating between the two: a series of mutually beneficial interactions that changes over time. I will introduce some particularly fruitful interactions, in particular the consequences of automated scientific discovery for the debate on inductivism versus falsificationism in the philosophy of science, and the importance of philosophical work on Bayesian epistemology and causality for contemporary machine learning. I will close by suggesting the locus of a possible future interaction: evidence integration.
This introduction to the volume begins with a manifesto that puts forward two theses: first, that the sciences are the best place to turn in order to understand causality; second, that scientifically-informed philosophical investigation can bring something to the sciences too. Next, the chapter goes through the various parts of the volume, drawing out relevant background and themes of the chapters in those parts. Finally, the chapter discusses the progeny of the papers and identifies some next steps for research into causality in the sciences.
Bayesian theory now incorporates a vast body of mathematical, statistical and computational techniques that are widely applied in a panoply of disciplines, from artificial intelligence to zoology. Yet Bayesians rarely agree on the basics, even on the question of what Bayesianism actually is. This book is about the basics: about the opportunities, questions and problems that face Bayesianism today.
The paper argues that, although a distinction between a priori and a posteriori knowledge (or justification) can be drawn, it is a superficial one, of little theoretical significance. The point is not that the distinction has borderline cases, for virtually all useful distinctions have such cases. Rather, it is argued by means of an example that the differences even between a clear case of a priori knowledge and a clear case of a posteriori knowledge may be superficial ones. In both cases, experience plays a role that is more than purely enabling but less than strictly evidential. It is also argued that the cases at issue are not special, but typical of a wide range of others, including knowledge of axioms of set theory and of elementary logical truths. Attempts by Quine and others to make all knowledge a posteriori (‘empirical’) are repudiated. The paper ends with a call for a new framework to be developed for analysing the epistemology of cognitive uses of the imagination.
The Linguistic Turn is the title of an influential anthology edited by Richard Rorty, published in 1967. In his introduction, Rorty explained: The purpose of the present volume is to provide materials for reflection on the most recent philosophical revolution, that of linguistic philosophy. I shall mean by “linguistic philosophy” the view that philosophical problems are problems which may be solved (or dissolved) either by reforming language, or by understanding more about the language we presently use. (1967: 3) ‘The linguistic turn’ has subsequently become a standard vague phrase for a diffuse event — some regard it as the event — in twentieth century philosophy, one not confined to signed-up linguistic philosophers in Rorty’s sense. For those who took the turn, language was somehow the central theme of philosophy. There is an increasingly widespread sense that the linguistic turn is past. In this essay I ask how far the turn has been, or should be, reversed.
1. As John Hawthorne and Maria Lasonen-Aarnio appreciate, some of the central issues raised in their ‘Knowledge and Objective Chance’ arise for all but the most extreme theories of knowledge. In a wide range of cases, according to very plausible everyday judgments, we know something about the future, even though, according to quantum mechanics, our belief has a small but nonzero chance (objective probability) of being untrue. In easily constructed examples, we are in that position simultaneously with respect to many different propositions about the future that are equiprobable and probabilistically independent of each other, at least to a reasonable approximation.
Some systems of modal logic, such as S5, which are often used as epistemic logics with the ‘necessity’ operator read as ‘the agent knows that’, are problematic as general epistemic logics for agents whose computational capacity does not exceed that of a Turing machine because they impose unwarranted constraints on the agent’s theory of non-epistemic aspects of the world, for example by requiring the theory to be decidable rather than merely recursively axiomatizable. To generalize this idea, two constraints on an epistemic logic are formulated: r.e. conservativeness, that any recursively enumerable theory R in the sublanguage without the epistemic operator is conservatively extended by some recursively enumerable theory in the language with the epistemic operator which is permitted by the logic to be the agent’s overall theory; the weaker requirement of r.e. quasi-conservativeness is similar except for applying only when R is consistent. The logic S5 is not even r.e. quasi-conservative; this result is generalized to many other modal logics. However, it is also proved that the modal logics S4, Grz and KDE are r.e. quasi-conservative and that K4, KE and the provability logic GLS are r.e. conservative. Finally, r.e. conservativeness and r.e. quasi-conservativeness are compared with related non-computational constraints.
The theory of belief revision and merging has recently been applied to judgement aggregation. In this paper I argue that judgements are best aggregated by merging the evidence on which they are based, rather than by directly merging the judgements themselves. This leads to a three-step strategy for judgement aggregation. First, merge the evidence bases of the various agents using some method of belief merging. Second, determine which degrees of belief one should adopt on the basis of this merged evidence base, by applying objective Bayesian theory. Third, determine which judgements are appropriate given these degrees of belief by applying a decision-theoretic account of rational judgement formation.
ϕ1, . . . , ϕn |≈ ψ? Here ϕ1, . . . , ϕn are premisses and ψ is a conclusion, expressed in some formal language, such as a propositional language or a predicate language. |≈ is an entailment relation: the entailment holds if all models of the premisses also satisfy the conclusion, where the logic provides some suitable notion of ‘model’ and ‘satisfy’. Proof theory is normally invoked to answer a question of this form: one tries to prove the conclusion from the premisses in a finite sequence of steps, where at each step one invokes an axiom or applies a rule of inference.
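For a classical propositional language, the model-theoretic question just described can be answered by brute force: enumerate every truth assignment to the atoms and check that each model of the premisses also satisfies the conclusion. A small sketch of my own (with classical |= standing in for the more general |≈, and formulas represented as functions from models to truth values):

```python
from itertools import product

def entails(premisses, conclusion, atoms):
    """phi_1, ..., phi_n |= psi iff every model (truth assignment to
    the atoms) satisfying all the premisses also satisfies psi."""
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(phi(model) for phi in premisses) and not conclusion(model):
            return False  # found a countermodel
    return True

# Example formulas over atoms p and q.
p = lambda m: m["p"]
q = lambda m: m["q"]
p_implies_q = lambda m: (not m["p"]) or m["q"]

print(entails([p, p_implies_q], q, ["p", "q"]))  # True: modus ponens
print(entails([p_implies_q], q, ["p", "q"]))     # False: p -> q alone does not give q
```

Proof theory, as the passage notes, answers the same question syntactically; for classical propositional logic the two routes agree by soundness and completeness.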
How should we reason with causal relationships? Much recent work on this question has been devoted to the theses (i) that Bayesian nets provide a calculus for causal reasoning and (ii) that we can learn causal relationships by the automated learning of Bayesian nets from observational data. The aim of this book is to..
This chapter addresses two questions: what are causal relationships? How can one discover causal relationships? I provide a survey of the principal answers given to these questions, followed by an introduction to my own view, epistemic causality, and then a comparison of epistemic causality with accounts provided by Judea Pearl and Huw Price.
We present a new framework for combining logic with probability, and demonstrate the application of this framework to breast cancer prognosis. Background knowledge concerning breast cancer prognosis is represented using logical arguments. This background knowledge and a database are used to build a Bayesian net that captures the probabilistic relationships amongst the variables. Causal hypotheses gleaned from the Bayesian net in turn generate new arguments. The Bayesian net can be queried to help decide when one argument attacks another. The Bayesian net is used to perform the prognosis, while the argumentation framework is used to provide a qualitative explanation of the prognosis.
How is probability related to logic? Should probability and logic be combined? If so, how? Bayesianism tells us we ought to reason probabilistically. In that sense, probability theory is logic. How then does probability theory relate to classical logic and the various non-classical logics that also stake a claim on normative reasoning? Is probability theory to be preferred over other logics or vice versa? Is probability theory to be used in some situations, and the other logics in other situations? Or should probability be combined with other logics?
Evidence can be complex in various ways: e.g., it may exhibit structural complexity, containing information about causal, hierarchical or logical structure as well as empirical data, or it may exhibit combinatorial complexity, containing a complex combination of kinds of information. This paper examines evidential complexity from the point of view of Bayesian epistemology, asking: how should complex evidence impact on an agent’s degrees of belief? The paper presents a high-level overview of an objective Bayesian answer: it sets out the objective Bayesian norms concerning the relation between evidence and degrees of belief, and goes on to show how evidence of causal, hierarchical and logical structure leads to natural constraints on degrees of belief. The objective Bayesian network formalism is presented, and it is shown how this formalism can be used to handle both kinds of evidential complexity—structural complexity and combinatorial complexity.
This paper compares first-order Kyburgian Evidential Probability (EP), second-order EP, and objective Bayesian epistemology with respect to the KLM system-P rules for consequence relations and the monotonic/non-monotonic divide.
Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted, with some causal interpretation given to the graph in a network and some standard interpretation of probability given to the probabilities specified in the network. In this chapter I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches.
We argue that the health sciences make causal claims on the basis of evidence both of physical mechanisms and of probabilistic dependencies. Consequently, an analysis of causality solely in terms of physical mechanisms, or solely in terms of probabilistic relationships, does not do justice to the causal claims of these sciences. Yet there seems to be a single relation of cause in these sciences—pluralism about causality will not do either. Instead, we maintain, the health sciences require a theory of causality that unifies its mechanistic and probabilistic aspects. We argue that the epistemic theory of causality provides the required unification.
How should probabilities be interpreted in causal models in the social and health sciences? In this paper we take a step towards answering this question by investigating the case of cancer in epidemiology and arguing that the objective Bayesian interpretation is most appropriate in this domain.
We consider the use of intervention data for eliminating the underdetermination in statistical modelling, and for guiding extensions of the statistical models. The leading example is factor analysis, a major statistical tool in the social sciences. We first relate indeterminacy in factor analysis to the problem of underdetermination. Then we draw a parallel between factor analysis models and Bayesian networks with hidden nodes, which allows us to clarify the use of intervention data for dealing with indeterminacy. It will be shown that in some cases, the indeterminacy can be resolved by an intervention. In the other cases, the intervention data suggest specific extensions of the model. The upshot is that intervention data can replace theoretical criteria that are typically employed in resolving underdetermination and theory change.
The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell’s mechanism for apoptosis.
Kyburg goes half-way towards objective Bayesianism. He accepts that frequencies constrain rational belief to an interval but stops short of isolating an optimal degree of belief within this interval. I examine the case for going the whole hog.
After introducing a range of mechanistic theories of causality and some of the problems they face, I argue that while there is a decisive case against a purely mechanistic analysis, a viable theory of causality must incorporate mechanisms as an ingredient. I describe one way of providing an analysis of causality which reaps the rewards of the mechanistic approach without succumbing to its pitfalls.
I present a formalism that combines two methodologies: objective Bayesianism and Bayesian nets. According to objective Bayesianism, an agent’s degrees of belief (i) ought to satisfy the axioms of probability, (ii) ought to satisfy constraints imposed by background knowledge, and (iii) should otherwise be as non-committal as possible (i.e. have maximum entropy). Bayesian nets offer an efficient way of representing and updating probability functions. An objective Bayesian net is a Bayesian net representation of the maximum entropy probability function.
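For a single binary proposition, the three objective Bayesian norms above can be illustrated concretely. Suppose background knowledge constrains the degree of belief to an interval [lo, hi] (norms (i) and (ii)); norm (iii) then selects the point of that interval with maximum entropy, i.e. the admissible value closest to 1/2. A toy sketch of my own (not the formalism of the paper, which concerns full Bayesian nets over many variables):

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of believing a proposition to degree p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def equivocal_belief(lo, hi):
    """Maximum-entropy degree of belief within the interval [lo, hi].

    Entropy is concave with its peak at 0.5, so the most non-committal
    admissible belief is simply 0.5 clamped into the interval."""
    return min(max(0.5, lo), hi)

print(equivocal_belief(0.6, 0.9))  # 0.6: evidence forces belief above a half
print(equivocal_belief(0.2, 0.8))  # 0.5: fully equivocal belief is admissible
```

Over many variables, a Bayesian net then serves as an efficient representation of the resulting maximum entropy probability function.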
Cancer treatment decisions should be based on all available evidence. But this evidence is complex and varied: it includes not only the patient’s symptoms and expert knowledge of the relevant causal processes, but also clinical databases relating to past patients, databases of observations made at the molecular level, and evidence encapsulated in scientific papers and medical informatics systems. Objective Bayesian nets offer a principled path to knowledge integration, and we show in this chapter how they can be applied to integrate various kinds of evidence in the cancer domain. This is important from the systems biology perspective, which needs to integrate data that concern different levels of analysis, and is also important from the point of view of medical informatics.
This paper develops connections between objective Bayesian epistemology—which holds that the strengths of an agent’s beliefs should be representable by probabilities, should be calibrated with evidence of empirical probability, and should otherwise be equivocal—and probabilistic logic. After introducing objective Bayesian epistemology over propositional languages, the formalism is extended to handle predicate languages. A rather general probabilistic logic is formulated and then given a natural semantics in terms of objective Bayesian epistemology. The machinery of objective Bayesian nets and objective credal nets is introduced and this machinery is applied to provide a calculus for probabilistic logic that meshes with the objective Bayesian semantics.
This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces. I discuss the ramifications of interpretations of probability and objective Bayesianism for the philosophy of mathematics in general.
This chapter provides an overview of a range of probabilistic theories of causality, including those of Reichenbach, Good and Suppes, and the contemporary causal net approach. It discusses two key problems for probabilistic accounts: counterexamples to these theories and their failure to account for the relationship between causality and mechanisms. It is argued that to overcome the problems, an epistemic theory of causality is required.
This posthumous work was produced by transcribing audio recordings of lectures that Bruno de Finetti gave at the National Institute for Advanced Mathematics in Rome in 1979. Alberto Mura attended the course, recorded the lectures, took notes and edited the resulting volume, which was first published in Italian in 1995. Hykel Hosni translated the lectures for this English edition, which appears in the Synthese Library series of volumes on epistemology, logic, methodology and philosophy of science. The book contains an introductory essay about de Finetti by Maria Carla Galavotti. Three of the twenty-two lectures and part of a fourth are lost, but the remaining lectures have many useful editorial comments. Moreover, interesting discussion between de Finetti and those attending the course is also included. So we have many people to thank for this important text. De Finetti wrote the following notice to advertise the course at the Institute: The course, with a deliberately generic title [‘On Probability’] will deal with the conceptual and controversial questions on the subject of probability: questions which it is necessary to resolve, one way or another, so that the development of reasoning is not reduced to a mere formalistic game of mathematical expressions or to vacuous and simplistic pseudophilosophical statements or allegedly practical claims. Since de Finetti was a key figure in the development of the conceptual foundations of probability, these lectures will be of great interest to philosophers of probability in particular, and to epistemologists and philosophers of mathematics and science in general. This new English edition is very welcome indeed. De Finetti is known as a champion of the strictly subjective interpretation of probability. According to this view, probabilities are to be construed as degrees of belief, and are thus defined in relation to an agent holding those beliefs.
These degrees of belief are subject to a rather weak normative constraint—coherence, which merely demands that degrees of belief satisfy the axioms of probability—but otherwise it is left up to the agent as to how to apportion her degrees of belief.
This volume arose out of an international, interdisciplinary academic network on Probabilistic Logic and Probabilistic Networks involving four of us (Haenni, Romeijn, Wheeler and Williamson), called Progicnet and funded by the Leverhulme Trust from 2006–8. Many of the papers in this volume were presented at an associated conference, the Third Workshop on Combining Probability and Logic (Progic 2007), held at the University of Kent on 5–7 September 2007. The papers in this volume concern either the special focus on the connection between probabilistic logic and probabilistic networks or the more general question of the links between probability and logic. Here we introduce probabilistic logic, probabilistic networks, current and future directions of research and also the themes of the papers that follow.
Mechanisms have become much-discussed, yet there is still no consensus on how to characterise them. In this paper, we start with something everyone is agreed on – that mechanisms explain – and investigate what constraints this imposes on our metaphysics of mechanisms. We examine two widely shared premises about how to understand mechanistic explanation: (1) that mechanistic explanation offers a welcome alternative to traditional laws-based explanation and (2) that there are two senses of mechanistic explanation that we call ‘epistemic explanation’ and ‘physical explanation’. We argue that mechanistic explanation requires that mechanisms are both real and local. We then go on to argue that real, local mechanisms require a broadly active metaphysics for mechanisms, such as a capacities metaphysics.
Xenotransplantation - the transfer of living tissue between species - has long been heralded as a potential solution to the severe organ shortage crisis experienced by the United Kingdom and other 'developed' nations. However, the significant risks which accompany this biotechnology led the United Kingdom to adopt a cautious approach to its regulation, with the establishment of a non-departmental public body - UKXIRA - to oversee the development of this technology on a national basis. In December 2006 UKXIRA was quietly disbanded and replaced with revised guidance, which entrusts the regulation of xenotransplantation largely to research ethics committees. In this article we seek to problematize this new regulatory framework, arguing that specialist expertise and national oversight are necessary components of an adequate regulatory framework for a biotechnology which poses new orders of risk, challenges the adequacy of traditional understandings of autonomy and consent, and raises significant animal welfare concerns. We argue for a more considered and holistic approach, based on adequate consultation, to regulating biotechnological developments in the United Kingdom.
Audi explains what he means by ‘normative’ in the case of belief: cognitive (epistemic) normativity is a matter of what ought to be believed, where the force of the “ought” is in part to attribute liability to criticism and negative (disapproving) attitudes toward the person(s) in question.
In this paper, we examine what is to be said in defence of Machamer, Darden and Craver’s (MDC) controversial dualism about activities and entities (Machamer, Darden and Craver in Philos Sci 67:1–25, 2000). We explain why we believe the notion of an activity to be a novel, valuable one, and set about clearing away some initial objections that can lead to its being brushed aside unexamined. We argue that substantive debate about ontology can only be effective when desiderata for an ontology are explicitly articulated. We distinguish three such desiderata. The first is a more permissive descriptive ontology of science, the second a more reductive ontology prioritising understanding, and the third a more reductive ontology prioritising minimalism. We compare MDC’s entities-activities ontology to its closest rival, the entities-capacities ontology, and argue that the entities-activities ontology does better on all three desiderata.
Second-order logic and modal logic are both, separately, major topics of philosophical discussion. Although both have been criticized by Quine and others, increasingly many philosophers find their strictures uncompelling, and regard both branches of logic as valuable resources for the articulation and investigation of significant issues in logical metaphysics and elsewhere. One might therefore expect some combination of the two sorts of logic to constitute a natural and more comprehensive background logic for metaphysics. So it is somewhat surprising to find that philosophical discussion of second-order modal logic is almost totally absent, despite the pioneering contribution of Barcan.
The paper is a critique of the widespread conception of logic as a neutral arbiter between metaphysical theories, one that makes no ‘substantive’ claims of its own (David Kaplan and John Etchemendy are two recent examples). A familiar observation is that virtually every putatively fundamental principle of logic has been challenged over the last century on broadly metaphysical grounds (however mistaken), with a consequent proliferation of alternative logics. However, this apparent contentiousness of logic is often treated as though it were neutralized by the possibility of studying all these alternative logics within an agreed metalogical framework, typically that of first-order logic with set theory. In effect, metalogic is given the role of neutral arbiter. The paper will consider a variety of examples in which deep logical disputes re-emerge at the meta-level. One case is quantified modal logic, where some varieties of actualism require a modal meta-language (as opposed to the usual non-modal language of possible worlds model theory) in order not to make their denial of the Barcan formula self-defeating. Similarly, on some views the intended model theory for second-order logic can only be given in a second-order metalanguage—this may be needed to avoid versions of Russell’s paradox when the first-order quantifiers are read as absolutely unrestricted. It can be shown that the phenomenon of higher-order vagueness eventually forces fuzzy logical treatments of vagueness to use a fuzzy metalanguage, with consequent repercussions for what first-order principles are validated. The difficulty of proving the completeness of first-order intuitionistic logic on its intended interpretation by intuitionistically rather than just classically valid means is a more familiar example. These case studies will be discussed in some detail to reveal a variety of ways in which even metalogic is metaphysically contested, substantial and non-neutral.
The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.
After a decade of intense debate about mechanisms, there is still no consensus characterization. In this paper we argue for a characterization that applies widely to mechanisms across the sciences. We examine and defend our disagreements with the major current contenders for characterizations of mechanisms. Ultimately, we indicate that the major contenders can all sign up to our characterization.
Contributors to this volume approach Rawls's idea from a number of perspectives: its philosophical foundations, institutional implications, and possible connections to the future of left-of-center politics.
According to Russo and Williamson (Int Stud Philos Sci 21(2):157–170, 2007, Hist Philos Life Sci 33:389–396, 2011a, Philos Sci 1(1):47–69, 2011b), in order to establish a causal claim of the form, ‘C is a cause of E’, one typically needs evidence that there is an underlying mechanism between C and E as well as evidence that C makes a difference to E. This thesis has been used to argue that hierarchies of evidence, as championed by evidence-based movements, tend to give primacy to evidence of difference making over evidence of mechanisms, and are flawed because both sorts of evidence are required and should be treated on a par. An alternative approach gives primacy to evidence of mechanism over evidence of difference making. In this paper, we argue that this alternative approach is equally flawed, again because both sorts of evidence need to be treated on a par. As an illustration of this parity, we explain how scientists working in the ‘EnviroGenomarkers’ project constantly make use of the two evidential components in a dynamic and intertwined way. We argue that such an interplay is needed not only for causal assessment but also for policy purposes.
This note responds to some criticisms of my recent book In Defence of Objective Bayesianism that were provided by Gregory Wheeler in his ‘Objective Bayesian Calibration and the Problem of Non-convex Evidence’.
In response to Paul Boghossian's objections in ‘Inferentialism and the epistemology of logic’, this paper defends counterexamples offered by Paolo Casalegno and the author to an inferentialist account of what it is to understand a logical constant, on which Boghossian had relied in his explanation of our entitlement to reason according to basic logical principles. The importance for understanding is stressed of non-inferential aspects of the use of logical constants. Boghossian's criteria for individuating concepts are also queried.
In Crispin Wright's ‘Meaning and Assertibility’, the main point of disagreement with Paolo Casalegno's critique of verificationist semantics in ‘The Problem of Non-conclusiveness’ concerns Wright's diagnosis of one of Casalegno's arguments as depending on an over-estimation of the proper explanatory task of a semantic theory. The present note argues that there is no such dependence.
The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell’s mechanism for apoptosis.
There is a need for integrated thinking about causality, probability and mechanisms in scientific methodology. Causality and probability are long-established central concepts in the sciences, with a corresponding philosophical literature examining their problems. On the other hand, the philosophical literature examining mechanisms is not long-established, and there is no clear idea of how mechanisms relate to causality and probability. But we need some idea if we are to understand causal inference in the sciences: a panoply of disciplines, ranging from epidemiology to biology, from econometrics to physics, routinely make use of probability, statistics, theory and mechanisms to infer causal relationships.

These disciplines have developed very different methods, in which causality and probability often seem to be understood differently, and in which the mechanisms involved often look very different. This variegated situation raises the question of whether the different sciences are really using different concepts, or whether progress in understanding the tools of causal inference in some sciences can lead to progress in others. The book tackles these questions, as well as others concerning the use of causality in the sciences.
Federica Russo and Jon Williamson (Philosophy–SECL, University of Kent, Canterbury CT2 7NF, UK), ‘Generic versus single-case causality: the case of autopsy’, European Journal for Philosophy of Science 1(1): 47–69. DOI: 10.1007/s13194-010-0012-4.
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
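To make the contrast concrete, here is a minimal sketch of how the two update rules can come apart on a finite outcome space. The three-outcome space and the constraints are invented for illustration and are not taken from the paper; the maximum entropy solutions are simple enough to be written down analytically rather than computed with an optimiser.

```python
# Hypothetical three-outcome space {a, b, c} (illustrative only).
# Initial evidence imposes the constraint P(a) = 0.5; on a finite space
# the maximum entropy distribution spreads the remaining mass equally.
prior = {"a": 0.5, "b": 0.25, "c": 0.25}

# New evidence rules out outcome c, i.e. it imposes P({a, b}) = 1.

# Bayesian conditionalisation: renormalise the prior over the event {a, b},
# preserving the prior ratio between a and b.
z = prior["a"] + prior["b"]
conditioned = {"a": prior["a"] / z, "b": prior["b"] / z, "c": 0.0}

# Objective Bayesian updating: re-maximise entropy subject to ALL the
# accumulated constraints, P(a) = 0.5 and P(c) = 0.  With c excluded,
# the remaining mass of 0.5 must fall on b (solved by inspection).
maxent = {"a": 0.5, "b": 0.5, "c": 0.0}

print(round(conditioned["a"], 3))  # 0.667
print(maxent["a"])                 # 0.5
```

The divergence arises because conditionalisation preserves the ratios of prior probabilities within the evidence event, whereas objective Bayesian updating equivocates afresh subject to the full set of accumulated constraints.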
Some proponents of “experimental philosophy” criticize philosophers' use of thought experiments on the basis of evidence that the verdicts vary with truth-independent factors. However, their data concern the verdicts of philosophically untrained subjects. According to the expertise defence, what matters are the verdicts of trained philosophers, who are more likely to pay careful attention to the details of the scenario and track their relevance. In a recent article, Jonathan M. Weinberg and others reply to the expertise defence that there is no evidence for such expertise. This article replies in turn, arguing that they have misconstrued the dialectical situation. Since they have produced no evidence that philosophical training is less efficacious for thought experimentation than for other cognitive tasks for which they acknowledge that it produces genuine expertise, such as informal argumentation, they have produced no evidence for treating the former more sceptically than the latter.
Mental competence, or ‘mental capacity’ as it is referred to in recent legislation in the UK, is a concept that is rapidly gaining currency in health and social care services. Neelke Doorn’s “Anthropological Reflection on the Concept of Competence” makes for fascinating and highly relevant reading, and the legal and ethical discussions she describes taking place in the Netherlands would appear to echo many of those that have occurred in the UK over the last 5 to 10 years, but with some significant differences. However, Doorn’s new conceptualization of mental competence causes some concerns, particularly if it were to be applied in services currently provided to people whose mental capacity may be ...
In this paper, we compare the mechanisms of protein synthesis and natural selection. We identify three core elements of mechanistic explanation: functional individuation, hierarchical nestedness or decomposition, and organization. These are now well-understood elements of mechanistic explanation in fields such as protein synthesis, and widely accepted in the mechanisms literature. But Skipper and Millstein (2005) have argued that natural selection is neither decomposable nor organized. This would mean that much of the current mechanisms literature does not apply to the mechanism of natural selection.
How strongly should you believe the various propositions that you can express?

That is the key question facing Bayesian epistemology. Subjective Bayesians hold that it is largely (though not entirely) up to the agent as to which degrees of belief to adopt. Objective Bayesians, on the other hand, maintain that appropriate degrees of belief are largely (though not entirely) determined by the agent's evidence. This book states and defends a version of objective Bayesian epistemology. According to this version, objective Bayesianism is characterized by three norms:
- Probability: degrees of belief should be probabilities
- Calibration: they should be calibrated with evidence
- Equivocation: they should otherwise equivocate between basic outcomes

Objective Bayesianism has been challenged on a number of different fronts. For example, some claim it is poorly motivated, or fails to handle qualitative evidence, or yields counter-intuitive degrees of belief after updating, or suffers from a failure to learn from experience. It has also been accused of being computationally intractable, susceptible to paradox, language dependent, and of not being objective enough.

Especially suitable for graduates or researchers in philosophy of science, foundations of statistics and artificial intelligence, the book argues that these criticisms can be met and that objective Bayesianism is a promising theory with an exciting agenda for further research.