How do we go about weighing evidence, testing hypotheses, and making inferences? The model of "inference to the best explanation" (IBE) -- that we infer the hypothesis that would, if correct, provide the best explanation of the available evidence -- offers a compelling account of inferences both in science and in ordinary life. Widely cited by epistemologists and philosophers of science, IBE has nonetheless remained little more than a slogan. Now this influential work has been thoroughly revised and updated, and features a new introduction and two new chapters. Inference to the Best Explanation is an unrivaled exposition of a theory of particular interest in the fields of both epistemology and the philosophy of science.
What is the connection between justification and the kind of consequence relations that are studied by logic? In this essay, I shall try to provide an answer, by proposing a general conception of the kind of inference that counts as justified or rational.
Laurence BonJour and more recently James Beebe have argued that the best way to defend the claim that abduction or inference to the best explanation is epistemically justified is the rationalist view that it is justified a priori. However, rationalism about abduction faces a number of challenges. This chapter focuses on one particular, highly influential objection: that there is no interpretation of probability available which is compatible with rationalism about abduction. The rationalist who wants to maintain a strong connection between epistemic justification and probability would do best to rely on a Keynesian interpretation of probability. However, the latter is vulnerable to Ramsey’s famous criticism that we do not seem to perceive or be aware of such probabilities. The chapter argues that Ramsey’s criticism is unsuccessful, and that there are good reasons to be optimistic about our ability to access the probabilities relevant to abductive inference.
Many epistemologists take Inference to the Best Explanation (IBE) to be “fundamental.” For instance, Lycan (1988, 128) writes that “all justified reasoning is fundamentally explanatory reasoning.” Conee and Feldman (2008, 97) concur: “fundamental epistemic principles are principles of best explanation.” Call them fundamentalists. They assert that nothing deeper could justify IBE, as is typically assumed of rules of deductive inference, such as modus ponens. However, logicians account for modus ponens with the valuation rule for the material conditional. By contrast, fundamentalists account for IBE with an ill-defined set of relations that happen to furnish their favorite set of inductive inferences. To our eye, this seems a little too convenient—there is too much room for ad hoc, just-so stories about the “striking” correspondence between our explanatory and inductive practices. We will argue that the (explanatory) pluralism adopted by the leading theorists of the best explanation—philosophers of science—undermines fundamentalism. Section 1 clarifies fundamentalism’s key tenets. Section 2 presents pluralism’s challenge to fundamentalism. Section 3 considers a potential fundamentalist reply to this challenge. Sections 4 through 6 canvass the leading candidates for developing this fundamentalist reply, showing each to be unsatisfactory.
I argue that inference can tolerate forms of self-ignorance and that these cases of inference undermine canonical models of inference on which inferrers have to appreciate (or purport to appreciate) the support provided by the premises for the conclusion. I propose an alternative model of inference that belongs to a family of rational responses in which the subject cannot pinpoint exactly what she is responding to or why, where this kind of self-ignorance does nothing to undermine the intelligence of the response.
Explanation is asymmetric: if A explains B, then B does not explain A. Traditionally, the asymmetry of explanation was thought to favor causal accounts of explanation over their rivals, such as those that take explanations to be inferences. In this paper, we develop a new inferential approach to explanation that outperforms causal approaches in accounting for the asymmetry of explanation.
In this paper, I argue that the “positive argument” for Constructive Empiricism (CE), according to which CE “makes better sense of science, and of scientific activity, than realism does” (van Fraassen 1980, 73), is an Inference to the Best Explanation (IBE). But constructive empiricists are critical of IBE, and thus they have to be critical of their own “positive argument” for CE. If my argument is sound, then constructive empiricists are in the awkward position of having to reject their own “positive argument” for CE by their own lights.
The overwhelming majority of those who theorize about implicit biases posit that these biases are caused by some sort of association. However, what exactly this claim amounts to is rarely specified. In this paper, I distinguish between different understandings of association, and I argue that the crucial senses of association for elucidating implicit bias are the cognitive structure and mental process senses. A hypothesis is subsequently derived: if associations really underpin implicit biases, then implicit biases should be modulated by counterconditioning or extinction but should not be modulated by rational argumentation or logical interventions. This hypothesis is false; implicit biases are not predicated on any associative structures or associative processes but instead arise because of unconscious propositionally structured beliefs. I conclude by discussing how the case study of implicit bias illuminates problems with popular dual-process models of cognitive architecture.
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models. Key Words: additive clustering; Bayesian inference; categorization; concept learning; contrast model; features; generalization; psychological space; similarity.
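The "size principle" behind this Bayesian recasting can be illustrated with a minimal sketch (an illustrative toy, not the authors' model): hypotheses are assumed to be intervals on a one-dimensional psychological dimension, each hypothesis consistent with all examples is weighted by (1/size)^n, and generalization to a novel point averages over the weighted hypotheses.

```python
# Hedged sketch of Bayesian generalization with the size principle
# (illustrative only): smaller hypotheses consistent with the data
# receive exponentially more weight as examples accumulate.

def generalization(y, examples, hypotheses):
    """P(y lies in the consequential region | examples), averaging over
    all interval hypotheses that contain every example."""
    weights = []
    for (lo, hi) in hypotheses:
        if all(lo <= x <= hi for x in examples):
            size = hi - lo
            weights.append(((lo, hi), (1.0 / size) ** len(examples)))
    z = sum(w for _, w in weights)
    return sum(w for (lo, hi), w in weights if lo <= y <= hi) / z

# Nested intervals of increasing size around the origin (assumed setup).
hyps = [(-s, s) for s in (1, 2, 4, 8)]
print(generalization(0.5, [0.0], hyps))  # close to the example: high
print(generalization(3.0, [0.0], hyps))  # far from the example: low
```

The gradient falls off with distance from the observed example, which is the qualitative shape Shepard's exponential law describes.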
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult. As a result, the existing models of inference, such as the belief-adjustment model, merely provide an ad hoc explanation for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment is developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results from this new experiment suggest that the adding model faces limitations when accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account of order effects than was previously possible.
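The core mechanism -- order effects arising from projecting a belief-state vector through non-commuting operators in different sequences -- can be illustrated with a minimal two-dimensional sketch (an assumption-laden toy, not the authors' fitted model; the angles are arbitrary choices for illustration):

```python
# Hedged sketch of quantum order effects (illustrative only): with
# incompatible questions, P(yes to A, then yes to B) generally differs
# from P(yes to B, then yes to A).
from math import cos, sin, pi

def proj_prob(state, angle):
    """Squared projection of a 2-D unit belief state onto the 'yes'
    axis of a question represented by a basis vector at `angle`."""
    axis = (cos(angle), sin(angle))
    return (state[0] * axis[0] + state[1] * axis[1]) ** 2

psi = (1.0, 0.0)          # initial belief state (assumed)
a, b = pi / 8, pi / 3     # two incompatible questions (assumed angles)

# After a 'yes' to the first question, the state collapses onto that
# question's axis, so the second probability depends on the angle gap.
p_ab = proj_prob(psi, a) * cos(b - a) ** 2   # ask A first, then B
p_ba = proj_prob(psi, b) * cos(a - b) ** 2   # ask B first, then A
print(p_ab != p_ba)  # True: order matters
```

A classical (commutative) probability model would assign the same value to both orderings, which is why order effects are taken to motivate the quantum formalism.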
I argue that the accounts of inference recently presented (in this journal) by Paul Boghossian, John Broome, and Crispin Wright are unsatisfactory. I proceed in two steps: First, in Sects. 1 and 2, I argue that we should not accept what Boghossian calls the “Taking Condition on inference” as a condition of adequacy for accounts of inference. I present a different condition of adequacy and argue that it is superior to the one offered by Boghossian. More precisely, I point out that there is an analog of Moore’s Paradox for inference; and I suggest that explaining this phenomenon is a condition of adequacy for accounts of inference. Boghossian’s Taking Condition derives its plausibility from the fact that it apparently explains the analog of Moore’s Paradox. Second, in Sect. 3, I show that neither Boghossian’s, nor Broome’s, nor Wright’s account of inference meets my condition of adequacy. I distinguish two kinds of mistake one is likely to make if one does not focus on my condition of adequacy; and I argue that all three—Boghossian, Broome, and Wright—make at least one of these mistakes.
Inference to the Best Explanation (IBE) is widely criticized for being an unreliable form of ampliative inference – partly because the explanatory hypotheses we have considered at a given time may all be false, and partly because there is an asymmetry between the comparative judgment on which an IBE is based and the absolute verdict that IBE is meant to license. In this paper, I present a further reason to doubt the epistemic merits of IBE and argue that it motivates moving to an inferential pattern in which IBE emerges as a degenerate limiting case. Since this inferential pattern is structurally similar to an argumentative strategy known as Inferential Robustness Analysis (IRA), it effectively combines the most attractive features of IBE and IRA into a unified approach to non-deductive inference.
True beliefs and truth-preserving inferences are, in some sense, good beliefs and good inferences. When an inference is valid, though, it is not merely truth-preserving, but truth-preserving in all cases. This motivates my question: I consider a Modus Ponens inference, and I ask what its validity in particular contributes to the explanation of why the inference is, in any sense, a good inference. I consider the question under three different definitions of ‘case’, and hence of ‘validity’: the orthodox definition given in terms of interpretations or models, a metaphysical definition given in terms of possible worlds, and a substitutional definition defended by Quine. I argue that the orthodox notion is poorly suited to explain what's good about a Modus Ponens inference. I argue that there is something good that is explained by a certain kind of truth across possible worlds, but the explanation is not provided by metaphysical validity in particular; nothing of value is explained by truth across all possible worlds. Finally, I argue that the substitutional notion of validity allows us to correctly explain what is good about a valid inference.
The paper addresses the phenomenology of inference. It proposes that the conscious character of conscious inferences is partly constituted by a sense of meaning; specifically, a sense of what Grice called ‘natural meaning’. In consciously drawing the (outright, categorical) conclusion that Q from a presumed fact that P, one senses the presumed fact that P as meaning that Q, where ‘meaning that’ expresses natural meaning. This sense of natural meaning is phenomenologically analogous, I suggest, to our sense of what is said in fluently comprehending everyday utterances in our first language. The proposal that conscious inference involves a sense of natural meaning is compared with views according to which conscious inference involves taking the premises (i) to be good reasons for the conclusion (as defended by Thomson and Grice), (ii) to support it (as argued by Audi and, recently, Boghossian), or (iii) to imply it (as lately contended by Broome). I argue that our proposal can explain certain phenomena handled by alternatives (i) and (ii), and that some further phenomena are handled by our account but not by these alternatives. In relation to alternative (iii), I argue that, in so far as implicational and natural-meaning relations come apart, the latter are a better fit for what we sense or take to be so in conscious inference.
In this paper I adduce a new argument in support of the claim that IBE is an autonomous form of inference, based on a familiar yet surprisingly under-discussed problem for Hume’s theory of induction. I then use some insights thereby gleaned to argue for the claim that induction is really IBE, and draw some normative conclusions.
Some theorists, ranging from W. James to contemporary psychologists, have argued that forgetting is the key to proper functioning of memory. The authors elaborate on the notion of beneficial forgetting by proposing that loss of information aids inference heuristics that exploit mnemonic information. To this end, the authors bring together 2 research programs that take an ecological approach to studying cognition. Specifically, they implement fast and frugal heuristics within the ACT-R cognitive architecture. Simulations of the recognition heuristic, which relies on systematic failures of recognition to infer which of 2 objects scores higher on a criterion value, demonstrate that forgetting can boost accuracy by increasing the chances that only 1 object is recognized. Simulations of the fluency heuristic, which arrives at the same inference on the basis of the speed with which objects are recognized, indicate that forgetting aids the discrimination between the objects' recognition speeds.
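The recognition heuristic described above has a very simple decision core, sketched here (illustrative only; the authors' simulations embed it in the ACT-R architecture, which this toy omits):

```python
# Hedged sketch of the recognition heuristic (not the authors' ACT-R
# implementation): when exactly one of two objects is recognized, infer
# that the recognized object scores higher on the criterion.

def recognition_heuristic(a, b, recognized):
    """Return the inferred higher-scoring object, or None when the
    heuristic cannot discriminate (both or neither recognized)."""
    ra, rb = a in recognized, b in recognized
    if ra and not rb:
        return a
    if rb and not ra:
        return b
    return None  # heuristic does not apply; fall back on other cues

known = {"Berlin", "Munich"}  # assumed recognition set
print(recognition_heuristic("Berlin", "Bonn", known))    # Berlin
print(recognition_heuristic("Berlin", "Munich", known))  # None
```

Forgetting helps precisely because it makes the first situation (exactly one object recognized) more frequent, which is when the heuristic can discriminate at all.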
Much of the recent work on the epistemology of causation has centered on two assumptions, known as the Causal Markov Condition and the Causal Faithfulness Condition. Philosophical discussions of the latter condition have exhibited situations in which it is likely to fail. This paper studies the Causal Faithfulness Condition as a conjunction of weaker conditions. We show that some of the weaker conjuncts can be empirically tested, and hence do not have to be assumed a priori. Our results lead to two methodologically significant observations: (1) some common types of counterexamples to the Faithfulness condition constitute objections only to the empirically testable part of the condition; and (2) some common defenses of the Faithfulness condition do not provide justification or evidence for the testable parts of the condition. It is thus worthwhile to study the possibility of reliable causal inference under weaker Faithfulness conditions. As it turns out, the modification needed to make standard procedures work under a weaker version of the Faithfulness condition also has the practical effect of making them more robust when the standard Faithfulness condition actually holds. This, we argue, is related to the possibility of controlling error probabilities with finite sample size (“uniform consistency”) in causal inference.
This article generalizes the explanationist account of inference to the best explanation. It draws a clear distinction between IBE and abduction and presents abduction as the first step of IBE. The second step amounts to the evaluation of explanatory power, which consists in the degree of explanatory virtues that a hypothesis exhibits. Moreover, even though coherence is the most often cited explanatory virtue, on pain of circularity, it should not be treated as one of the explanatory virtues. Rather, coherence should be equated with explanatory power and considered to be derivable from the other explanatory virtues: unification, explanatory depth and simplicity.
In informal terms, abductive reasoning involves inferring the best or most plausible explanation from a given set of facts or data. It is a common occurrence in everyday life and crops up in such diverse places as medical diagnosis, scientific theory formation, accident investigation, language understanding, and jury deliberation. In recent years, it has become a popular and fruitful topic in artificial intelligence research. This volume breaks new ground in the scientific, philosophical, and technological study of abduction. It presents new ideas about inferential and information-processing foundations for knowledge and certainty. The authors argue that knowledge arises from experience by processes of abductive inference, in contrast to the view that it arises non-inferentially, or that deduction and inductive generalization are enough to account for knowledge. Much AI research is hypothetical, so the importance of this book is that it reports key discoveries about abduction that have been made as a result of designing, building, testing, and analyzing actual working knowledge-based systems for medical diagnosis and other abductive tasks. The book tells the story of six generations of increasingly sophisticated generic abduction machines (RED-1, RED-2, PEIRCE, MDX2, TIPS, and QUAWDS) and the discovery of reasoning strategies that make it computationally feasible to form well-justified composite explanatory hypotheses, despite the threat of combinatorial explosion. The final chapter argues that perception is logically abductive and presents a layered-abduction computational model of perceptual information processing. This book will be of great interest to researchers in AI, cognitive science, and philosophy of science.
Inference versus consequence, an invited lecture at the LOGICA 1997 conference at Castle Liblice, was part of a series of articles for which I did research during a Stockholm sabbatical in the autumn of 1995. The article seems to have been fairly effective in getting its point across and addresses a topic highly germane to the Uppsala workshop. Owing to its appearance in the LOGICA Yearbook 1997, Filosofia Publishers, Prague, 1998, it has been rather inaccessible. Accordingly, it is republished here with only bibliographical changes and an afterword.
Much research on cognitive development focuses either on early-emerging domain-specific knowledge or domain-general learning mechanisms. However, little research examines how these sources of knowledge interact. Previous research suggests that young infants can make inferences from samples to populations (Xu & Garcia, 2008) and 11- to 12.5-month-old infants can integrate psychological and physical knowledge in probabilistic reasoning (Teglas, Girotto, Gonzalez, & Bonatti, 2007; Xu & Denison, 2009). Here, we ask whether infants can integrate a physical constraint of immobility into a statistical inference mechanism. Results from three experiments suggest that, first, infants were able to use domain-specific knowledge to override statistical information, reasoning that sometimes a physical constraint is more informative than probabilistic information. Second, we provide the first evidence that infants are capable of applying domain-specific knowledge in probabilistic reasoning by using a physical constraint to exclude one set of objects while computing probabilities over the remaining sets.
The aim of this book is to present the fundamental theoretical results concerning inference rules in deductive formal systems. Primary attention is focused on: admissible or permissible inference rules; the derivability of the admissible inference rules; the structural completeness of logics; and the bases for admissible and valid inference rules. There is particular emphasis on propositional non-standard logics (primarily, superintuitionistic and modal logics), but general logical consequence relations and classical first-order theories are also considered. The book is basically self-contained, and special attention has been paid to presenting the material in a manner convenient for the reader. Proofs of results, many of which are not readily available elsewhere, are also included. The book is written at a level appropriate for first-year graduate students in mathematics or computer science. Although some knowledge of elementary logic and universal algebra is necessary, the first chapter includes all the results from universal algebra and logic that the reader needs. For graduate students in mathematics and computer science the book is an excellent textbook.
This article discusses how inference to the best explanation can be justified as a practical meta-argument. It is, firstly, justified as a practical argument insofar as accepting the best explanation as true can be shown to further a specific aim. And because this aim is a discursive one which proponents can rationally pursue in — and relative to — a complex controversy, namely maximising the robustness of one’s position, IBE can be conceived, secondly, as a meta-argument. My analysis thus bears a certain analogy to Sellars’ well-known justification of inductive reasoning; it is based on recently developed theories of complex argumentation.
The idea that knowledge can be extended by inference from what is known seems highly plausible. Yet, as shown by the familiar preface paradox and lottery-type cases, the possibility of aggregating uncertainty casts doubt on its tenability. We show that these considerations go much further than previously recognized and significantly restrict the kinds of closure ordinary theories of knowledge can endorse. Meeting the challenge of uncertainty aggregation requires either the restriction of knowledge-extending inferences to single premises, or eliminating epistemic uncertainty in known premises. The first strategy, while effective, retains little of the original idea—conclusions even of modus ponens inferences from known premises are not always known. We then look at the second strategy, inspecting the most elaborate and promising attempt to secure the epistemic role of basic inferences, namely Timothy Williamson’s safety theory of knowledge. We argue that while it indeed has the merit of allowing basic inferences such as modus ponens to extend knowledge, Williamson’s theory faces formidable difficulties. These difficulties, moreover, arise from the very feature responsible for its virtue: the infallibilism of knowledge.
Although both philosophers and scientists are interested in how to obtain reliable knowledge in the face of error, there is a gap between their perspectives that has been an obstacle to progress. By means of a series of exchanges between the editors and leaders from the philosophy of science, statistics and economics, this volume offers a cumulative introduction connecting problems of traditional philosophy of science to problems of inference in statistical and empirical modelling practice. Philosophers of science and scientific practitioners are challenged to reevaluate the assumptions of their own theories - philosophical or methodological. Practitioners may better appreciate the foundational issues around which their questions revolve and thereby become better 'applied philosophers'. Conversely, new avenues emerge for finally solving recalcitrant philosophical problems of induction, explanation and theory testing.
An influential suggestion about the relationship between Bayesianism and inference to the best explanation holds that IBE functions as a heuristic to approximate Bayesian reasoning. While this view promises to unify Bayesianism and IBE in a very attractive manner, important elements of the view have not yet been spelled out in detail. I present and argue for a heuristic conception of IBE on which IBE serves primarily to locate the most probable available explanatory hypothesis to serve as a working hypothesis in an agent’s further investigations. Along the way, I criticize what I consider to be an overly ambitious conception of the heuristic role of IBE, according to which IBE serves as a guide to absolute probability values. My own conception, by contrast, requires only that IBE can function as a guide to the comparative probability values of available hypotheses. This is shown to be a much more realistic role for IBE given the nature and limitations of the explanatory considerations with which IBE operates.
This monograph provides a new account of justified inference as a cognitive process. In contrast to the prevailing tradition in epistemology, the focus is on low-level inferences, i.e., those inferences that we are usually not consciously aware of and that we share with the nearby cat, which infers that the bird she sees picking grains from the dirt is able to fly. Presumably, such inferences are not generated by explicit logical reasoning, but logical methods can be used to describe and analyze such inferences. Part 1 gives a purely system-theoretic explication of belief and inference. Part 2 adds a reliabilist theory of justification for inference, with a qualitative notion of reliability being employed. Part 3 recalls and extends various systems of deductive and nonmonotonic logic and thereby explains the semantics of absolute and high reliability. In Part 4 it is proven that qualitative neural networks are able to draw justified deductive and nonmonotonic inferences on the basis of distributed representations. This is derived from a soundness/completeness theorem with regard to cognitive semantics of nonmonotonic reasoning. The appendix extends the theory both logically and ontologically, and relates it to A. Goldman's reliability account of justified belief. This text will be of interest to epistemologists and logicians, to all computer scientists who work on nonmonotonic reasoning and neural networks, and to cognitive scientists.
In the Tractatus Wittgenstein criticizes Frege and Russell's view that laws of inference (Schlussgesetze) "justify" logical inferences. What lies behind this criticism, I argue, is an attack on Frege and Russell's conceptions of logical entailment. In passing, I examine Russell's dispute with Bradley on the question whether all relations are "internal".
In this chapter I examine past and recent theories of unconscious inference. Most theorists have ascribed inferences to perception literally, not analogically, and I focus on the literal approach. I examine three problems faced by such theories if their commitment to unconscious inferences is taken seriously. Two problems concern the cognitive resources that must be available to the visual system (or a more central system) to support the inferences in question. The third problem focuses on how the conclusions of inferences are supposed to explain the phenomenal aspects of visual experience, the looks of things. Finally, in comparing past and recent responses to these problems, I provide an assessment of the current prospects for inferential theories. (This paper is reprinted in Hatfield 2009, Perception and Cognition: Essays in the Philosophy of Psychology, Clarendon Press, 124-152.)
The underconsideration argument against inference to the best explanation and scientific realism holds that scientists are not warranted in inferring that the best theory is true, because scientists only ever conceive of a small handful of theories at one time, and as a result, they may not have considered a true theory. However, antirealists have not developed a detailed alternative account of why explanatory inference nevertheless appears so central to scientific practice. In this paper, I provide new defences against some recent objections to the underconsideration argument, while also developing an account of explanatory inference that both survives these criticisms and does not entail realism.
The field of psychology, including cognitive science, is vexed by a crisis of confidence. Although the causes and solutions are varied, we focus here on a common logical problem in inference. The default mode of inference is significance testing, which has a "free lunch" property whereby researchers need not make detailed assumptions about the alternative in order to test the null hypothesis. We present the argument that there is no free lunch; that is, valid testing requires that researchers test the null against a well-specified alternative. We show how this requirement follows from the basic tenets of conventional and Bayesian probability. Moreover, we show in both the conventional and Bayesian framework that not specifying the alternative may lead to rejections of the null hypothesis with scant evidence. We review both frequentist and Bayesian approaches to specifying alternatives, and we show how such specifications improve inference. The field of cognitive science will benefit because consideration of reasonable alternatives will undoubtedly sharpen the intellectual underpinnings of research.
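The point about specifying an alternative can be made concrete with a small Bayes-factor example (an illustrative sketch, not taken from the article): for k successes in n binomial trials, compare a point null p = 0.5 against a well-specified alternative that places a uniform prior on p.

```python
# Hedged sketch (assumed example): testing a point null against a
# well-specified alternative via a Bayes factor for binomial data.
from math import comb

def bayes_factor_01(k, n):
    """BF_01: evidence for H0 (p = 0.5) over H1 (p ~ Uniform(0, 1))."""
    m0 = comb(n, k) * 0.5 ** n
    # Under the uniform prior, the marginal likelihood integrates to
    # C(n, k) * B(k + 1, n - k + 1) = 1 / (n + 1).
    m1 = 1.0 / (n + 1)
    return m0 / m1

print(bayes_factor_01(9, 20))   # near-chance data: favors the null
print(bayes_factor_01(18, 20))  # extreme data: favors the alternative
```

Unlike a bare significance test, this comparison can accumulate evidence *for* the null, because the alternative is spelled out rather than left unspecified.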
The classical theory of semantic information (ESI), as formulated by Bar-Hillel and Carnap in 1952, does not give a satisfactory account of the problem of what information, if any, analytically and/or logically true sentences have to offer. According to ESI, analytically true sentences lack informational content, and any two analytically equivalent sentences convey the same piece of information. This problem is connected with Cohen and Nagel's paradox of inference: since the conclusion of a valid argument is contained in the premises, it fails to provide any novel information. Again, ESI does not give a satisfactory account of the paradox. In this paper I propose a solution based on the distinction between empirical information and analytic information. Declarative sentences are informative due to their meanings. I construe meanings as structured hyperintensions, modelled in Transparent Intensional Logic as so-called constructions. These are abstract, algorithmically structured procedures whose constituents are sub-procedures. My main thesis is that constructions are the vehicles of information. Hence, although analytically true sentences provide no empirical information about the state of the world, they convey analytic information, in the shape of constructions prescribing how to arrive at the truths in question. Moreover, even though analytically equivalent sentences have equal empirical content, their analytic content may be different. Finally, though the empirical content of the conclusion of a valid argument is contained in the premises, its analytic content may be different from the analytic content of the premises and thus convey a new piece of information.
Kirsten Besheer has recently considered Descartes’ doubting appropriately in the context of his physiological theories, in the spirit of recent important re-appraisals of his natural philosophy. However, Besheer does not address the notorious indubitability, and its source, that Descartes claims to have discovered. David Cunning has remarked that Descartes’ insistence on the indubitability of his existence presents “an intractable problem of interpretation” in the light of passages that suggest his existence is “just as dubitable as anything else”. However, although the cogito argument is widely thought to be central to the force of Descartes’ indubitability, for his part, Cunning does not consider its relevance and force. Accordingly, this article is concerned with the cogito argument and the question central to Hintikka’s seminal contribution, described by Cottingham as “perhaps the most debated question,” namely, whether or not the cogito can be construed as a logical inference. Clearly, an inferential account has the potential to explain the certainty of Descartes’ conclusion that he exists. Recently, Sarkar has offered what he characterizes as “novel and fairly conclusive reasons why the cogito cannot be construed as an argument,” asserting that “the discovery of the cogito can only be an intuition not a deduction.” Obviously, it would greatly support the opposing inferential construal if a remotely plausible logical argument could be proposed. Toward this end, I defend the virtues of my ‘Diagonal’ account of Descartes’ cogito. Above all, I show how my analysis meets the requirement that any satisfactory solution to the problem of the cogito would reconcile Descartes’ claim that the cogito is a certain inference with his claim that it is an intuitive kind of knowledge.
Through a critical discussion of analyses such as that of Gallois, I show that it is possible to provide a textually faithful analysis that permits seeing the cogito as both inference and intuition, because it may be seen as an exercise in the mathematical method of Analysis. Above all, as Feldman requires, I show that the Diagonal account is not only textually elegant, but permits crediting Descartes with a worthy insight, thereby resolving the tension between what Howell has termed the Humean and Cartesian problems, namely, the elusiveness and the certainty of the self.
I advance a pragmatic account of begging the question according to which a use of an argument begs the question just in case it is used as a statement of inference and it fails to state an inference the arguer or an addressee can perform given what they explicitly believe. Accordingly, what begs questions are uses of arguments as statements of inference, and the root cause of begging the question is an argument’s failure to state an inference performable by the reasoners the arguer targets. In these ways, my account is distinguished from other pragmatic accounts. By taking the defect of a question-begging use of an argument to be its failure to state its purported inference, my account highlights in a unique way why question-begging is not an epistemic defect, and why it is not a fallacy, understood as a mistake in reasoning. These points have been made elsewhere, but I believe that their plausibility is enhanced by considering begging the question as nullifying the role of an argument as a statement of inference. Since question-begging uses of arguments fail to state their purported inferences, using an argument in a question-begging way is not a ratiocinative mistake. This undermines accounts of begging the question that adopt an epistemic approach.
This article considers the prospects of inference to the best explanation (IBE) as a method of confirming causal claims vis-à-vis the medical evidence of mechanisms. I show that IBE is actually descriptive of how scientists reason when choosing among hypotheses, that it is amenable to the balance/weight distinction, a pivotal pair of concepts in the philosophy of evidence, and that it can do justice to interesting features of the interplay between mechanistic and population-level assessments.
This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment.
I respond to the bad lot argument in the context of biological systematics. The response relies on the historical nature of biological systematics and on the availability of pattern explanations. The basic assumption of common descent enables systematic methodology to naturally generate candidate explanatory hypotheses. However, systematists face a related challenge in the issue of character analysis. Character analysis is the central problem for contemporary systematics, yet the general problem of which it is a case—what counts as evidence?—has not been adequately discussed by proponents of inference to the best explanation. Facing this problem is the price of adopting abductive methods. I sketch an account of how systematists approach the problem of evidence.
One of the most important topics in current work on consciousness is its relationship to attention. Recently, one of the focuses of this debate has been the phenomenon of identity crowding. Ned Block has claimed that identity crowding involves conscious perception of an object that we are unable to pay attention to. In this article, we draw upon a range of empirical findings to argue against Block's interpretation of the data. We also argue that current empirical evidence strongly supports an interpretation of the data that emphasises cognitive inference over conscious perception.
In his 1992 Aquinas Lecture at Marquette University, Ernan McMullin discusses whether there is a pattern of inference that particularly characterizes the sciences of nature. He pursues this theme on both a historical and a systematic level. There is a continuity of concern across the ages that separate the Greek inquiry into nature from our own vastly more complex scientific enterprise. But there is also discontinuity, the abandonment of earlier ideals as unworkable. The natural sciences involve many types of inference; three of these interlock in a special way to produce “retroductive inference,” the kind of complex inference that supports causal theory.
This book deals with a neglected episode in the history of logic and theories of cognition: the way in which conceptions of inference changed during the seventeenth century. The author focuses on the work of Descartes, contrasting his construal of inference as an instantaneous grasp in accord with the natural light of reason with the Aristotelian view of inference as a discursive process. Gaukroger offers a new interpretation of Descartes's contribution to the question, revealing it to be a significant advance over humanist and late Scholastic conceptions. He argues that Descartes's account played a pivotal role in the development of our understanding of the nature of inference.
In empirical modeling, an important desideratum for deeming theoretical entities and processes real is that they be reproducible in a statistical sense. Current crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments of the two types of models. The key to untangling them is the realization that behind every substantive model there is a statistical model that pertains exclusively to the probabilistic assumptions imposed on the data. It is not that the methodology determines whether to be a realist about entities and processes in a substantive field. It is rather that the substantive and statistical models refer to different entities and processes, and therefore call for different criteria of adequacy.
The problem analysed in this paper is whether we can gain knowledge by using valid inferences, and how we can explain this process from a model-theoretic perspective. According to the paradox of inference (Cohen & Nagel 1936/1998, 173), it is logically impossible for an inference to be both valid and for its conclusion to possess novelty with respect to the premises. I argue in this paper that valid inference has an epistemic significance, i.e., it can be used by an agent to enlarge his knowledge, and that this significance can be accounted for in model-theoretic terms. I will argue first that the paradox is based on an equivocation, namely, it arises because logical containment, i.e., logical implication, is identified with epistemological containment, i.e., the knowledge of the premises entailing the knowledge of the conclusion. Second, I will argue that a truth-conditional theory of meaning has the necessary resources to explain the epistemic significance of valid inferences. I will explain this epistemic significance starting from Carnap’s semantic theory of meaning and Tarski’s notion of satisfaction. In this way I will counter Prawitz’s (2012b) claim that a truth-conditional theory of meaning is not able to account for the legitimacy of valid inferences, i.e., their epistemic significance.
Robert Pargetter has argued that we know other minds through an inference to the best explanation. My aim is to show, by criticising Pargetter's account, that this approach to the problem of other minds cannot, as it stands, deliver the goods; it might be part of the right response to the problem, but it cannot be the whole story. More precisely, I will claim that Pargetter does not successfully reconstruct how ordinary people in everyday life come reasonably to believe in other minds, given only the gross behavioural evidence actually available to them. I will suggest, contrary to both Pargetter in particular and this approach in general, that reference to one's own case does, after all, play an indispensable evidential role in the justification of belief in other minds, something which obviously marks an important disanalogy between the case of other minds and that of such theoretical entities as electrons.
It has recently been argued that inference essentially involves the thinker taking his premises to support his conclusion and drawing his conclusion because of this fact. However, this Taking Condition has also been criticized: If taking is interpreted as believing, it seems to lead to a vicious regress and to overintellectualize the act of inferring. In this paper, I examine and reject various attempts to salvage the Taking Condition, either by interpreting inferring as a kind of rule-following, or by finding an innocuous role for the taking-belief. Finally, I propose an alternative account of taking, according to which it is not a separate belief, but rather an aspect of the attitude of believing: Believing that p implies not only taking p to be true and taking oneself to believe that p, but also taking one's reasons q to support p, when the belief in question is held on account of an inference.
In his seminal Inference to the Best Explanation, Peter Lipton adopted a causal view of explanation and a broadly Millian view of how causal knowledge is obtained. This made his account vulnerable to critics who charged that Inference to the Best Explanation is merely a dressed-up version of Mill’s methods, which in the critics’ view do the real inductive work. Lipton advanced two arguments to protect Inference to the Best Explanation against this line of criticism: the problem of multiple differences and the problem of inferred differences. Lipton claimed that these two problems show Mill’s method of difference to be largely unworkable unless it is embedded in an explanationist framework. Here I consider both arguments as well as the best Millian defense against them. Since the existing Millian defense is only partially successful, I will develop a new and improved account. As an integral part of the argument, I show that my solutions to the problems of multiple and inferred differences offer new insight into Lipton’s main case study: Ignaz Semmelweis’s discovery of the cause of childbed fever. I conclude that the method of difference can overcome Lipton’s challenges outside an explanationist framework.
Do accounts of scientific theory formation and revision have implications for theories of everyday cognition? We maintain that failing to distinguish between importantly different types of theories of scientific inference has led to fundamental misunderstandings of the relationship between science and everyday cognition. In this article, we focus on one influential manifestation of this phenomenon which is found in Fodor's well-known critique of theories of cognitive architecture. We argue that in developing his critique, Fodor confounds a variety of distinct claims about the holistic nature of scientific inference. Having done so, we outline more promising relations that hold between theories of scientific inference and ordinary cognition.
The three main approaches in statistical inference—classical statistics, Bayesian, and likelihood—are in current use in phylogeny research. The three approaches are discussed and compared, with particular emphasis on theoretical properties illustrated by simple thought-experiments. The methods are problematic on axiomatic grounds, on extra-mathematical grounds relating to the use of a prior, or on practical grounds. This essay aims to increase understanding of these limits among those with an interest in phylogeny.
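The contrast among the three approaches can be made concrete with a toy binomial example. The numbers below are illustrative assumptions of my own, not drawn from the essay's own thought-experiments: the same data (7 successes in 10 trials) are summarized by a classical p-value against a null, a likelihood ratio between two fully specified hypotheses, and a Bayesian posterior under a uniform prior.

```python
import math

k, n = 7, 10  # illustrative data: 7 'successes' in 10 trials

def binom_pmf(k, n, theta):
    """Probability of exactly k successes in n trials with rate theta."""
    return math.comb(n, k) * theta**k * (1 - theta) ** (n - k)

# Classical: one-sided p-value for H0: theta = 0.5
# (probability of 7 or more successes if the null were true).
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Likelihood: evidence is a ratio between two fully specified hypotheses;
# no tail probabilities and no prior are involved.
lr = binom_pmf(k, n, 0.7) / binom_pmf(k, n, 0.5)

# Bayesian: a uniform Beta(1, 1) prior on theta gives a
# Beta(1 + k, 1 + n - k) posterior, whose mean is (1 + k) / (2 + n).
post_mean = (1 + k) / (2 + n)

print(f"p-value (H0: theta = 0.5): {p_value:.3f}")   # about 0.172
print(f"likelihood ratio (0.7 vs 0.5): {lr:.2f}")    # about 2.28
print(f"posterior mean of theta: {post_mean:.3f}")   # about 0.667
```

Each summary answers a different question: the p-value depends on unobserved tail outcomes, the likelihood ratio requires committing to two point hypotheses, and the posterior requires a prior. That is one way to see why each approach is open to the axiomatic, prior-related, or practical objections the essay discusses.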