This book addresses a topic that has received little or no attention in orthodox epistemology. Typical epistemological investigation focuses almost exclusively on knowledge, where knowing that something is the case importantly implies that what is believed is strictly true. This condition on knowledge is known as factivity and it is, to be sure, a bit of epistemological orthodoxy. So, if a belief is to qualify as knowledge according to the orthodox view, it cannot be false. There is also an increasingly influential group of epistemologists who argue that one ought to act only on what one knows, because truth of belief is the surest way to guarantee that our actions work out as planned. They defend what is known as the knowledge norm for action. This view is typically justified by the idea that successful intentional action should stem from knowledge because knowledge is a factive propositional attitude and is, as a result, success-prone with respect to intentional action. In other words, true beliefs that constitute knowledge are the rationally normative standard for successful acting. But there are clearly multitudes of cases where epistemic agents operate successfully and even rationally on the basis of beliefs that are false. That this is the case is not especially controversial. Sometimes false beliefs facilitate successful action. Of course, this can be because the agent is simply lucky. However, there is something particularly important about some false beliefs that relates to successful action, but not in virtue of luck. While not strictly true, the beliefs in question are close to the truth, or approximately true. This gives rise to the possibility that there are knowledge-like states that play roles very similar to knowledge but which are only quasi-factive. That is to say, such states imply approximate truth and do not imply strict truth.
Moreover, such beliefs often facilitate successful action in virtue of their being approximately true, and this suggests a much more plausible norm for successful action, one that encompasses both cases of rational action guided by such quasi-factive states and cases of rational action guided by knowledge. The thesis of this book is that quasi-factive knowledge-like states are far more common than epistemologists have acknowledged, and the book introduces a theory of such states and of how they give rise to a much more reasonable account of the norms for action. This involves some tricky issues concerning approximate truth, the rationality of believing claims that are not strictly true, the justification of approximately true beliefs, the nature of false but approximately true evidence, the norms of belief, knowledge and quasi-knowledge, etc.
Experimental philosophy is one of the most controversial and potentially revolutionary areas of philosophical research today. X-phi, as it is known by many of its practitioners, questions many basic concepts regarding human intuitions, concepts which have guided centuries of modern philosophers. In their place, x-phi steers philosophical research back to scientific investigation in order to better understand human intuitions, using research techniques borrowed from current research in psychology and neuroscience. While scholars debate whether experimental philosophy signals a sea change or is merely a faddish detour, no existing book examines the x-phi movement in terms of its methodology. In _The Experimental Turn and the Methods of Philosophy_, Michael J. Shaffer addresses this need, suggesting that the significance of experimental philosophy can best be assessed and understood in methodological terms. By comparing and contrasting traditional views of philosophical methodology with those of experimental philosophy, Shaffer traces the roots of the movement to Quinean naturalism and also demonstrates the deep, revolutionary significance of the experimental turn.
This contribution explores how the question of the meaning of being arises in the work of P. Ricoeur. Johann Michel shows that Ricoeur's hermeneutical ontology presents itself as fragmented, scattered across disparate works, without ever being erected into a closed and finished system. Through these fragments of ontology, J. Michel nevertheless ventures to identify two threads, one originating in La métaphore vive, the other culminating in Soi-même comme un autre. Although each has its own topos, these ontological threads converge on the same style, that of the "long route of hermeneutics."
The achievement of Michel Henry's work was to "gather up" and develop the affective side of phenomenology, which Husserl, in a key passage of his "Idées directrices," set aside in favor of the intentional side of subjectivity, just as, three centuries earlier, Descartes had "gathered up" the subjectivity set aside by Galileo. The text that follows compares Husserl's concept of the substratum, or soul, with Michel Henry's concept of flesh. It will be seen that the two philosophers speak of a sensible layer, which can in a certain sense be situated between body and mind, from two diametrically opposed points of view. Beyond the opposition between the two philosophies, this inquiry leads to a topological interpretation of the relation between body, soul, and mind, as suggested by the Husserlian term "substratum" (soubassement), and offers a different light on the Henryan notion of auto-affection.
This paper shows how the availability heuristic can be used to justify inference to the best explanation in such a way that van Fraassen's infamous "best of a bad lot" objection can be adroitly avoided. With this end in mind, a dynamic and contextual version of the erotetic model of explanation sufficient to ground this response is presented and defended.
This paper introduces a model of evidence denial that explains such behavior as a manifestation of rationality, based on the contention that social values (measurable as utilities) often underwrite these sorts of responses. Moreover, it is contended that the value associated with group membership in particular can override epistemic reason when the expected utility of a belief or belief system is great. However, it appears that it is still possible for such unreasonable believers to reverse this sort of dogmatism and to change their beliefs in a way that is epistemically rational. The conjecture made here is that we should expect this to happen only when the expected utility of the beliefs in question dips below a threshold at which the utility of continued dogmatism and the associated group membership is no longer sufficient to motivate defusing the counter-evidence that tells against such epistemically irrational beliefs.
In this paper significant challenges are raised for the view that explanation essentially involves unification, specifically for the well-known versions of unificationism developed and defended by Michael Friedman and Philip Kitcher. The objections involve the explanatory regress argument and the concepts of reduction and scientific understanding. Essentially, the contention made here is that these versions of unificationism wrongly assume that reduction secures understanding.
Hans Reichenbach’s pragmatic treatment of the problem of induction in his later works on inductive inference was, and still is, of great interest. However, it has been dismissed as a pseudo-solution and regarded as problematically obscure. This is, in large part, due to the difficulty of understanding exactly what Reichenbach’s solution is supposed to amount to, especially as it appears to offer no response to the inductive skeptic. For entirely different reasons, the significance of Bertrand Russell’s classic attempt to solve Hume’s problem is also both obscure and controversial. Russell accepted that Hume’s reasoning about induction was basically correct, but he argued that, given the centrality of induction in our cognitive endeavors, something must be wrong with Hume’s basic assumptions. What Russell effectively identified as Hume’s (and Reichenbach’s) failure was the commitment to a purely extensional empiricism. So, Russell’s solution to the problem of induction was to abandon extensional empiricism and to accept that induction is grounded by embracing both a robust essentialism and a form of rationalism that allows for a priori knowledge of universals. Neither of these solutions is without its critics. On the one hand, Reichenbach’s solution faces the charges of obscurity and of offering no response to the inductive skeptic. On the other hand, Russell’s solution looks objectionably ad hoc absent some non-controversial and independent argument that the universals necessary to ground the uniformity of nature actually exist and are knowable. This particular charge is especially likely to arise from those inclined towards purely extensional forms of empiricism. In this paper the significance of Reichenbach’s solution to the problem of induction will be made clearer via a comparison of these two historically important views about the problem of induction.
The modest but important contention made here is that the comparison of Reichenbach’s and Russell’s solutions calls attention to the opposition between extensional and intensional metaphysical presuppositions in the context of attempts to solve the problem of induction. It will be shown that, in effect, what Reichenbach does is establish an important epistemic limitation of extensional empiricism. So, it will be argued here that there is nothing really obscure about Reichenbach’s thoughts on induction at all. He was simply working out the limits of extensional empiricism with respect to inductive inference, in opposition to the sort of metaphysics favored by Russell and like-minded thinkers.
In this article the standard philosophical method involving intuition-driven conceptual analysis is challenged in a new way. This orthodox approach to philosophy takes analysanda to be the specifications of the content of concepts in the form of sets of necessary and sufficient conditions. Here it is argued that there is no adequate account of what necessary and sufficient conditions are. So, the targets of applications of the standard philosophical method so understood are not sufficiently well understood for this method to be dependable.
This book is a sustained defense of the compatibility of the presence of idealizations in the sciences and scientific realism. So, the book is essentially a detailed response to the infamous arguments raised by Nancy Cartwright to the effect that idealization and scientific realism are incompatible.
It is an under-appreciated fact that Quine's rejection of the analytic/synthetic distinction, when coupled with some other plausible and related views, implies that there are serious difficulties in demarcating empirical theories from pure mathematical theories within the Quinean framework. This is a serious problem because there seems to be a principled difference between the two disciplines that apparently cannot be captured in the orthodox Quinean framework. For the purposes of simplicity, let us call this Quine's problem of demarcation. In this paper this problem will be articulated and it will be shown that the typical sorts of responses to it are all unworkable within the Quinean framework. It will then be shown that the lack of resources to solve this problem within the Quinean framework implies that Quine’s version of the indispensability argument cannot get off the ground, for it presupposes the possibility of making such a distinction.
Paradoxes have played an important role both in philosophy and in mathematics, and paradox resolution is an important topic in both fields. Paradox resolution is deeply important because if such resolution cannot be achieved, we are threatened with the charge of debilitating irrationality. This is supposed to be the case for the following reason. Paradoxes consist of jointly contradictory sets of statements that are individually plausible or believable. These facts about paradoxes then give rise to a deeply troubling epistemic problem. Specifically, if one believes all of the constitutive propositions that make up a paradox, then one is apparently committed to belief in every proposition. This is the result of the principle of classical logic known as ex contradictione (sequitur) quodlibet, according to which anything and everything follows from a contradiction, and the plausible idea that belief is closed under logical or material implication (i.e. the epistemic closure principle). But it is manifestly and profoundly irrational to believe every proposition, and so the presence of even one contradiction in one’s doxa appears to result in what seems to be total irrationality. This is the problem of paradox-induced explosion. In this paper it will be argued that in many cases this problem can plausibly be avoided in a purely epistemic manner, without having either to resort to non-classical logics for belief (e.g. paraconsistent logics) or to deny the standard closure principle for beliefs. The manner in which this result can be achieved depends on drawing an important distinction between the propositional attitude of belief and the weaker attitude of acceptance, such that paradox-constituting propositions are accepted but not believed.
Paradox-induced explosion is then avoided by noting that while belief may well be closed under material implication or even under logical implication, these sorts of weaker commitments are not subject to closure principles of those sorts. So, this possibility provides us with a less radical way to deal with the existence of paradoxes and it preserves the idea that intelligent agents can actually entertain paradoxes.
In this chapter we consider three philosophical perspectives (including those of Stalnaker and Lewis) on the question of whether and how the principle of conditional excluded middle should figure in the logic and semantics of counterfactuals. We articulate and defend a third view that is patterned after belief revision theories offered in other areas of logic and philosophy. Unlike Lewis’ view, the belief revision perspective does not reject conditional excluded middle, and unlike Stalnaker’s, it does not embrace supervaluationism. We adduce both theoretical and empirical considerations to argue that the belief revision perspective should be preferred to its alternatives. The empirical considerations are drawn from the results of four empirical studies (which we report below) of non-experts’ judgments about counterfactuals and conditional excluded middle.
Imre Lakatos' views on the philosophy of mathematics are important and have often been underappreciated. The most obvious lacuna in this respect is the lack of detailed discussion and analysis of his 1976a paper and its implications for the methodology of mathematics, particularly with respect to argumentation and the matter of how truths are established in mathematics. The most important themes that run through his work on the philosophy of mathematics, and which culminate in the 1976a paper, are (1) the (quasi-)empirical character of mathematics and (2) the rejection of axiomatic deductivism as the basis of mathematical knowledge. In this paper Lakatos' later views on the quasi-empirical nature of mathematical theories and methodology are examined, and specific attention is paid to what this view implies about the nature of mathematical argumentation and its relation to the empirical sciences.
Stalnaker argued that conditional excluded middle should be included among the principles that govern counterfactuals, on the basis that intuitions support that principle. This is because there are pairs of competing counterfactuals that appear to be equally acceptable. In doing so, he was forced to introduce semantic vagueness into his system of counterfactuals. In this paper it is argued that there is a simpler and purely epistemic explanation of these cases that avoids the need to introduce semantic vagueness into the semantics for counterfactuals.
Initial responses to questionnaires used to assess participants' understanding of informed consent for malaria vaccine trials conducted in the United States and Mali were tallied. Total scores were analyzed by age, sex, literacy (if known), and location. Ninety-two percent (92%) of answers by United States participants and 85% of answers by Malian participants were correct. Questions more likely to be answered incorrectly in Mali related to risk and to the type of vaccine. For adult participants, independent predictors of higher scores were younger age and female sex in the United States, and male sex in Mali. Scores in the United States were higher than in Mali (P = 0.005). Despite this difference, participants at both sites were well informed overall. Although interpretation must be qualified because questionnaires were not intended as research tools and were not standardized among sites, these results do not support concerns about systematically low understanding among research participants in developing versus developed countries.
The main question addressed in this paper is whether some false propositions can constitute evidence for the truth of other propositions. It is argued that there are good reasons to suspect that at least some false propositions can constitute evidence for the truth of certain other contingent propositions. The paper also introduces a novel condition concerning propositions that constitute evidence, one that explains a ubiquitous evidential practice, and it contains a defense of a particular condition concerning the possession of evidence. The core position adopted here, then, is that false propositions that are approximately true reports of measurements can constitute evidence for the truth of other propositions. So, it will be argued that evidence is only quasi-factive in this very specific sense.
This paper has three interdependent aims. The first is to make Reichenbach’s views on induction and probabilities clearer, especially as they pertain to his pragmatic justification of induction. The second aim is to show how his view of pragmatic justification arises out of his commitment to extensional empiricism and moots the possibility of a non-pragmatic justification of induction. Finally, and most importantly, a formal decision-theoretic account of Reichenbach’s pragmatic justification is offered in terms both of the minimax principle and the dominance principle.
This paper introduces a new argument for the safety condition on knowledge. It is based on the contention that the rejection of safety entails the rejection of the factivity condition on knowledge. But since we should maintain factivity, we should endorse safety.
This paper introduces a new argument against Richard Foley’s threshold view of belief. His view is based on the Lockean Thesis (LT) and the Rational Threshold Thesis (RTT). The argument introduced here shows that the views derived from the LT and the RTT violate the safety condition on knowledge in a way that threatens the LT and/or the RTT.
In a recent revision (chapter 4 of Nowakowa and Nowak 2000) of an older article, Leszek Nowak (1992) has attempted to rebut Niiniluoto’s 1990 critical suggestion that proponents of the Poznań idealizational approach to the sciences have committed a rather elementary logical error in the formal machinery that they advocate for use in the analysis of scientific methodology. In this paper I criticize Nowak’s responses to Niiniluoto’s suggestion, and, subsequently, work out some of the consequences of that criticism for understanding the role that idealization plays in scientific methodology.
In a recent article, Peter Gärdenfors (1992) has suggested that the AGM (Alchourrón, Gärdenfors, and Makinson) theory of belief revision can be given an epistemic basis by interpreting the revision postulates of that theory in terms of a version of the coherence theory of justification. To accomplish this goal Gärdenfors suggests that the AGM revision postulates concerning the conservative nature of belief revision can be interpreted in terms of a concept of epistemic entrenchment, and that there are good empirical reasons to adopt this view as opposed to some form of foundationalist account of the justification of our beliefs. In this paper I argue that Gärdenfors’ attempt to underwrite the AGM theory of belief revision by appealing to a form of coherentism is seriously inadequate for several reasons.
Recently a number of variously motivated epistemologists have argued that knowledge is closely tied to practical matters. On the one hand, radical pragmatic encroachment is the view that facts about whether an agent has knowledge depend on practical factors, and this is coupled to the view that there is an important connection between knowledge and action. On the other hand, one can argue only for the less radical thesis that there is an important connection between knowledge and practical reasoning. Defenders of both of these views endorse the view that knowledge is the norm of practical reasoning. This thesis has recently come under heavy fire, and a number of weaker proposals have been defended. In this paper counter-examples to the knowledge norm of reasoning will be presented, and it will be argued that this view, and a number of related but weaker views, cannot be sustained in the face of these counter-examples. The paper concludes with a novel proposal concerning the norm of practical reasoning that is immune to the counter-examples introduced here.
Following Nancy Cartwright and others, I suggest that most (if not all) theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then the posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of the probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. Insofar as these proposed solutions fail, I argue that Bayesians must either give up Bayesianism or accept the counterintuitive view that no theory that incorporates any idealization has ever really been confirmed to any extent whatsoever. Moreover, as the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.
Recently Timothy Williamson (2007) has argued that characterizations of the standard (i.e. intuition-based) philosophical practice of philosophical analysis are misguided because of the erroneous manner in which this practice has been understood. In doing so he implies that experimental critiques of the reliability of intuition are based on this misunderstanding of philosophical methodology and so have little or no bearing on actual philosophical practice or results. His main point is that the orthodox understanding of philosophical methodology is incorrect in that it treats philosophical thought experiments in such a way that they can be “filled in” in various ways that undermine their use as counter-examples, and that, given the possibility of such filling in, intuition plays no substantial role in philosophical practice when that methodology is properly understood. In this paper Williamson’s claim that philosophical thought experiment cases can legitimately be filled in in this way will be challenged, and it will be shown that the experimental critique of intuition-based methods still raises a serious issue.
In the preface paradox the posited author is supposed to know both that every sentence in a book is true and that not every sentence in that book is true. But this result is contradictory. The paradoxicality exhibited in such cases arises chiefly out of the recognition that large-scale and difficult tasks, like verifying the truth of large sets of sentences, typically involve errors even given our best efforts to be epistemically diligent. This paper introduces an argument designed to resolve the preface paradox, so understood, by appeal to the safety condition on knowledge.
This paper shows that any view of future contingent claims that treats such claims as having indeterminate truth values or as simply being false implies probabilistic irrationality. This is because such views of the future imply violations of reflection, special reflection and conditionalization.
This paper presents a case for the claim that the infamous miners paradox is not a paradox. This contention is based on some important observations about the nature of ignorance with respect to both disjunctions and conditional obligations and their modal features. The gist of the argument is that given the uncertainty about the location of the miners in the story and the nature of obligations, the apparent obligation to block either mine shaft is cancelled.
Defenders of doxastic voluntarism accept that we can voluntarily commit ourselves to propositions, including belief-contravening propositions. Thus, defenders of doxastic voluntarism allow that we can choose to believe propositions that are negatively implicated by our evidence. In this paper it is argued that the conjunction of epistemic deontology (ED) and doxastic voluntarism (DV), as it applies to ordinary cases of belief-contravening propositional commitments, is incompatible with evidentialism. Here ED and DV will be assumed, and this negative result will be used to suggest that voluntary belief-contravening commitments are not themselves beliefs and that these sorts of commitments are not governed by evidentialism. So, the apparent incompatibility of the package of views noted above can be resolved without ceding evidentialism with respect to beliefs.
In this paper we argue that dissociative identity disorder (DID) is best interpreted as a causal model of a (possible) post-traumatic psychological process, that is, as a mechanical model of an abnormal psychological condition. From this perspective we examine and criticize the evidential status of DID, and we demonstrate that there is really no good reason to believe that anyone has ever suffered from DID so understood. This is so because the proponents of DID violate basic methodological principles of good causal modeling. "When every ounce of your concentration is fixed upon blasting a winged pig out of the sky, you do not question its species' ontological status." James Morrow, City of Truth (1990).
In this paper it is argued that the conjunction of linguistic ersatzism, the ontologically deflationary view that possible worlds are maximal and consistent sets of sentences, and possible world semantics, the view that the meaning of a sentence is the set of possible worlds at which it is true, implies that no actual speaker can effectively use virtually any language to successfully communicate information. This result is based on complexity issues that relate to our finite computational ability to deal with large bodies of information, together with a strong but well-motivated assumption, to which ersatzers seem implicitly committed, about the cognitive accessibility of the meanings of sentences. It follows that linguistic ersatzism, possible world semantics, or both must be rejected.
Following the standard practice in sociology, cultural anthropology and history, sociologists, historians of science and some philosophers of science define scientific communities as groups with shared beliefs, values and practices. In this paper it is argued that in real cases the beliefs of the members of such communities often vary significantly in important ways. This has rather dire implications for the convergence defense against the charge of the excessive subjectivity of subjective Bayesianism, because that defense requires that communities of Bayesian inquirers share a significant set of modal beliefs. The important implication is then that given the actual variation in modal beliefs across individuals, either Bayesians cannot claim that actual theories have been objectively confirmed or they must accept that such theories have been confirmed relative only to epistemically insignificant communities.
The development of possible worlds semantics (PWS) for modal claims has led to a more general application of that theory as a complete semantics for various formal and natural languages, and this view is widely held to be an adequate (philosophical) interpretation of the model theory for such languages. We argue here that this view generates a self-referential inconsistency that indicates either the falsity or the incompleteness of PWS.
Some recent work by philosophers of mathematics has been aimed at showing that our knowledge of the existence of at least some mathematical objects and/or sets can be epistemically grounded by appealing to perceptual experience. The sensory capacity that they refer to in doing so is the ability to perceive numbers, mathematical properties and/or sets. The chief defense of this view as it applies to the perception of sets is found in Penelope Maddy’s Realism in Mathematics, but a number of other philosophers have made similar, if simpler, appeals of this sort. For example, Jaegwon Kim, John Bigelow, and John Bigelow and Robert Pargetter have all defended such views. The main critical issue raised here concerns the coherence of the notions of set perception and mathematical perception, and whether appeals to such perceptual faculties can really provide any justification for or explanation of belief in the existence of sets, mathematical properties and/or numbers.
In this paper Timothy Williamson’s argument that the knowledge norm of assertion is the best explanation of the unassertability of Moorean sentences is challenged, and an alternative account of the norm of assertion is defended.
In this paper I argue that Tyler Burge's non-reductive view of testimonial knowledge cannot adequately discriminate between fallacious ad verecundiam appeals to expert testimony and legitimate appeals to authority.
Theorists in various scientific disciplines offer radically different accounts of the origin of violent behavior in humans, but it is not clear how the study of violence is to be scientifically grounded. This problem is made more complicated because what sorts of acts constitute violence and what needs to be appealed to in explaining violence both differ among social scientists, biologists, anthropologists and neurophysiologists, and this generates serious problems with respect to even attempting to ascertain the differential bona fides of these various explanatory programs. As a consequence, there is little theoretical reason to suspect that efforts to prevent violence will have any appreciable effect. In this paper we investigate the general issue of whether any of the general theoretical approaches to violent behavior can reasonably be taken to be the best approach to the explanation of seriously violent behavior. Our more specific aim is to examine the controversial explanation of violent behavior offered by Lonnie Athens in order to ascertain whether it can seriously be considered the best explanation of violent behavior.
In his 1993 article George Bealer offers three separate arguments that are directed against the internal coherence of empiricism, specifically against Quine’s version of empiricism. One of these arguments is the starting points argument (SPA) and it is supposed to show that Quinean empiricism is incoherent. We argue here that this argument is deeply flawed, and we demonstrate how a Quinean may successfully defend his views against Bealer’s SPA. Our defense of Quinean empiricism against the SPA depends on showing (1) that Bealer is, in an important sense, a foundationalist, and (2) that Quine is, in an important sense, a coherentist. Having established these two contentions we show that Bealer’s SPA begs the question against Quinean empiricists.