This paper contains a critical examination of the prospects for analyses of knowledge that weaken the factivity condition such that knowledge implies approximate truth.
Van Fraassen’s (1989) infamous best of a bad lot objection is widely taken to be the most serious problem that afflicts theories of inference to the best explanation (IBE), for it purports to show that we should not accept the conclusion of any case of such reasoning as it actually proceeds. Moreover, this is supposed to hold irrespective of the details of the particular criteria used to select best explanations. The best of a bad lot objection is predicated on, and really only requires, the idea that in any real case of IBE where one hypothesis is favored as best over those with which it competes, it is always more likely that the true explanation is to be found in the set of unformulated and unconsidered logical alternatives to the set of actually considered hypotheses. On this basis, van Fraassen believes that accepting the conclusion of IBEs so understood is irrational, simply because such inferences are supposedly not probative. In this paper the best of a bad lot objection will be addressed and it will be shown that van Fraassen’s notorious criticism of IBE depends on a problematic conflation of two notions of rationality and thus that his criticism of IBE involves a damning equivocation. In essence, he conflates ideal standards of rationality with epistemic standards of rationality and, in so doing, makes it appear that we should not accept the conclusions of IBEs. But, when we disambiguate the concepts of rationality at work in the argument, van Fraassen’s conclusion simply does not follow.
In this paper significant challenges are raised with respect to the view that explanation essentially involves unification. These objections are raised specifically with respect to the well-known versions of unificationism developed and defended by Michael Friedman and Philip Kitcher. The objections involve the explanatory regress argument and the concepts of reduction and scientific understanding. Essentially, the contention made here is that these versions of unificationism wrongly assume that reduction secures understanding.
This book is a sustained defense of the compatibility of idealization in the sciences with scientific realism. As such, the book is essentially a detailed response to the infamous arguments raised by Nancy Cartwright to the effect that idealization and scientific realism are incompatible.
This paper shows how the availability heuristic can be used to justify inference to the best explanation in such a way that van Fraassen's infamous "best of a bad lot" objection can be adroitly avoided. With this end in mind, a dynamic and contextual version of the erotetic model of explanation sufficient to ground this response is presented and defended.
The main question addressed in this paper is whether some false propositions can constitute evidence for the truth of other propositions. It is argued that there are good reasons to suspect that at least some false propositions can constitute evidence for the truth of certain other contingent propositions. The paper also introduces a novel condition concerning propositions that constitute evidence that explains a ubiquitous evidential practice, and it contains a defense of a particular condition concerning the possession of evidence. The core position adopted here, then, is that false propositions that are approximately true reports of measurements can constitute evidence for the truth of other propositions. So, it will be argued that evidence is only quasi-factive in this very specific sense.
This paper critically explores Timothy Williamson’s view of evidence, and it does so in light of the problem of epistemic luck. Williamson’s view of evidence is, of course, a crucially important aspect of his novel and influential “knowledge-first” epistemological project. Notoriously, one crucial thesis of this project is that one’s evidence is equivalent to what one knows. This has come to be known as the E = K thesis. This paper specifically addresses Williamson’s knowledge-first epistemology and the E = K thesis in the context of anti-luck epistemology and the idea that knowledge is factive. Williamson’s views on these matters are worth investigating in some detail because he subscribes to a well-worked out anti-luck view of knowledge that incorporates what is perhaps the most common anti-luck condition. But this paper is also of more general importance because the critique of Williamson’s views on these matters reveals some important things about the nature of evidence and evidence is one of the most fundamental concepts in epistemology.
This paper addresses the recent rise of the use of alternative medicine in Western countries and it offers a novel explanation of that phenomenon in terms of cognitive and economic factors related to the free-market and patient-centric approach to medicine that is currently in place in those countries, in contrast to some alternative explanations of this phenomenon. Moreover, the paper addresses this troubling trend in terms of the serious harms associated with the use of alternative medical modalities. The explanatory theory defended here is then predicated on the idea that an extreme patient-centric model of medical practice that treats largely ignorant patients as consumers of medical products and services endowed with an essentially unrestricted power of freedom to choose treatments predictably leads to serious and avoidable harms. Some important moral and epistemological consequences of this model are then articulated and corrective measures are suggested.
This paper introduces a new argument for the safety condition on knowledge. It is based on the contention that the rejection of safety entails the rejection of the factivity condition on knowledge. But, since we should maintain factivity, we should endorse safety.
This paper introduces two new paradoxes for standard deontic logic (SDL). They are importantly related to, but distinct from, Ross' paradox. These two new paradoxes for SDL are the simple weakening paradox and the complex weakening paradox. Both of these paradoxes arise in virtue of the underlying logic of SDL and are consequences of the fact that SDL incorporates the principle known as weakening. These two paradoxes then show that SDL has counter-intuitive implications related to disjunctive obligations that arise in virtue of deontic weakening and in virtue of decisions concerning how to discharge such disjunctive obligations. The main result here is then that theorem T1 is a problematic component of SDL that needs to be addressed.
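For orientation, the weakening principle at issue can be given a standard textbook statement (the notation below is illustrative and is not drawn from the paper itself):

\[
\frac{\vdash p \rightarrow q}{\vdash O(p) \rightarrow O(q)}
\qquad \text{and so, in particular,} \qquad
O(p) \vdash O(p \lor q)
\]

On this principle an obligation to mail the letter yields an obligation to mail the letter or burn it, which is the familiar flavor of counter-intuitive result behind Ross' paradox and, as described above, behind the two new weakening paradoxes.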
Hans Reichenbach’s pragmatic treatment of the problem of induction in his later works on inductive inference was, and still is, of great interest. However, it has been dismissed as a pseudo-solution and it has been regarded as problematically obscure. This is, in large part, due to the difficulty in understanding exactly what Reichenbach’s solution is supposed to amount to, especially as it appears to offer no response to the inductive skeptic. For entirely different reasons, the significance of Bertrand Russell’s classic attempt to solve Hume’s problem is also both obscure and controversial. Russell accepted that Hume’s reasoning about induction was basically correct, but he argued that given the centrality of induction in our cognitive endeavors something must be wrong with Hume’s basic assumptions. What Russell effectively identified as Hume’s (and Reichenbach’s) failure was the commitment to a purely extensional empiricism. So, Russell’s solution to the problem of induction was to give up extensional empiricism and to accept that induction is grounded by accepting both a robust essentialism and a form of rationalism that allowed for a priori knowledge of universals. Neither of these solutions is without its critics. On the one hand, Reichenbach’s solution faces the charges of obscurity and of offering no response to the inductive skeptic. On the other hand, Russell’s solution looks to be objectionably ad hoc absent some non-controversial and independent argument that the universals that are necessary to ground the uniformity of nature actually exist and are knowable. This particular charge is especially likely to arise from those inclined towards purely extensional forms of empiricism. In this paper the significance of Reichenbach’s solution to the problem of induction will be made clearer via the comparison of these two historically important views about the problem of induction. The modest but important contention that will be made here is that the comparison of Reichenbach’s and Russell’s solutions calls attention to the opposition between extensional and intensional metaphysical presuppositions in the context of attempts to solve the problem of induction. It will be shown that, in effect, what Reichenbach does is to establish an important epistemic limitation of extensional empiricism. So, it will be argued here that there is nothing really obscure about Reichenbach’s thoughts on induction at all. He was simply working out the limits of extensional empiricism with respect to inductive inference in opposition to the sort of metaphysics favored by Russell and like-minded thinkers.
Recently a number of variously motivated epistemologists have argued that knowledge is closely tied to practical matters. On the one hand, radical pragmatic encroachment is the view that facts about whether an agent has knowledge depend on practical factors, and this is coupled to the view that there is an important connection between knowledge and action. On the other hand, one can argue for the less radical thesis only that there is an important connection between knowledge and practical reasoning. So, defenders of both of these views endorse the view that knowledge is the norm of practical reasoning. This thesis has recently come under heavy fire and a number of weaker proposals have been defended. In this paper counter-examples to the knowledge norm of reasoning will be presented and it will be argued that this view, and a number of related but weaker views, cannot be sustained in the face of these counter-examples. The paper concludes with a novel proposal concerning the norm of practical reasoning that is immune to the counter-examples introduced here.
This paper introduces a model for evidence denial that explains this behavior as a manifestation of rationality, and it is based on the contention that social values (measurable as utilities) often underwrite these sorts of responses. Moreover, it is contended that the value associated with group membership in particular can override epistemic reason when the expected utility of a belief or belief system is great. However, it also appears to be possible for such unreasonable believers to reverse this sort of dogmatism and to change their beliefs in a way that is epistemically rational. The conjecture made here is that we should expect this to happen only when the expected utility of the beliefs in question dips below a threshold where the utility value of continued dogmatism and the associated group membership is no longer sufficient to motivate defusing the counter-evidence that tells against such epistemically irrational beliefs.
It is wholly uncontroversial that measurements (or, more properly, propositions that are measurement reports) are often paradigmatically good cases of propositions that serve the function of evidence. In normal cases it is also obvious that stating such a report is an utterly pedestrian case of successful assertion. So, for example, there is nothing controversial about the following claims: (1) that a proposition to the effect that a particular thermometer reads 104°F when properly used to determine the temperature of a particular patient is evidence that the patient in question has a fever and (2) that there is nothing wrong with asserting the proposition that a particular thermometer reads 104°F for appropriate reasons of communication, etc. when the thermometer has been properly used to determine the temperature of a particular patient. Here it will be shown that Timothy Williamson’s commitments to a number of principles about knowledge and assertion imply that a whole class of utterly ordinary statements like these that are used as evidence are not really evidence, because they are not knowledge, and so are (perversely) unassertable according to his principled commitments. This paper deals primarily with the second of these two problems and an alternative account of the norms of assertion is introduced which allows for the assertability of such measurement reports.
In this chapter we consider three philosophical perspectives (including those of Stalnaker and Lewis) on the question of whether and how the principle of conditional excluded middle should figure in the logic and semantics of counterfactuals. We articulate and defend a third view that is patterned after belief revision theories offered in other areas of logic and philosophy. Unlike Lewis’ view, the belief revision perspective does not reject conditional excluded middle, and unlike Stalnaker’s, it does not embrace supervaluationism. We adduce both theoretical and empirical considerations to argue that the belief revision perspective should be preferred to its alternatives. The empirical considerations are drawn from the results of four empirical studies (which we report below) of non-experts’ judgments about counterfactuals and conditional excluded middle.
Paradoxes have played an important role both in philosophy and in mathematics and paradox resolution is an important topic in both fields. Paradox resolution is deeply important because if such resolution cannot be achieved, we are threatened with the charge of debilitating irrationality. This is supposed to be the case for the following reason. Paradoxes consist of jointly contradictory sets of statements that are individually plausible or believable. These facts about paradoxes then give rise to a deeply troubling epistemic problem. Specifically, if one believes all of the constitutive propositions that make up a paradox, then one is apparently committed to belief in every proposition. This is the result of the principle of classical logic known as ex contradictione (sequitur) quodlibet, the principle that anything and everything follows from a contradiction, and the plausible idea that belief is closed under logical or material implication (i.e. the epistemic closure principle). But, it is manifestly and profoundly irrational to believe every proposition and so the presence of even one contradiction in one’s doxa appears to result in what seems to be total irrationality. This problem is the problem of paradox-induced explosion. In this paper it will be argued that in many cases this problem can plausibly be avoided in a purely epistemic manner, without having either to resort to non-classical logics for belief (e.g. paraconsistent logics) or to deny the standard closure principle for beliefs. The manner in which this result can be achieved depends on drawing an important distinction between the propositional attitude of belief and the weaker attitude of acceptance, such that paradox-constituting propositions are accepted but not believed. Paradox-induced explosion is then avoided by noting that while belief may well be closed under material implication or even under logical implication, these sorts of weaker commitments are not subject to closure principles of those sorts. So, this possibility provides us with a less radical way to deal with the existence of paradoxes and it preserves the idea that intelligent agents can actually entertain paradoxes.
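To make the explosion worry concrete, here is the familiar classical derivation of an arbitrary proposition q from a contradiction (a standard textbook sketch, not the paper's own presentation):

\[
\begin{array}{lll}
1. & p \land \lnot p & \text{(the paradox-constituting contradiction)}\\
2. & p & \text{(from 1, conjunction elimination)}\\
3. & p \lor q & \text{(from 2, disjunction introduction)}\\
4. & \lnot p & \text{(from 1, conjunction elimination)}\\
5. & q & \text{(from 3 and 4, disjunctive syllogism)}
\end{array}
\]

If belief were closed under this sort of implication, believing the members of a paradox would commit one to believing everything; the acceptance/belief distinction described above is meant to block exactly this commitment.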
In this paper I argue that Michael Friedman's conception of the constitutive a priori faces two serious problems. These two problems show that the view collapses into a form of conventionalism.
Following Nancy Cartwright and others, I suggest that most (if not all) theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then the posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. In so far as these proposed solutions fail, I argue that Bayesians must give up Bayesianism or accept the counterintuitive view that no theories that incorporate any idealizations have ever really been confirmed to any extent whatsoever. Moreover, as it appears that the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.
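The worry about undefined posteriors can be made vivid with the standard form of Bayes' theorem, stated here only for orientation (T stands for a theory and E for evidence; the symbols are mine):

\[
P(T \mid E) = \frac{P(E \mid T)\,P(T)}{P(E)}
\]

If T has counterfactual form and no prior probability P(T) can be coherently assigned, then the right-hand side, and hence the degree to which E confirms T, is undefined.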
Searle’s Chinese Room Argument (CRA) has been the object of great interest in the philosophy of mind, artificial intelligence and cognitive science since its initial presentation in ‘Minds, Brains and Programs’ in 1980. It is by no means an overstatement to assert that it has been a main focus of attention for philosophers and computer scientists of many stripes. It is then especially interesting to note that relatively little has been said about the detailed logic of the argument, whatever significance Searle intended the CRA to have. The problem with the CRA is that it involves a very strong modal claim, the truth of which is both unproved and highly questionable. So it will be argued here that the CRA does not prove what it was intended to prove.
This paper challenges Williamson's "E = K" thesis on the basis of evidential practice. The main point is that most evidence is only approximately true and so cannot be known if knowledge is factive.
Defenders of doxastic voluntarism accept that we can voluntarily commit ourselves to propositions, including belief-contravening propositions. Thus, defenders of doxastic voluntarism allow that we can choose to believe propositions that are negatively implicated by our evidence. In this paper it is argued that the conjunction of epistemic deontology (ED) and doxastic voluntarism (DV), as it applies to ordinary cases of belief-contravening propositional commitments, is incompatible with evidentialism. ED and DV will be assumed, and this negative result will be used to suggest that voluntary belief-contravening commitments are not themselves beliefs and that these sorts of commitments are not governed by evidentialism. So, the apparent incompatibility of the package of views noted above can be resolved without ceding evidentialism with respect to beliefs.
In this paper Timothy Williamson’s argument that the knowledge norm of assertion is the best explanation of the unassertability of Moorean sentences is challenged and an alternative account of the norm of assertion is defended.
This paper has three interdependent aims. The first is to make Reichenbach’s views on induction and probabilities clearer, especially as they pertain to his pragmatic justification of induction. The second aim is to show how his view of pragmatic justification arises out of his commitment to extensional empiricism and moots the possibility of a non-pragmatic justification of induction. Finally, and most importantly, a formal decision-theoretic account of Reichenbach’s pragmatic justification is offered in terms both of the minimax principle and the dominance principle.
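For reference, generic statements of the two decision rules mentioned above run roughly as follows (these formulations are standard and are not quoted from the paper; L is a loss function, u a utility function, and s ranges over states of the world):

\[
\begin{aligned}
&\text{Minimax: choose } a^{*} = \arg\min_{a}\,\max_{s}\,L(a,s),\\
&\text{Dominance: if } u(a,s) \ge u(b,s) \text{ for all } s, \text{ with strict inequality for some } s, \text{ then prefer } a \text{ to } b.
\end{aligned}
\]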
In this paper a parallel is drawn between the problem of epistemic access to abstract objects in mathematics and the problem of epistemic access to idealized systems in the physical sciences. On this basis it is argued that some recent and more traditional approaches to solving these problems are problematic.
In this paper it is argued that three of the most prominent theories of conditional acceptance face very serious problems. David Lewis' concept of imaging, the Ramsey test, and Jonathan Bennett's recent hybrid view each either face a vicious regress, employ unanalyzed components, or depend upon an implausibly strong version of doxastic voluntarism.
The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued, first, that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are incomplete. Second, it will be argued that when we attempt to incorporate the knowledge of such causal connections into Bayesian decision theory, a substantial technical problem arises for which there is no currently available solution that does not suffer from some damning objection or other. From a broader perspective, this then throws into question the use of decision theory as a model of human or machine planning.
In a recent revision (chapter 4 of Nowakowa and Nowak 2000) of an older article Leszek Nowak (1992) has attempted to rebut Niiniluoto’s 1990 critical suggestion that proponents of the Poznań idealizational approach to the sciences have committed a rather elementary logical error in the formal machinery that they advocate for use in the analysis of scientific methodology. In this paper I criticize Nowak’s responses to Niiniluoto’s suggestion, and, subsequently, work out some of the consequences of that criticism for understanding the role that idealization plays in scientific methodology.
In a recent article, Peter Gärdenfors (1992) has suggested that the AGM (Alchourrón, Gärdenfors, and Makinson) theory of belief revision can be given an epistemic basis by interpreting the revision postulates of that theory in terms of a version of the coherence theory of justification. To accomplish this goal Gärdenfors suggests that the AGM revision postulates concerning the conservative nature of belief revision can be interpreted in terms of a concept of epistemic entrenchment and that there are good empirical reasons to adopt this view as opposed to some form of foundationalist account of the justification of our beliefs. In this paper I argue that Gärdenfors’ attempt to underwrite the AGM theory of belief revision by appealing to a form of coherentism is seriously inadequate for several reasons.
Following Nancy Cartwright and others, I suggest that most theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. In so far as these proposed solutions fail, I argue that Bayesians must give up Bayesianism or accept the counterintuitive view that no theories that incorporate any idealizations have ever really been confirmed to any extent whatsoever. Moreover, as it appears that the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.
In the preface paradox the posited author is supposed to know both that every sentence in a book is true and that not every sentence in that book is true. But, this result is paradoxically contradictory. The paradoxicality exhibited in such cases arises chiefly out of the recognition that large-scale and difficult tasks like verifying the truth of large sets of sentences typically involve errors even given our best efforts to be epistemically diligent. This paper introduces an argument designed to resolve the preface paradox so understood by appeal to the safety condition on knowledge.
In this paper I argue that the best explanation of expertise about taste is that such alleged experts are simply more eloquent in describing the taste experiences that they have than are ordinary tasters.
This paper presents a corrected version of Pascal's wager that makes it consonant with modern decision theory. The corrected wager shows that not committing to God's existence is the rational choice.
Following the standard practice in sociology, cultural anthropology and history, sociologists, historians of science and some philosophers of science define scientific communities as groups with shared beliefs, values and practices. In this paper it is argued that in real cases the beliefs of the members of such communities often vary significantly in important ways. This has rather dire implications for the convergence defense against the charge of the excessive subjectivity of subjective Bayesianism because that defense requires that communities of Bayesian inquirers share a significant set of modal beliefs. The important implication is then that given the actual variation in modal beliefs across individuals, either Bayesians cannot claim that actual theories have been objectively confirmed or they must accept that such theories have been confirmed relative only to epistemically insignificant communities.
In this paper we argue that dissociative identity disorder (DID) is best interpreted as a causal model of a (possible) post-traumatic psychological process, as a mechanical model of an abnormal psychological condition. From this perspective we examine and criticize the evidential status of DID, and we demonstrate that there is really no good reason to believe that anyone has ever suffered from DID so understood. This is so because the proponents of DID violate basic methodological principles of good causal modeling. "When every ounce of your concentration is fixed upon blasting a winged pig out of the sky, you do not question its species' ontological status." (James Morrow, City of Truth, 1990).
This paper shows that any view of future contingent claims that treats such claims as having indeterminate truth values or as simply being false implies probabilistic irrationality. This is because such views of the future imply violations of reflection, special reflection and conditionalization.
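The probabilistic principles at issue are standardly stated as follows (generic formulations with symbols of my own choosing, where t' is a time later than t and E is the total new evidence):

\[
\text{Reflection:}\quad P_{t}\bigl(A \mid P_{t'}(A) = x\bigr) = x
\qquad\qquad
\text{Conditionalization:}\quad P_{\mathrm{new}}(A) = P_{\mathrm{old}}(A \mid E)
\]

The argument sketched above is that treating future contingents as indeterminate or as false forces credences that cannot jointly satisfy constraints of this kind.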
The development of possible worlds semantics (PWS) for modal claims has led to a more general application of that theory as a complete semantics for various formal and natural languages, and this view is widely held to be an adequate (philosophical) interpretation of the model theory for such languages. We argue here that this view generates a self-referential inconsistency that indicates either the falsity or the incompleteness of PWS.
Imre Lakatos' views on the philosophy of mathematics are important and they have often been underappreciated. The most obvious lacuna in this respect is the lack of detailed discussion and analysis of his 1976a paper and its implications for the methodology of mathematics, particularly its implications with respect to argumentation and the matter of how truths are established in mathematics. The most important themes that run through his work on the philosophy of mathematics and which culminate in the 1976a paper are (1) the (quasi-)empirical character of mathematics and (2) the rejection of axiomatic deductivism as the basis of mathematical knowledge. In this paper Lakatos' later views on the quasi-empirical nature of mathematical theories and methodology are examined and specific attention is paid to what this view implies about the nature of mathematical argumentation and its relation to the empirical sciences.
Experimental philosophy is one of the most controversial and potentially revolutionary areas of philosophical research today. X-phi, as it is known by many of its practitioners, questions many basic concepts regarding human intuitions—concepts which have guided centuries of modern philosophers. In their place, x-phi steers philosophical research back to scientific investigations in order to better understand human intuitions, using research techniques borrowed from current research in psychology and neuroscience. While scholars debate whether experimental philosophy signals a sea change or is merely a faddish detour, no existing book looks at the x-phi movement in reference to its methodology. In _The Experimental Turn and the Methods of Philosophy_, Michael J. Shaffer addresses this need, suggesting that the significance of experimental philosophy can best be assessed and understood in methodological terms. By comparing and contrasting traditional views of philosophical methodology with those of experimental philosophy, Shaffer traces the roots of the movement to Quinean naturalism and also demonstrates the deep, revolutionary significance of the experimental turn.
This paper contains an argument to the effect that possible worlds semantics renders semantic knowledge impossible, no matter what ontological interpretation is given to possible worlds. The essential contention made is that possible worlds semantic knowledge is unsafe and this is shown by a parallel with the preface paradox.
In this article the standard philosophical method involving intuition-driven conceptual analysis is challenged in a new way. This orthodox approach to philosophy takes analysanda to be the specifications of the content of concepts in the form of sets of necessary and sufficient conditions. Here it is argued that there is no adequate account of what necessary and sufficient conditions are. So, the targets of applications of the standard philosophical method so understood are not sufficiently well understood for this method to be dependable.
It is an under-appreciated fact that Quine's rejection of the analytic/synthetic distinction, when coupled with some other plausible and related views, implies that there are serious difficulties in demarcating empirical theories from pure mathematical theories within the Quinean framework. This is a serious problem because there seems to be a principled difference between the two disciplines that cannot apparently be captured in the orthodox Quinean framework. For the purpose of simplicity let us call this Quine's problem of demarcation. In this paper this problem will be articulated and it will be shown that the typical sorts of responses to this problem are all unworkable within the Quinean framework. It will then be shown that the lack of resources to solve this problem within the Quinean framework implies that Quine’s version of the indispensability argument cannot get off the ground, for it presupposes the possibility of making such a distinction.
This paper introduces a new argument against Richard Foley’s threshold view of belief. His view is based on the Lockean Thesis (LT) and the Rational Threshold Thesis (RTT). The argument introduced here shows that the views derived from the LT and the RTT violate the safety condition on knowledge in a way that threatens the LT and/or the RTT.
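As it is usually stated (this is the standard generic formulation, not a quotation from Foley or from the paper), the Lockean Thesis ties rational belief to a confidence threshold:

\[
\text{It is rational to believe } p \iff P(p) \ge t, \quad \text{for some threshold value } t \ \text{(commonly taken to satisfy } \tfrac{1}{2} < t < 1\text{)}
\]

The argument sketched above targets views built from this thesis in combination with the RTT.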
Stalnaker argued that conditional excluded middle should be included among the principles that govern counterfactuals on the basis that intuitions support that principle. This is because there are pairs of competing counterfactuals that appear to be equally acceptable. In doing so, he was forced to introduce semantic vagueness into his system of counterfactuals. In this paper it is argued that there is a simpler and purely epistemic explanation of these cases that avoids the need for introducing semantic vagueness into the semantics for counterfactuals.
This paper presents a case for the claim that the infamous miners paradox is not a paradox. This contention is based on some important observations about the nature of ignorance with respect to both disjunctions and conditional obligations and their modal features. The gist of the argument is that given the uncertainty about the location of the miners in the story and the nature of obligations, the apparent obligation to block either mine shaft is cancelled.
Recently Timothy Williamson (2007) has argued that characterizations of the standard (i.e. intuition-based) philosophical practice of philosophical analysis are misguided because of the erroneous manner in which this practice has been understood. In doing so he implies that experimental critiques of the reliability of intuition are based on this misunderstanding of philosophical methodology and so have little or no bearing on actual philosophical practice or results. His main point is that the orthodox understanding of philosophical methodology is incorrect in that it treats philosophical thought experiments in such a way that they can be "filled in" in various ways that undermine their use as counter-examples, and that intuition plays no substantial role in philosophical practice when we properly understand that methodology in light of the possibility of such filling in. In this paper Williamson’s claim that philosophical thought experiment cases can legitimately be filled in in this way will be challenged, and it will be shown that the experimental critique of intuition-based methods still raises a serious issue.