In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. This perspective is exploited here to address the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as is sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead to rejecting the concept of accuracy as non-operational, or to maintaining it as only qualitative, derive from an unclear distinction among three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but the conjoint need of error-based and uncertainty-based modeling emerges.
Modal intuitions are the primary source of modal knowledge but also of modal error. According to the theory of modal error in this paper, modal intuitions retain their evidential force in spite of their fallibility, and erroneous modal intuitions are in principle identifiable and eliminable by subjecting our intuitions to a priori dialectic. After an inventory of standard sources of modal error, two further sources are examined in detail. The first source - namely, the failure to distinguish between metaphysical possibility and various kinds of epistemic possibility - turns out to be comparatively easy to untangle and poses little threat to intuition-driven philosophical investigation. The second source is the local (i.e., temporary) misunderstanding of one's concepts (as opposed to outright Burgean misunderstanding). This pathology may be understood on analogy with a patient who is given a clean bill of health at his annual check-up, despite his having a cold at the time of the check-up: although the patient's health is locally (temporarily) disrupted, his overall health is sufficiently good to enable him to overcome the cold without external intervention. Even when our understanding of certain pivotal concepts has lapsed locally, our larger body of intuitions is sufficiently reliable to allow us, without intervention, to ferret out the modal errors resulting from this lapse of understanding by means of dialectic and/or a process of a priori reflection. This source of modal error, and our capacity to overcome it, has wide-ranging implications for philosophical method - including, in particular, its promise for disarming skepticism about the classical method of intuition-driven investigation itself. Indeed, it is shown that skeptical accounts of modal error (e.g., the accounts given by Hill, Levin, and several others) are ultimately self-defeating.
This paper evaluates an argument for the meta-philosophical conclusion that in order to produce a viable objection to a particular error theory, the objection must not be applicable to any error theory. The reason given for this conclusion is that error theories about some discourses are uncontroversial. But the examples given of uncontroversial error theories are not good ones, nor do there appear to be other examples available.
In this paper I defend what I call the argument from epistemic reasons against the moral error theory. I argue that the moral error theory entails that there are no epistemic reasons for belief and that this is bad news for the moral error theory since, if there are no epistemic reasons for belief, no one knows anything. If no one knows anything, then no one knows that there is thought when they are thinking, and no one knows that they do not know everything. And it could not be the case that we do not know that there is thought when we believe that there is thought and that we do not know that we do not know everything. I address several objections to the claim that the moral error theory entails that there are no epistemic reasons for belief. It might seem that arguing against the error theory on the grounds that it entails that no one knows anything is just providing a Moorean argument against the moral error theory. I show that even if my argument against the error theory is indeed a Moorean one, it avoids Streumer's, McPherson's and Olson's objections to previous Moorean arguments against the error theory and is a more powerful argument against the error theory than Moore's argument against external world skepticism is against external world skepticism.
This paper surveys contemporary accounts of error theory and fictionalism. It introduces these categories to those new to metaethics by beginning with moral nihilism, the view that nothing really is right or wrong. One main motivation is that the scientific worldview seems to have no place for rightness or wrongness. Within contemporary metaethics there is a family of theories that makes similar claims. These are the theories that are usually classified as forms of error theory or fictionalism, though there are different ways of accepting some form of the view that nothing is really right or wrong. A range of different ways of going on in the light of such a realization is also proposed. The resulting taxonomy of positions is quite complicated and sometimes surprising. One surprise will be that some positions plausibly classified as error theories or forms of fictionalism do not quite seem to be forms of nihilism.
In this paper we introduce a paradigm of experiment which, we believe, is of interest both in psychology and philosophy. In it, the subject wears an HMD (head-mounted display), and a camera is set up at the upper corner of the room in which the subject is located. As a result, the subject observes his own body through the HMD. We will mainly focus on the philosophical relevance of this experiment, especially to the thesis of so-called 'immunity to error through misidentification relative to the first-person pronoun'. We will argue that one experiment conducted in this setting, which we call the bodily illusion experiment, provides a counterexample to that thesis.
Philosophers should consider a hybrid meta-ethical theory that includes elements of both moral expressivism and moral error theory. Proponents of such an expressivist-error theory hold that all moral utterances are either expressions of attitudes or expressions of false beliefs. Such a hybrid theory has two advantages over pure expressivism, because hybrid theorists can offer a more plausible account of the moral utterances that seem to be used to express beliefs, and hybrid theorists can provide a simpler solution to the Frege-Geach problem. The hybrid theory has three advantages over pure error theory, because hybrid theorists can offer a more plausible account of the moral utterances that seem to be used to express attitudes, hybrid theorists can more easily explain moral motivation, and hybrid theorists can avoid the implausible claim that all moral discourse is radically mistaken. Accordingly, such a hybrid theory should be more attractive than pure expressivism or pure error theory to philosophers who are skeptical about moral facts and truth.
At least since Democritus, philosophers have been fond of the idea that material objects do not “really” have color. One such view is the error theory, according to which our ordinary judgments ascribing colors to objects are all erroneous, false; no object has any color at all. The error theorist proposes that everything that is so, including the fact that material objects appear to us to have color, can be explained without ever attributing color to objects—by appealing merely to, e.g., surface reflectance properties, the nature of light, the neurophysiology of perceivers, and so on. The appeal of the error theory stems in significant part from the prevalent thought that such explanations are strongly suggested by our present scientific conception of the world.
In this paper, I distinguish between two error theories of morality: one couched in terms of truth (ET1); the other in terms of justification (ET2). I then present two arguments: the Poisoned Presupposition Argument for ET1; and the Evolutionary Debunking Argument for ET2. I go on to show how assessing these arguments requires paying attention to empirical moral psychology, in particular, work on folk metaethics. After criticizing extant work, I suggest avenues for future research.
According to moral error theory, moral discourse is error-ridden. Establishing error theory requires establishing two claims. These are that moral discourse carries a non-negotiable commitment to there being a moral reality and that there is no such reality. This paper concerns the first claim, the so-called non-negotiable commitment claim. It starts by identifying the two existing argumentative strategies for settling that claim. The standard strategy is to argue for a relation of conceptual entailment between the moral statements that comprise moral discourse and the statement that there is a moral reality. The non-standard strategy is to argue for a presupposition relation instead. Error theorists have so far failed to consider a third strategy, which uses a general entailment relation that doesn’t require intricate relations between concepts. The paper argues that both entailment claims struggle to meet a new explanatory challenge and that, since the presupposition option doesn’t, we have prima facie reason to prefer it over the entailment options. The paper then argues that suitably amending the entailment claims enables them to meet this challenge. With all three options back on the table the paper closes by arguing that error theorists should consider developing the currently unrecognised, non-conceptual entailment claim.
Contemporary accounts of the self-ascription of experiences are wedded to two basic dogmas. The first is that self-ascription is immune to error through misidentification relative to the first person (IEM). The second dogma is that there is a distinction between awareness of oneself qua subject and awareness of oneself qua object (the SCS/SCO distinction). In this paper, I urge that these dogmas are groundless. First, I illustrate that claims about immunity to error through misidentification are usually based upon claims about awareness of oneself qua subject. Self-ascriptions are IEM, because self-ascriptions involve awareness of oneself qua subject. Following Sydney Shoemaker, philosophers appeal to Wittgenstein’s discussion of the I-as-subject to bolster this claim. I argue that this interpretation of Wittgenstein is actually a crossbreed of the views of Shoemaker and Wittgenstein, which I will call ‘Shoegenstein.’ I argue that Shoegenstein is not Wittgenstein. Apart from these historical considerations, I argue that if IEM is based on the SCS/SCO distinction, and there is no non-circular account of that distinction, then IEM is not based on anything. I suggest that we should understand self-consciousness as awareness of a subject as an object, which would mean that SCS and SCO are not exclusive. One consequence of disposing of these two dogmas is to allow for a positive naturalistic account of self-ascription. Another consequence is to present an approach to self-ascription that stresses the lived position of the subject, which I urge is friendly to Wittgenstein’s later account of the subject of self-ascription.
Contents
1. Introduction
2. Reward-Guided Decision Making
3. Content in the Model
4. How to Deflate a Metarepresentational Reading
   Proust and Carruthers on metacognitive feelings
5. A Deflationary Treatment of RPEs?
5.1 Dispensing with prediction errors
5.2 What is use of the RPE focused on?
5.3 Alternative explanations—worldly correlates
5.4 Contrast cases
6. Conclusion
Appendix: Temporal Difference Learning Algorithms
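Since the contents above turn on reward prediction errors (RPEs) and temporal-difference learning, a minimal sketch of the standard TD(0) update may orient readers; the function and values below are illustrative assumptions of mine, not code from the paper.

```python
import numpy as np

# Schematic TD(0) update. The reward prediction error (RPE) is the gap
# between the old value estimate and the reward plus the discounted new
# estimate. All names and numbers here are illustrative.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    rpe = r + gamma * V[s_next] - V[s]  # delta, the prediction error
    V[s] += alpha * rpe                 # nudge the estimate toward it
    return rpe

V = np.zeros(3)                                 # value estimates for 3 states
print(td0_update(V, s=0, r=1.0, s_next=1), V)   # RPE 1.0; V[0] moves to 0.1
```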
Let us say that the proposition that p is transparent just in case it is known that p, and it is known that it is known that p, and it is known that it is known that it is known that p, and so on, for any number of iterations of the knowledge operator ‘it is known that’. If there are transparent propositions at all, then the claim that any man with zero hairs is bald seems like a good candidate. We know that any man with zero hairs is bald. And it also does not seem completely implausible that we know that we know it, and that we know that we know that we know it, and so on.
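In symbols (a standard rendering, not quoted from the paper), writing K for the operator ‘it is known that’:

```latex
\[
  \mathrm{Transparent}(p) \;\iff\; K^{n}p \ \text{for every } n \ge 1,
  \qquad \text{where } K^{1}p := Kp \ \text{and} \ K^{n+1}p := K(K^{n}p).
\]
```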
Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of the probability distribution used. A mere .01% branching error rate about the STD (itself an error rate), and .01% branching error rate about that error rate, etc. (recursing all the way) results in explosive (and infinite) moments higher than 1. Missing any degree of regress leads to the underestimation of small probabilities and concave payoffs (a standard example of which is Fukushima). The paper states the conditions under which higher-order rates of uncertainty (expressed in spreads of counterfactuals) alter the shape of the final distribution and shows which a priori beliefs about counterfactuals are needed to accept the reliability of conventional probabilistic methods (thin tails or mildly fat tails).
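The mechanism is easy to simulate. In this sketch (mine, not the paper's; the ±a multiplicative perturbation and the parameter values are illustrative assumptions, far cruder than the paper's .01% rates), the scale of a Gaussian is itself uncertain at every level of the regress, and the kurtosis of the resulting draws grows with the depth of recursion:

```python
import numpy as np

rng = np.random.default_rng(0)

def recursive_sigma(base_sigma, a, depth, n):
    """Multiply the scale by (1 +/- a) at each recursion level: each
    level models an error rate about the previous level's error rate."""
    sigma = np.full(n, base_sigma)
    for _ in range(depth):
        sigma *= 1.0 + a * rng.choice([-1.0, 1.0], size=n)
    return sigma

def kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2  # 3.0 for a Gaussian

n = 200_000
for depth in (0, 5, 20, 50):
    draws = rng.normal(0.0, recursive_sigma(1.0, 0.1, depth, n))
    print(depth, round(kurtosis(draws), 2))  # kurtosis rises with depth
```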
Applying and extending principles that can help prevent consumer error, worker fault, managerial mistakes, and organizational blunders, Human Error: Causes and Control provides useful information on theories, methods, and specific techniques for controlling human error. It forms a how-to manual of good practice, focusing on identifying human error, its causes, and how to control or prevent it. It presents constructs that assist in optimizing human performance and achieving higher safety goals. Human Error: Causes and Control bridges the gap and illustrates the means for achieving a comprehensive, fully integrated, process-compatible, user-effective, methodologically sound model.
Medical error is a leading problem of health care in the United States. Each year, more patients die as a result of medical mistakes than are killed by motor vehicle accidents, breast cancer, or AIDS. While most government and regulatory efforts are directed toward reducing and preventing errors, the actions that should follow the injury or death of a patient are still hotly debated. According to Nancy Berlinger, conversations on patient safety are missing several important components: religious voices, traditions, and models. In After Harm, Berlinger draws on sources in theology, ethics, religion, and culture to create a practical and comprehensive approach to addressing the needs of patients, families, and clinicians affected by medical error. She emphasizes the importance of acknowledging fallibility, telling the truth, confronting feelings of guilt and shame, and providing just compensation. After Harm adds important human dimensions to an issue that has profound consequences for patients and health care providers.
Is there any place in the history of ideas for the imperfect character of human doings (i.e., the capacity for error), repeated for so long that we only lately begin to think it had long been wrong? The answer is: in the conventional histories of ideas there is almost none. The importance of the phenomenon, however, is immense. Intellectual history is full of errors. Scholarly errors are among the factors that generate intellectual pathways in which the consequences of small historical events feed back on each other positively and give rise to historical pathologies in the end. Pathways hold intellectuals dependent on the consequences of errors which interact with each other and prevent the resulting pathologies from disappearing fully. As a result, ideas do not converge to a level of perfection. An evolutionary account of errors suggests that errors in the history of ideas matter even though they are often corrected.
Many contemporary philosophers rate error theories poorly. We identify the arguments these philosophers invoke, and expose their deficiencies. We thereby show that the prospects for error theory have been systematically underestimated. By undermining general arguments against all error theories, we leave it open whether any more particular arguments against particular error theories are more successful. The merits of error theories need to be settled on a case-by-case basis: there is no good general argument against error theories.
Moral error theory of the kind defended by J. L. Mackie and Richard Joyce is premised on two claims: (1) that moral judgements essentially presuppose that moral value has absolute authority, and (2) that this presupposition is false, because nothing has absolute authority. This paper accepts (2) but rejects (1). It is argued first that (1) is not the best explanation of the evidence from moral practice, and second that even if it were, the error theory would still be mistaken, because the assumption does not contaminate the meaning or truth-conditions of moral claims. These are determined by the essential application conditions for moral concepts, which are relational rather than absolute. An analogy is drawn between moral judgements and motion judgements.
Mackie's argument for the Error Theory is described. Four ways of responding to Mackie's argument—the Instrumental Approach, the Universalization Approach, the Reasons Approach, and the Constitutivist Approach—are outlined and evaluated. It emerges that though the Constitutivist Approach offers the most promising response to Mackie's argument, it is difficult to say whether that response is adequate or not.
According to the error theory, normative judgements are beliefs that ascribe normative properties to objects, even though such properties do not exist. In this paper, I argue that we cannot fully believe the error theory, and that this means that there is no reason for us to fully believe this theory. It may be thought that this is a problem for the error theory, but I argue that it is not. Instead, I argue, our inability to fully believe the error theory undermines many objections that have been made to this theory.
The paper explores the consequences of adopting a moral error theory targeted at the notion of reasonable convergence. I examine the prospects of two ways of combining acceptance of such a theory with continued acceptance of moral judgements in some form. On the first model, moral judgements are accepted as a pragmatically intelligible fiction. On the second model, moral judgements are made relative to a framework of assumptions with no claim to reasonable convergence on their behalf. I argue that the latter model shows greater promise for an error theorist whose commitment to moral thought is initially serious.
The paper distinguishes three strategies by means of which empirical discoveries about the nature of morality can be used to undermine moral judgements. On the first strategy, moral judgements are shown to be unjustified in virtue of being shown to rest on ignorance or false belief. On the second strategy, moral judgements are shown to be false by being shown to entail claims inconsistent with the relevant empirical discoveries. On the third strategy, moral judgements are shown to be false in virtue of being shown to be unjustified, truth having been defined epistemologically in terms of justification. By interpreting three recent error theoretical arguments in light of these strategies, the paper evaluates the epistemological and metaphysical relevance of empirical discoveries about morality as a naturally evolved phenomenon.
Most content externalists concede that even if externalism is compatible with the thesis that one has authoritative self-knowledge of thought contents, it is incompatible with the stronger claim that one is always able to tell by introspection whether two of one’s thought tokens have the same, or different, content. If one lacks such authoritative discriminative self-knowledge of thought contents, it would seem that brute logical error – non-culpable logical error – is possible. Some philosophers, such as Paul Boghossian, have argued that this would present a big problem for externalism, forcing the externalist to overhaul our norms of rationality. I consider several externalist strategies to block this possibly unhappy epistemological consequence, but I argue that they all fail.
Ordinary moral thought often commits what social psychologists call 'the fundamental attribution error'. This is the error of ignoring situational factors and overconfidently assuming that distinctive behaviour or patterns of behaviour are due to an agent's distinctive character traits. In fact, there is no evidence that people have character traits (virtues, vices, etc.) in the relevant sense. Since attribution of character traits leads to much evil, we should try to educate ourselves and others to stop doing it.
A common first reaction to expressivist and quasi-realist theories is the thought that, if these theories are right, there's some objectionable sense in which we can't be wrong about morality. This worry turns out to be surprisingly difficult to make stick - an account of moral error as instability under improving changes provides the quasi-realist with the resources to explain many of our concerns about moral error. The story breaks down, though, in the case of fundamental moral error. This is where the initial worry finally sticks - quasi-realism tells me that I can't be fundamentally wrong about morality, though others can.
Some first person statements, such as ‘I am in pain’, are thought to be immune to error through misidentification (IEM): I cannot be wrong that I am in pain because—while I know that someone is in pain—I have mistaken that person for myself. While IEM is typically associated with the self-ascription of psychological properties, some philosophers attempt to draw anti-Cartesian conclusions from the claim that certain physical self-ascriptions are also IEM. In this paper, I will examine whether some physical self-ascriptions are in fact IEM, and—if they are—what role that fact is supposed to play in arguments for the anti-Cartesian claim that self-consciousness is consciousness of oneself as a material object. I will argue that if we accept the assumptions required to show that physical self-ascriptions are IEM, then IEM cannot play the role it needs to play in these anti-Cartesian arguments.
In his paper ‘The Error in the Error Theory’ [this journal, 2008], Stephen Finlay attempts to show that the moral error theorist has not only failed to prove his case, but that the error theory is in fact false. This paper rebuts Finlay's arguments, criticizes his positive theory, and clarifies the error-theoretic position.
Timothy Williamson has provided damaging counterexamples to Robert Nozick’s sensitivity principle. The examples are based on Williamson’s anti-luminosity arguments, and they show how knowledge requires a margin for error that appears to be incompatible with sensitivity. I explain how Nozick can rescue sensitivity from Williamson’s counterexamples by appeal to a specific conception of the methods by which an agent forms a belief. I also defend the proposed conception of methods against Williamson’s criticisms.
To hold an error theory about morality is to endorse a kind of radical moral skepticism—a skepticism analogous to atheism in the religious domain. The atheist thinks that religious utterances, such as “God loves you,” really are truth-evaluable assertions (as opposed to being veiled commands or expressions of hope, etc.), but that the world just doesn’t contain the items (e.g., God) necessary to render such assertions true. Similarly, the moral error theorist maintains that moral judgments are truth-evaluable assertions (thus contrasting with the noncognitivist), but that the world doesn’t contain the properties (e.g., moral goodness, evil, moral obligation) needed to render moral judgments true. In other words, moral discourse aims at the truth but systematically fails to secure it. If there is no such property as moral wrongness, for example, then no judgment of the form “X is morally wrong” will be true (where “X” denotes an actual action or state of affairs). Advocates of this position include Hinckfuss 1987; Joyce 2001; Mackie 1977 (see MACKIE, J. L.). Various forms of moral skepticism—some of which are arguably instances of the error theoretic stance—have been familiar to philosophers since ancient times. (See SKEPTICISM, MORAL.) Error theoretic views can be controversial—as in the case of religion and morality—or widely agreed upon—as in the case of ghosts and phlogiston. It is important to note that error theorists maintain that the judgments in question are erroneous not merely because of the absence of any objective moral facts sufficient to render them true, but also because of the absence of any non-objective moral facts sufficient to render them true. There is, for example, a kind of moral realist who maintains that moral properties are objective features of the universe (see REALISM, MORAL). There is also a family of metaethical views according to which moral properties are in some manner constituted by us—by our beliefs, attitudes, practices, etc.
Abū Hāmid al-Ghazālī (1058–1111 C.E.) is well known, among other things, for his account, in al-Munqidh min al-ḍalāl (Deliverance from Error), of a struggle with philosophical skepticism that bears a striking resemblance to that described by Descartes in the Meditations. This essay aims to give a close comparative analysis of these respective accounts, and will concentrate solely on the processes of invoking or entertaining doubt that al-Ghazālī and Descartes describe, respectively. In the process some subtle differences between them in this regard will be brought to light that are relevant to the comparative issue of the respective solutions at which they arrive. The latter issue will not be touched upon here, although the present discussion is intended as a prelude to a future treatment of that topic.
Epistemologists generally agree that the stringency of intuitive ascriptions of knowledge is increased when unrealized possibilities of error are mentioned. Non-sceptical invariantists (Williamson, Hawthorne) think it a mistake to yield in such cases to the temptation to be more stringent, but they do not deny that we feel it. They contend that the temptation is best explained as the product of a psychological bias known as the availability heuristic. I argue against the availability explanation, and sketch a rival account of what happens to us psychologically when possibilities of error are raised.
My paper has three parts. First I will outline the act/object theory of perceptual experience and its commitments to (a) a relational view of experience and (b) a view of phenomenal character according to which it is constituted by the character of the objects of experience. I present the traditional adverbial response to this, in which experience is not to be understood as a relation to some object, but as a way of sensing. In the second part I argue that acceptance of (a) is independent of acceptance of (b). I then present a modified adverbialism that presents experience as relational in nature but whose character is nevertheless to be explained in terms of the way in which one senses an object. Finally, I will offer an explanation of how a naïve realist about experience can adopt this modified adverbialism and in so doing accommodate the possibility of perceptual error.
Davidson’s error theory about metaphorical meaning has rightly commanded a lot of critical attention over the last twenty-five or so years. Each component of that theory – the case for antirealism about metaphorical meanings, the diagnosis of the mistakes that led theorists to falsely ascribe such semantic properties to words and sentences, the suggested functional replacement of such talk in terms of the effects that metaphorical utterances bring about – has been examined, reformulated and criticised. The evaluation of the theory has been far from uniformly negative. It is widely recognized, even by realists about metaphorical meaning, that the ‘conventional wisdom’ about ‘discerning two senses of the predicate term’ that Beardsley had adverted to three years earlier was shown to be misguided by the considerations that Davidson’s paper brought to bear. Contemporary recognition of the importance of elucidating the dependence of metaphorical language upon its literal base, and upon its context of utterance, can also be seen to have resulted from sustained critical engagement with Davidson’s article.
John Campbell (1999) has recently maintained that the phenomenon of thought insertion as it is manifested in schizophrenic patients should be described as a case in which the subject is introspectively aware of a certain thought and yet she is wrong in identifying whose thought it is. Hence, according to Campbell, the phenomenon of thought insertion might be taken as a counterexample to the view that introspection-based mental self-ascriptions are logically immune to error through misidentification (IEM, hereafter). Thus, if Campbell is right, it would not be true that when the subject makes a mental self-ascription on the basis of introspective awareness of a given mental state, there is no possible world in which she could be wrong as to whether it is really she who has that mental state. Notice the interesting interdisciplinary implications of Campbell’s project: on the one hand, a fairly precise notion elaborated in philosophy such as IEM (and the related notion of error through misidentification, EM hereafter) is used to describe a characteristic symptom of schizophrenia. On the other hand, such a phenomenon, described in the way proposed, is taken to be a possible counterexample to a sort of “philosophical dogma” such as IEM of introspection-based non-inferential mental self-ascriptions. In the first section of the paper I will point out the characteristic features of EM and explain logical immunity to error through misidentification of introspection-based mental self-ascriptions; in the second section I will consider the case of thought insertion in more detail and show why, after all, it is not a counterexample to the view that introspection-based mental self-ascriptions are logically IEM. Finally, I will offer a re-description of the phenomenon of thought insertion.
In The Blue Book, Wittgenstein defined a category of uses of “I” which he termed “I”-as-subject, contrasting them with “I”-as-object uses. The hallmark of this category is immunity to error through misidentification (IEM). This article extends Wittgenstein’s characterisation to the case of memory-judgments, discusses the significance of IEM for self-consciousness—developing the idea that having a first-person thought involves thinking about oneself in a distinctive way in which one cannot think of anyone or anything else—and refutes a common objection to the claim that memory-judgments exhibit IEM.
Theories of content purport to explain, among other things, in virtue of what beliefs have the truth conditions they do have. The desire for such a theory has many sources, but prominent among them are two puzzling (and related) facts that are notoriously difficult to explain: beliefs can be false, and there are normative constraints on the formation of beliefs. If we knew in virtue of what beliefs had truth conditions, we would be better positioned to explain how it is possible for an agent to believe that which is not the case. Moreover, we do not say merely of such an agent that he believes that p when p is not the case. We say the agent made a mistake, and often criticize him accordingly; we think agents ought not have false beliefs, and that such beliefs should be changed; etc. An adequate theory of content would, presumably, reveal the source of these normative facts about the mental lives of agents. Indeed, it is typically taken to be an adequacy constraint on a theory of content that it help explain the possibility of error and the "normativity" of content. Teleological theories of content promise to do just this.
Research in experimental epistemology has revealed a great, yet unsolved mystery: why do ordinary evaluations of knowledge ascribing sentences involving stakes and error appear to diverge so systematically from the predictions professional epistemologists make about them? Two recent solutions to this mystery by Keith DeRose (2011) and N. Ángel Pinillos (2012) argue that these differences arise due to specific problems with the designs of past experimental studies. This paper presents two new experiments to directly test these responses. Results vindicate previous findings by suggesting that (i) the solution to the mystery is not likely to be based on the empirical features these theorists identify, and (ii) that the salience of ascriber error continues to make the difference in folk ratings of third-person knowledge ascribing sentences.
A self-ascription is a thought or sentence in which a predicate is self-consciously ascribed to oneself. Self-ascriptions are best expressed using the first-person pronoun. Mental self-ascriptions are ascriptions to oneself of mental predicates (predicates that designate mental properties), non-mental self-ascriptions are ascriptions to oneself of non-mental predicates (predicates that designate non-mental properties). It is often claimed that there is a range of self-ascriptions that are immune to error through misidentification relative to the first-person pronoun (IEM for short). What this means, and exactly which self-ascriptions are properly classed as IEM, is a topic hotly disputed. Some claim that only mental self-ascriptions are IEM, others claim that some non-mental self-ascriptions are IEM. Before this question can be decided, it needs to be judged exactly what it means to say that a self-ascription is IEM. And here we stumble across the fact that there are, at least, two non-equivalent ways of defining the phenomenon. I will be claiming that one of these definitions should be rejected.
In replying to my article ‘An Error about the Doctrine of Double Effect’, Kaufman claims that the permission given by the four-condition Doctrine for certain mixed actions is merely complementary to an absolute prohibition—which he claims is the DDE's primary function. I point out again that in many cases this makes an appeal to the DDE's fourth condition not merely redundant but incoherent. Furthermore, his claim that I am a utilitarian maximizer, frustrated by a doctrine prohibiting intentional harms, however great the net overall benefit, is based on a misrepresentation. I did not object to a candidate for justification under the DDE being rejected before reaching the fourth condition, only to its being accepted.
Roy Sorensen's criticism of my use of margin for error principles to explain ignorance in borderline cases fails because it misidentifies the relevant margin for error principles. His alternative explanation in terms of truth-maker gaps is briefly criticized.
Lewisian reference magnetism about linguistic content determination [Lewis 1983] has been defended in recent work by Weatherson and Sider, among others. Two advantages claimed for the view are its capacity to make sense of systematic error in speakers' use of their words, and its capacity to distinguish between verbal and substantive disagreements. Our understanding of both error and disagreement is linked to the role of usage and first-order intuitions in semantics and in linguistic theory more generally. I argue, partially on the basis of these more general considerations, that reference magnetism delivers implausible results. Specifically, I argue that the proponent of reference magnetism maintains her analysis of genuinely systematic error at the cost of an empirically unjustifiable error theory regarding ordinary usage. In response, I describe an alternative view of content determination, MUMPS (Meaning is Use Minus Pragmatics), which is not committed to such error theories. Despite this advantage, MUMPS has high prima facie costs. On such a view, there is a great deal of variation in linguistic meaning across speakers and times. As a result, a large number of seemingly mistaken claims are analysed as expressing true propositions. Correspondingly, a large number of seemingly substantive disagreements are analysed as terminological. However, I argue that these consequences are not as costly as they seem. Despite appearances, MUMPS is consistent with objective, metaphysically realist adjudication of disagreements, even in cases where meanings are not shared and where both parties to a dispute speak truly. MUMPS thus allows for a more nuanced understanding of linguistic usage, change, and variation, without imposing a commitment to any form of metaphysical anti-realism.
Color relationalism is the view that colors are constituted in terms of relations to perceiving subjects. Among its explanatory virtues, relationalism provides a satisfying treatment of cases of perceptual variation. But it can seem that relationalists lack resources for saying that a representation of x’s color is erroneous. Surely, though, a theory of color that makes errors of color perception impossible cannot be correct. In this paper I’ll argue that, initial appearances notwithstanding, relationalism contains the resources to account for errors of color perception. I’ll conclude that worries about making room for error are worries the relationalist can meet.
In their paper “Vagueness, Ignorance, and Margins for Error” Kenton Machina and Harry Deutsch criticize the epistemic theory of vagueness. This paper answers their objections. The main issues discussed are: the relation between meaning and use; the principle of bivalence; the ontology of vaguely specified classes; the proper form of margin for error principles; iterations of epistemic operators and semantic compositionality; the relation or lack of it between quantum mechanics and theories of vagueness.
Though he maintained a significant interest in theoretical aspects of measurement, Henry E. Kyburg, Jr. was critical of the representational theory that in many ways has come to dominate discussions concerning the foundations of measurement. In particular, Kyburg (in Savage and Ehrlich (eds), Philosophical and Foundational Issues in Measurement Theory, 1992) asserts that the representational theory of measurement, as introduced in (Scott and Suppes, Journal of Symbolic Logic, 23:113–128, 1958) and developed in (Krantz et al., Foundations of Measurement: Additive and Polynomial Representations, Academic Press, 1971), cannot account for the measurement of error. The present work examines and responds to this charge.
In chapter 5 of Knowledge and its Limits, T. Williamson formulates an argument against the principle (KK) of epistemic transparency, or luminosity of knowledge, namely “that if one knows something, then one knows that one knows it”. Williamson’s argument proceeds by reductio: from the description of a situation of approximate knowledge, he shows that a contradiction can be derived on the basis of principle (KK) and additional epistemic principles that he claims are better grounded. One of them is a reflective form of the margin for error principle defended by Williamson in his account of knowledge. We argue that Williamson’s reductio rests on the inappropriate identification of distinct forms of knowledge. More specifically, an important distinction between perceptual knowledge and non-perceptual knowledge is wanting in his statement and analysis of the puzzle. We present an alternative account of this puzzle, based on a modular conception of knowledge: the (KK) principle and the margin for error principle can coexist, provided their domain of application is referred to the right sort of knowledge.
One way in which philosophy of science can perform a valuable normative function for science is by showing characteristic errors made in scientific research programs and proposing ways in which such errors can be avoided or corrected. This paper examines two errors that have commonly plagued research in biology and psychology: 1) functional localization errors that arise when parts of a complex system are assigned functions which these parts are not themselves able to perform, and 2) vacuous functional explanations in which one provides an analysis that does account for the inputs and outputs of a system but does not employ the same set of functions to produce this output as does the natural system. These two kinds of error usually arise when researchers limit their investigation to one type of evidence. Historically, correction of these errors has awaited researchers who have employed the opposite type of evidence. This paper explores the tendency to commit these errors by examining examples from historical and contemporary science and proposes a dialectical process through which researchers can avoid or correct such errors in the future.
Åsa Maria Wikforss has proposed a response to Burge's thought-experiments in favour of social externalism, one which allows the individualist to maintain that narrow content is truth-conditional without being idiosyncratic. The narrow aim of this paper is to show that Wikforss' argument against social externalism fails, and hence that the individualist position she endorses is inadequate. The more general aim is to attain clarity on the social externalist thesis. Social externalism need not rest, as is typically thought, on the possibility of incomplete linguistic understanding or conceptual error. I identify the unifying principle that underlies the various externalist thought-experiments.
Philosophers of mind often assume that methodological solipsism, as outlined in the Second Meditation, is Descartes' last bid on the nature of mental life. This paper argues, instead, that it is a transitional position he overcomes in the dynamic progression of his philosophical therapy. The Third Meditation questions the methodological solipsism that in fact owes much to (received) Cartesian dualism for its dissemination. Descartes' treatment of error has important analogies with Wittgenstein's private language argument. As Lévinas emphasises in his dialogical philosophy, Descartes' proof of God via the concept of error involves recognition of the irreducibility of the Other.
After showing how Deborah Mayo’s error-statistical philosophy of science might be applied to address important questions about the evidential status of computer simulation results, I argue that an error-statistical perspective offers an interesting new way of thinking about computer simulation models and has the potential to significantly improve the practice of simulation model evaluation. Though intended primarily as a contribution to the epistemology of simulation, the analysis also serves to fill in details of Mayo’s epistemology of experiment.
The authors attempt to show that certain forms of behavior of the human immune system are illuminatingly regarded as errors in that system's operation. Since error-ascription can occur only within the context of an intentional/teleological characterization of the system, it follows that such a characterization is illuminating. It is argued that error-ascription is objective, non-anthropomorphic, irreducible to any purely causal form of explanation of the same behavior, and further that it is wrong to regard all errors of the immune system as due to malfunction or maladaptation.
We argue for a naturalistic account of appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987; Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest another approach for meta-methodology based on a conglomeration of tools and strategies (from statistical modeling, experimental design, and related fields) that affords forward-looking procedures for learning from error and for controlling error. The resulting “error statistical” appraisal is empirical—methods are appraised by examining their capacities to control error. At the same time, this account is normative, in that the strategies that pass muster are claims about how actually to proceed in given contexts to reach reliable inferences from limited data.
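To make "appraising a method by its capacity to control error" concrete, here is a schematic simulation of my own, under toy assumptions (a two-sided z-style test on Gaussian data); the method's error probabilities are estimated by running it on many simulated samples:

```python
import numpy as np

rng = np.random.default_rng(1)

def reject_null(sample, mu0=0.0, z_crit=1.96):
    """Two-sided z-style test of H0: mu = mu0 (the toy method under appraisal)."""
    z = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(sample.size))
    return abs(z) > z_crit

def rejection_rate(true_mu, n=30, trials=20_000):
    """How often the method rejects when data actually come from N(true_mu, 1)."""
    return sum(reject_null(rng.normal(true_mu, 1.0, n))
               for _ in range(trials)) / trials

print(rejection_rate(0.0))  # ~0.05: the method's type I error probability
print(rejection_rate(0.5))  # power; 1 minus this is the type II error rate
```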
Quantum computers are hypothetical quantum information processing (QIP) devices that allow one to store, manipulate, and extract information while harnessing quantum physics to solve various computational problems and do so putatively more efficiently than any known classical counterpart. Despite many ‘proofs of concept’ (Aharonov and Ben-Or 1996; Knill and Laflamme 1996; Knill et al. 1996; Knill et al. 1998), the key obstacle in realizing these powerful machines remains their scalability and susceptibility to noise: almost three decades after their conception, experimentalists still struggle to maintain useful quantum coherence in QIP devices with more than a pair of qubits (e.g., Blatt and Wineland 2008). This slow progress has prompted debates on the feasibility of quantum computers, yet the quantum information community has dismissed the skepticism as “ideology” (Aaronson 2004), claiming that the obstacles are merely technological (Kaye et al. 2007, 240). In a recent paper (Hagar 2009) I’ve argued that such skepticism with respect to the feasibility of quantum computers need not be deemed ideological at all, and that the aforementioned ‘proofs of concept’ are physically suspect. Using analogies from the foundations of classical statistical mechanics (SM), I’ve also argued that instead of active error correction, the appropriate framework for debating the feasibility of large-scale, fault-tolerant and computationally superior quantum computers should be the project of error avoidance: rather than trying to constantly ‘cool down’ the QIP device and prevent its thermalization, one should try to locate those regions in the device’s state space which are thermodynamically ‘abnormal’, i.e., those regions in the device’s state space which resist thermalization regardless of external noise. This paper is intended as a further contribution to the debate on the feasibility of large-scale, fault-tolerant and computationally superior quantum computers. Relying again on analogies from the foundations of classical SM, it suggests a skeptical conjecture and frames it in the ‘passive’, error avoidance, context.
I argue that the Bayesian Way of reconstructing Duhem's problem fails to advance a solution to the problem of which of a group of hypotheses ought to be rejected or "blamed" when experiment disagrees with prediction. But scientists do regularly tackle and often enough solve Duhemian problems. When they do, they employ a logic and methodology which may be called error statistics. I discuss the key properties of this approach which enable it to split off the task of testing auxiliary hypotheses from that of appraising a primary hypothesis. By discriminating patterns of error, this approach can at least block, if not also severely test, attempted explanations of an anomaly. I illustrate how this approach directs progress with Duhemian problems and explains how scientists actually grapple with them.
We discuss recent work in experimental philosophy on free will and moral responsibility and then present a new study. Our results suggest an error theory for incompatibilist intuitions. Most laypersons who take determinism to preclude free will and moral responsibility apparently do so because they mistakenly interpret determinism to involve fatalism or “bypassing” of agents’ relevant mental states. People who do not misunderstand determinism in this way tend to see it as compatible with free will and responsibility. We discuss why these results pose a challenge to incompatibilists.
The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the 'fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the primary criterion for 'best', the mathematical approximation perspective undermines the reliability of inference objective by giving rise to selection rules which pay insufficient attention to 'accounting for the regularities in the data'. A more appropriate framework is offered by the error-statistical approach, where (i) statistical adequacy provides the criterion for assessing when a curve captures the regularities in the data adequately, and (ii) the relevant error probabilities can be used to assess the reliability of inductive inference. Broadly speaking, the fittest curve (statistically adequate) is not determined by the smallness of its residuals, tempered by simplicity or other pragmatic criteria, but by the nonsystematic (e.g. white noise) nature of its residuals. The advocated error-statistical arguments are illustrated by comparing the Kepler and Ptolemaic models on empirical grounds.
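A toy illustration of the contrast the abstract draws (my own sketch, not from the paper): the linear fit below can have respectable-looking residuals by size, but they are systematic; the statistically adequate fit is the one whose residuals behave like white noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a quadratic trend plus white noise.
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.05, x.size)

def lag1_autocorr(r):
    r = r - r.mean()
    return float((r[:-1] * r[1:]).sum() / (r**2).sum())

for degree in (1, 2):
    residuals = y - np.polyval(np.polyfit(x, y, degree), x)
    print(degree,
          round(float((residuals**2).mean()), 5),   # mean squared residual
          round(lag1_autocorr(residuals), 3))       # adequacy diagnostic
# degree 1: residuals trace the leftover curvature (autocorrelation near 1),
# so the fit is statistically inadequate however small its residuals look;
# degree 2: autocorrelation near 0, i.e. the residuals look like white noise.
```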
Timothy Williamson’s potentially most important contribution to epistemicism about vagueness lies in his arguments for the basic epistemicist claim that the alleged cut-off points of vague predicates are not knowable. His arguments for this are based on so-called ‘margin for error principles’. This paper argues that these principles fail to provide a good argument for the basic claim. Williamson has offered at least two kinds of margin for error principles applicable to vague predicates. A certain fallacy of equivocation (on the meaning of ‘knowable’) seems to underlie his justification for both kinds of principles. Besides, the margin for error principles of the first kind can be used in the derivation of unacceptable consequences, while the margin for error principles of the second kind can be shown to be compatible with the falsity of epistemicism, under a number of assumptions acceptable to the epistemicist.
Michael Ruse's Darwinian metaethics has come under just criticism from Peter Woolcock (1993). But with modification it remains defensible. Ruse (1986) holds that people ordinarily have a false belief that there are objective moral obligations. He argues that the evolutionary story should be taken as an error theory, i.e., as a theory which explains the belief that there are obligations as arising from non-rational causes, rather than from inference or evidential reasons. Woolcock quite rightly objects that this position entails moral nihilism. However, I argue here that people generally have justified true beliefs about which acts promote their most coherent set of moral values, and hence, by definition, about which acts are right. What the evolutionary story explains is the existence of these values, but it is not an error theory for moral beliefs. Ordinary beliefs correspond to real moral properties, though these are not objective or absolute properties independent of anyone's subjective states. On its best footing, therefore, a Darwinian metaethics of the type Ruse offers is not an error theory and does not entail moral nihilism.
Epistemicists say there is a last positive instance in a sorites sequence; we just cannot know which is the last. Timothy Williamson explains that knowledge requires a margin for error and this ensures that the last heap will not be knowable as a heap. However, there is a class of disjunctive predicates for which knowledge at the thresholds is possible. They generate sorites paradoxes that cannot be diagnosed with the margin for error principle.
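Williamson's margin for error principle, in the schematic form it usually takes in this literature (a standard statement, not quoted from the paper): where H(n) says that n grains make a heap,

```latex
\[
  K\,H(n) \;\rightarrow\; H(n-1)
\]
```

So a last heap H(m) with H(m-1) false could never be known to be a heap, which is why the cut-off is unknowable; the disjunctive predicates the abstract mentions are claimed to permit knowledge at their thresholds, so this principle cannot do the diagnostic work for them.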
Byrne & Hilbert (B&H) combine physicalism about color with intentionalism about color experience. I argue that this combination leads to an “error theory” about color experience, that is, the doctrine that color experience is systematically illusory. But this conflicts with another aspect of B&H's position, namely, the denial of error theory.
Kant’s concept of conscience has been largely neglected by scholars and contemporary moral philosophers alike, as has his concept of “indirect” duty. Admittedly, neither of them is foundational within his ethical theory, but a correct account of both in their own right and in combination can shed some new light on Kant’s moral philosophy as a whole. In this paper, I first examine a key passage in which Kant systematically discusses the role of conscience, then give a systematic account of “indirect” duties and the function of hypothetical imperatives in the course of their generation. I then turn to the possibility of moral error and the part “indirect” duty can play in its prevention. In conclusion, I try to show how clarifying the concept of “indirect” duty can help us to shed light on the nature of Kantian ethics as a whole.
In seeking general accounts of evidence, confirmation, or inference, philosophers have looked to logical relationships between evidence and hypotheses. Such logics of evidential relationship, whether hypothetico-deductive, Bayesian, or instantiationist, fail to capture or be relevant to scientific practice. They require information that scientists do not generally have (e.g., an exhaustive set of hypotheses), while lacking slots within which to include considerations to which scientists regularly appeal (e.g., error probabilities). Building on my co-symposiasts' contributions, I suggest some directions in which a new and more adequate philosophy of evidence can move.
Epistemological realists have long struggled to explain perceptual error without introducing a tertium quid between perceivers and physical objects. Two leading realist philosophers, Thomas Reid and Everett Hall, agreed in denying that mental entities are the immediate objects of perceptions of the external world, but each relied upon strange metaphysical entities of his own in the construction of a realist philosophy of perception. Reid added ‘visible figures’ to sensory impressions and specific sorts of mental events, while Hall utilized an array of ways that he maintained properties may participate in the world. This paper assesses each realist's attempt to explain perceptual relativity and illusion without contradicting either the science of his time or the structure of common sense.
Suppose that the human tendency to think of certain actions and omissions as morally required – a notion that surely lies at the heart of moral discourse – is a trait that has been naturally selected for. Many have thought that from this premise we can justify or vindicate moral concepts. I argue that this is mistaken, and defend Michael Ruse's view that the more plausible implication is an error theory – the idea that morality is an illusion foisted upon us by evolution. The naturalistic fallacy is a red herring in this debate, since there is really nothing that counts as a fallacy at all. If morality is an illusion, it appears to follow that we should, upon discovering this, abolish moral discourse on pain of irrationality. I argue that this conclusion is too hasty, and that we may be able usefully to employ a moral discourse, warts and all, without believing in it.
Drawing primarily on the Mòzǐ and Xúnzǐ, the article proposes an account of how knowledge and error are understood in classical Chinese epistemology and applies it to explain the absence of a skeptical argument from illusion in early Chinese thought. Arguments from illusion are associated with a representational conception of mind and knowledge, which allows the possibility of a comprehensive or persistent gap between appearance and reality. By contrast, early Chinese thinkers understand mind and knowledge primarily in terms of competence or ability, not representation. Cognitive error amounts to a form of incompetence. Error is not explained as a failure to accurately represent the mind-independent reality due to misleading or illusory appearances. Instead, it can be explained metaphorically by appeal to part-whole relations: cognitive error typically occurs when agents incompetently respond to only part of their situation, rather than the whole.
Kumārila’s commitment to the explanation of cognitive experiences not confined to valid cognition alone allows a detailed discussion of border-line cases (such as doubt and error) and the admittance of absent entities as separate instances of cognitive objects. Are such absent entities only the negative side of positive entities? Are they, hence, fully relative (since a cow could be said to be the absent side of a horse and vice versa)? Through the analysis of a debated passage of the Ślokavārttika, the present article proposes a reconstruction of Kumārila’s view of the relation between erroneous cognitions and cognitions of absence (abhāva), and considers the philosophical problem of the ontological status of absence.
You are, I suspect, exceedingly good at knowing what you intend to do. In saying this I pay you no special compliment. Knowing what one intends is the normal state to be in. And this cries out for some explanation. How is it that we are so authoritative about our own intentions? There are two different approaches that one can take in answering this question. The first credits us with special perceptual powers which we use when we examine our own minds. On this view we detect our own mental states in much the same way that we detect the state of the world around us; but the powers we direct inward are much less prone to error than those we direct outwards. The alternative approach denies that there is such a thing as inward perception. On this view the whole idea that we detect our own mental states using some kind of internal perceptual apparatus is misguided; a wholly different account is needed.
Error is protean, ubiquitous and crucial in scientific process. In this paper it is argued that understanding scientific process requires what is currently absent: an adaptable, context-sensitive functional role for error in science that naturally harnesses error identification and avoidance to positive, success-driven science. This paper develops a new account of scientific process of this sort, error- and success-driven Self-Directed Anticipative Learning (SDAL) cycling, using a recent re-analysis of ape-language research as a test example. The example shows the limitations of other accounts of error, in particular Mayo’s (Error and the growth of experimental knowledge, 1996) error-statistical approach, and SDAL cycling shows how they can be fruitfully contextualised.
One way to make an error using a complex demonstrative is to say ⌜that ψ is F⌝ when one should have said ⌜that φ is F⌝. By considering this kind of error I shall show that for judgments expressible using complex demonstratives the phenomenon of immunity to error through misidentification (Shoemaker 1968, 1970) (henceforth IEM) comes in two quite different varieties. Except in special cases, judgments expressible using complex demonstratives always possess one kind of immunity but not the other. This is significant because, as I shall show, it is possible to make an analogous error using an unstructured indexical term by saying, for example, ⌜it is F here⌝ when one should have said ⌜it is F there⌝. Again there are two kinds of IEM for such judgments and, except in special cases, indexical judgments always possess one kind of IEM but not the other. This, I argue, shows that thoughts expressible using unstructured indexicals like ‘here’, ‘now’ and (probably) ‘I’ have the same structure as thoughts expressible using complex demonstratives – or, to put it a little more provocatively, indexicals are complex demonstratives (note, however, that my claim relates chiefly to the structure of the thoughts expressed, so if one were to retain a purely linguistic distinction between indexicals and complex demonstratives based on the difference in their surface form the claim I wish to make would not be affected). I shall start by examining the two different kinds of IEM possessed by judgments expressible using complex demonstratives.
That current ideals of cognition impoverish experience is a classical observation, and complaint, of the early Frankfurt School. Adorno reacts to this phenomenon in several ways, among them his conception of metaphysical experiences. Metaphysical experiences are conventionally understood as promissory notes, as metaphors for rich experiences. This article takes a different view of metaphysical experiences. It discusses them in light of Adorno's notion that objects have priority in experience and of his further remark that metaphysical experiences are constituted by error. It argues that rather than promissory notes or metaphors for rich experiences, metaphysical experiences are attempts at taking things literally.
In this paper we shall address some issues concerning the relation between the content and the nature of perceptual experiences. More precisely, we shall ask whether the claim that perceptual experiences are by nature relational implies that they cannot be intentional. As we shall see, much depends in this respect on the way one understands the possibility for one to be wrong about the phenomenal nature of one’s own experience. We shall describe and distinguish a series of errors that can occur in our introspective access to our perceptual experiences. We shall argue that once the nature of these different kinds of error is properly understood, the metaphysical claim that perceptual experiences are relational can be seen to be compatible with the view that they are intentional. Before presenting the argument, we shall try to articulate some elements of an intentionalist approach concerning the role of experience in our relation to ourselves and to our environment. The picture should offer a motivation for the arguments that follow.
The possibility of error is related to the existence of a norm. Connections are spelled out to the notion of infallibility, to that of a modifying predicate, to traditional truth theories in connection with “truth of things”, and to the primacy of the negative cases, for instance “false friend”.
In 1910–11 Axel Hägerström introduced an emotive theory of ethics asserting moral propositions and valuations in general to be neither true nor false. However, it is less well known that he modified his theory in the following year, now making a distinction between what he called primary and secondary valuations. From 1912 onwards, he restricted his emotive theory to primary valuations only, and applied an error theory to secondary ones. According to Hägerström, secondary valuations state that objects have special value properties with which we believe we become acquainted in primary valuations. But, in fact, we do not have any such acquaintance. There are not, and cannot be, any such properties in objects. What we take to be a property is a projection of a feeling. Therefore, all secondary valuations are false. In 1917 he developed his theory further and distinguished between different types of secondary valuations with different structures. Yet he argued that they all are false. Hägerström's discussion is interesting because, among other reasons, it is historically a very early version of error theory in ethics. In a way it can also be said to be a precursor to later versions, e.g., John Mackie's (1946 and 1977). There are obvious resemblances between their accounts. Mackie's discussion is, of course, independent of Hägerström's.
Robert Fogelin claimed there was an error in the logic of the Tractatus. I first cover his point here before going on to show that any error in this area derived from an even more fundamental one. Correcting that further error, moreover, does more than correct the logic of the Tractatus: it has repercussions for the metaphysics and theory of value found there, in line with later developments in Wittgenstein’s philosophy. In what follows I use the Tractarian numbers to indicate the paragraphs spoken about.
A main message from the causal modelling literature in the last several decades is that, under some plausible assumptions, there can be statistically consistent procedures for inferring (features of) the causal structure of a set of random variables from observational data. But whether we can control the error probabilities with a finite sample size depends on the kind of consistency the procedures can achieve. It has been shown that in general, under the standard causal Markov and Faithfulness assumptions, the procedures can only be pointwise but not uniformly consistent without substantial background knowledge. This implies the impossibility of choosing a finite sample size to control the worst-case error probabilities. In this paper, I consider the simpler task of inferring causal directions when the skeleton of the causal structure is known, and establish a similarly negative result concerning the possibility of controlling error probabilities. Although the result is negative in form, it has an interesting positive implication for causal discovery methods.
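A hedged gloss on the pointwise/uniform contrast at issue here, in my own notation rather than the paper's: let $\hat{S}_n$ be the procedure's output from $n$ samples, $S(P)$ the true structure under distribution $P$, and $\mathcal{P}$ the class of distributions satisfying the background assumptions. Then:

\[
\text{Pointwise: } \forall P \in \mathcal{P}\;\; \forall \varepsilon > 0\;\; \exists N\;\; \forall n \ge N:\;\; P\big(\hat{S}_n \neq S(P)\big) < \varepsilon
\]
\[
\text{Uniform: } \forall \varepsilon > 0\;\; \exists N\;\; \forall n \ge N:\;\; \sup_{P \in \mathcal{P}} P\big(\hat{S}_n \neq S(P)\big) < \varepsilon
\]

In the pointwise case the required sample size $N$ may depend on $P$, so no single finite $n$ bounds the worst-case error probability over $\mathcal{P}$, which is the impossibility the abstract reports.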
I discuss Burge's argument that our entitlement to self-knowledge consists in the constitutive relation between the second-order review of thoughts and the thoughts reviewed, and defend it against Peacocke's criticism. I then argue that though our entitlement to self-knowledge is neutral across different environments, as Burge claims, consideration of Burge's own notion of brute error shows that his effort to reconcile externalism and self-knowledge is not successful.
This paper clarifies how to be an error theorist about morality. It takes as its starting point John Mackie’s error theory of the categoricity of moral obligation, defending Mackie against objections from both naturalist moral realists and minimalists about moral discourse. However, drawing upon minimalist insights, it argues that Mackie’s focus on the ontological status of moral values is misplaced, and that the underlying dispute between error theorist and moralist is better conducted at the level of practical reason.
This essay reviews and defines avoidable medical error, malpractice and complication. The relevant ethical principles pertaining to unanticipated medical outcomes are identified. In light of these principles I critically review the moral culpability of the agents in each circumstance and the resulting obligations to patients, their families, and the health care system in general. While I touch on some legal implications, a full discussion of legal obligations and liability issues is beyond the scope of this paper.
The Bayesian theory is outlined and its status as a logic defended. In this it is contrasted with the development and extension of Neyman-Pearson methodology by Mayo in her recently published book (1996). It is shown by means of a simple counterexample that the rule of inference advocated by Mayo is actually unsound. An explanation of why error-probabilities lead us to believe that they supply a sound rule is offered, followed by a discussion of two apparently powerful objections to the Bayesian theory, one concerning old evidence and the other optional stopping.
An important theme to have emerged from the new experimentalist movement is that much of actual scientific practice deals not with appraising full-blown theories but with the manifold local tasks required to arrive at data, distinguish fact from artifact, and estimate backgrounds. Still, no program for working out a philosophy of experiment based on this recognition has been demarcated. I suggest why the new experimentalism has come up short, and propose a remedy appealing to the practice of standard error statistics. I illustrate a portion of my proposal using Galison's (1987) experimental narrative on neutral currents.
Material Falsity and Error in Descartes' Meditations approaches Descartes' Meditations as an intellectual journey, wherein Descartes' views develop and change as he makes new discoveries about self, God and matter. The first book to focus closely on Descartes' notion of material falsity, it shows how Descartes' account of material falsity, and correspondingly his account of crucial notions such as truth, falsehood and error, evolves according to the epistemic advances in the Meditations. It also offers important new insights into the crucial role of Descartes' Third Meditation discussion of material falsity in advancing many subsequent arguments in the Meditations. This book will be of interest to those working on Descartes and early modern philosophy. It offers an independent reading on issues of perennial interest, such as Descartes' views on error, truth and falsehood. It also makes important contributions to topics that have been the focus of much recent scholarship, such as Descartes' ethics and his theodicy. Those working on the interface between medieval and modern philosophy will find the discussions of Descartes' debt to predecessors like Suárez and Augustine useful.
In his ethical writings Aristotle restricts moral responsibility to those actions an agent performs voluntarily. Only voluntary actions are candidates for praise and blame, reward and punishment. Voluntary actions meet two conditions: they have their causal origin in the agent, and they are performed knowingly. In the Poetics Aristotle tells us that actions are the primary ingredient of tragedy, and that the pivotal action of an exemplary tragedy is an hamartia or error. An error, like Oedipus’ murder of his father, is committed unknowingly, and so does not satisfy Aristotle’s epistemic condition for voluntary action. It would seem, therefore, that the heroes and heroines of tragedy, in Aristotle’s opinion, are simply not responsible for their deeds and the awful consequences of what they have done. Bad things happen to them. This conclusion is problematic. The difficulty appears once we consider the kinds of dramatic plots Aristotle prefers. Aristotle favours plots in which a good person’s reversal of fortune is brought about unintentionally by his own actions over plots in which the reversal occurs because of the agent’s bad character or by accident or external cause. (Poet. 1452a32-33; 1453a7-12) The choice of unknowing action rather than intentional wrongdoing or sheer accident raises the question of agent responsibility. Indeed, it seems intended to do so. In the finest kind of tragedy the moment of recognition depicts a character coming to understand what he has unknowingly done, and coming to understand that his own actions have precipitated his change in fortune. (Poet. 1452a29-33) That Aristotle requires a moment of recognition, in addition to a reversal of fortune…
This paper argues, first, that recent studies of experimentation, most notably by Deborah Mayo, provide the conceptual resources to describe scientific discovery's early stages as error-probing processes. Second, it shows that this description yields greater understanding of those early stages, including the challenges that they pose, the research strategies associated with them, and their influence on the rest of the discovery process. Throughout, the paper examines the phenomenon of "chemical hormesis" (i.e., anomalous low-dose effects from toxic chemicals) as a case study that is important not only for the biological sciences but also for contemporary public policy. The resulting analysis is significant for at least two reasons. First, by elucidating the importance of discovery's earliest stages, it expands previous accounts by philosophers such as William Wimsatt and Lindley Darden. Second, it identifies the discovery process as yet another philosophical topic on which the detailed studies of the "new experimentalists" can shed new light.