Prominent instances of anti-luck epistemology, in particular sensitivity and safety accounts of knowledge, introduce a modal condition on the pertinent belief in terms of closeness or similarity of possible worlds. Very roughly speaking, a belief must continue to be true in close possibilities in order to qualify as knowledge. Such closeness-accounts derive much support from their (alleged) ability to eliminate standard instances of epistemic luck as they appear in prominent Gettier-type examples. The article argues that there are new Gettier-type examples which are grounded in “distant” epistemic luck. It is demonstrated that sensitivity and safety theories cannot handle such examples.
It is widely thought that if knowledge requires sensitivity, knowledge is not closed because sensitivity is not closed. This paper argues that there is no valid argument from sensitivity failure to non-closure of knowledge. Sensitivity does not imply non-closure of knowledge. Closure considerations cannot be used to adjudicate between safety and sensitivity accounts of knowledge.
Fortunately for those of us who work on the topic, Ernie Sosa has devoted much of his (seemingly inexhaustible) intellectual energy to the problem of philosophical skepticism. And to great effect. With the three exceptions of Peter Unger, whose 1975 Ignorance: A Case for Scepticism is a grossly under-appreciated classic of epistemology; Timothy Williamson, whose 2000 Knowledge and its Limits is, I hope, on its way to being a less underappreciated classic; and Thomas Reid, I have benefitted more from Sosa’s wrestlings with skepticism than from anyone else’s work on the topic.
This paper surveys attempts in the recent literature to offer a modal condition on knowledge as a way of resolving the problem of scepticism. In particular, safety-based and sensitivity-based theories of knowledge are considered in detail, along with the anti-sceptical prospects of an explicitly anti-luck epistemology.
Knowledge closure is, roughly, the following claim: For every agent S and propositions P and Q, if S knows P, knows that P implies Q, and believes Q because it is so implied, then S knows Q. Almost every epistemologist believes that closure is true. Indeed, they often believe that it is so obviously true that any theory implying its denial is thereby refuted. Some prominent epistemologists have nevertheless denied it, most famously Fred Dretske and Robert Nozick. There are closure advocates who see other virtues in those accounts, however, and so who introduce revisions of one sort or another in order to preserve closure while maintaining their spirit. One popular approach is to replace the “sensitivity” constraint at the heart of both of those accounts with a “safety” constraint, as advocated by Timothy Williamson, Duncan Pritchard, Ernest Sosa, Stephen Luper, and others. The purpose of this essay is to show that this approach does not succeed: safety does not save closure. And neither does a popular variation on the safety theme, the safe-basis or safe-indicator account.
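The closure claim just described can be put schematically in standard epistemic-logic notation (K for knowledge, B for belief; this rendering is ours, not the paper’s, and the “believes Q because it is so implied” basing clause is abbreviated to the bare belief conjunct):

```latex
% Knowledge closure, schematically: if S knows P, knows that P
% implies Q, and believes Q (on that basis), then S knows Q.
\bigl(\, K_S P \;\wedge\; K_S(P \rightarrow Q) \;\wedge\; B_S Q \,\bigr)
  \;\rightarrow\; K_S Q
```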
In his earlier writings, Fred Dretske proposed an anti-skeptical strategy that is based on a rejection of the view that knowledge is closed under known entailment. This strategy is seemingly congenial with a sensitivity condition for knowledge, which is often associated with Dretske’s epistemology. However, it is not obvious how Dretske’s early account meshes with the information-theoretic view developed in Knowledge and the Flow of Information. One aim of this paper is to elucidate the connections between these accounts. First, I argue that, contrary to an objection raised by Christoph Jäger, the information-theoretic account is compatible with Dretske’s anti-skeptical strategy based on the rejection of closure. This strategy invokes the notion of channel conditions, which are roughly speaking those conditions that are necessary and jointly sufficient for a signal to carry information. I propose an interpretation of the account that is based on the idea that a signal’s carrying information requires that the channel conditions are stable. It is shown that the resulting account incorporates both a sensitivity condition and a safety condition for knowledge. Finally, I demonstrate how this proposal allows for knowledge of modally robust propositions without making its acquisition too easy, as simple safety accounts do. I end with a suggestion concerning the direction that future research should take, based on the fact that in its present form the information-theoretic account does not capture inferential knowledge.
A number of prominent epistemologists claim that the principle of sensitivity “play[s] a starring role in the solution to some important epistemological problems”. I argue that traditional sensitivity accounts fail to explain even the most basic data that are usually considered to constitute their primary motivation. To establish this result I develop Gettier and lottery cases involving necessary truths. Since beliefs in necessary truths are sensitive by default, the resulting cases give rise to a serious explanatory problem for the defenders of sensitivity accounts. It is furthermore argued that attempts to modally strengthen traditional sensitivity accounts to avoid the problem must appeal to a notion of safety—the primary competitor of sensitivity in the literature. The paper concludes that the explanatory virtues of sensitivity accounts are largely illusory. In the framework of modal epistemology, it is safety rather than sensitivity that does the heavy explanatory lifting with respect to Gettier cases, lottery examples, and other pertinent cases.
In a recent contribution to this journal, Fernando Broncano-Berrocal defends the safety conception of knowledge against my counterexamples in Freitag 2014 by adding a new clause to the safety condition. In this brief reply, I argue that Broncano-Berrocal's modification cannot be plausibly understood as a natural development of the original safety idea and that, moreover, the resulting account of knowledge can be refuted by a slight alteration of my original examples.
Subjunctivitis is the doctrine that what is distinctive about knowledge is essentially modal in character, and thus is captured by certain subjunctive conditionals. One principal formulation of subjunctivism invokes a “sensitivity condition” (Nozick, DeRose); the other invokes a “safety condition” (Sosa). It is shown in detail how defects in the sensitivity condition generate unwanted results, and that the virtues of that condition are merely apparent. The safety condition is untenable also, because it is too easily satisfied. A powerful motivation for adopting subjunctivism would be that it provides a solution to the problem of misleading evidence, but in fact, it does not.
This essay motivates a revised version of the epistemic condition of safety and then employs the revision to (i) challenge traditional conceptions of apriority, (ii) refute ‘strong privileged access’, and (iii) resolve a well-known puzzle about externalism and self-knowledge.
Richard Feldman and William Lycan have defended a view according to which a necessary condition for a doxastic agent to have knowledge is that the agent’s belief is not essentially based on any false assumptions. I call this the no-essential-false-assumption account, or NEFA. Peter Klein considers examples of what he calls “useful false beliefs” and alters his own account of knowledge in a way which can be seen as a refinement of NEFA. This paper shows that NEFA, even given Klein’s refinement, is subject to counterexample: a doxastic agent may possess knowledge despite having an essential false assumption. Advocates of NEFA could simply reject the intuition that the example is a case of knowledge. However, if the example is interpreted as not being a case of knowledge, then it can be used as a potential counterexample against both safety and sensitivity views of knowledge. I also provide a further case which, I claim, is problematic for all of the accounts just mentioned. I then propose, briefly, an alternative account of knowledge which handles all these cases appropriately.
Duncan Pritchard has recently highlighted the problem of veritic epistemic luck and claimed that a safety‐based account of knowledge succeeds in eliminating veritic luck where virtue‐based accounts and process reliabilism fail. He then claims that if one accepts a safety‐based account, there is no longer a motivation for retaining a commitment to reliabilism. In this article, I delineate several distinct safety principles, and I argue that those that eliminate veritic luck do so only if at least implicitly committed to reliabilism.
A study was conducted of nanotechnology (NT) researchers’ views about ethics in relation to their work. By means of a purpose-built questionnaire, made available on the Internet, the study probed NT researchers’ general attitudes toward and beliefs about ethics in relation to NT, as well as their views about specific NT-related ethical issues. The questionnaire attracted 1,037 respondents from 13 U.S. university-based NT research facilities. Responses to key questionnaire items are summarized and noteworthy findings presented. For most respondents, the ethical responsibilities of NT researchers are not limited to those related to safety and integrity in the laboratory. Most believe that NT researchers also have specific ethical responsibilities to the society in which their research is done and likely to be applied. NT appears to be one of the first areas of contemporary technoscientific activity in which a long-standing belief is being seriously challenged: the belief that society is solely responsible for what happens when a researcher’s work, viewed as neutral and merely enabling, is applied in a particular social context. Survey data reveal that most respondents strongly disagree with that paradigmatic belief. Finally, an index gauging NT researcher sensitivity to ethics and ethical issues related to NT was constructed. A substantial majority of respondents exhibited medium or high levels of sensitivity to ethics in relation to NT. Although most respondents view themselves as not particularly well informed about ethics in relation to NT, a substantial majority are aware of and receptive to ethical issues related to their work, and believe that these issues merit consideration by society and study by current and future NT practitioners.
In a recent article in this journal, Wolfgang Freitag argues that Gettier-style cases that are based on the notion of “distant” epistemic luck cannot be ruled out as cases of knowledge by modal conditions such as safety or sensitivity. I argue that safety and sensitivity can be easily fixed and that Freitag provides no convincing reason for the existence of “distant” epistemic luck.
An account of the nature of knowledge must explain the value of knowledge. I argue that modal conditions, such as safety and sensitivity, do not confer value on a belief and so any account of knowledge that posits a modal condition as a fundamental constituent cannot vindicate widely held claims about the value of knowledge. I explain the implications of this for epistemology: We must either eschew modal conditions as a fundamental constituent of knowledge, or retain the modal conditions but concede that knowledge is not more valuable than that which falls short of knowledge. This second horn—concluding that knowledge has no distinctive value—is unappealing since it renders puzzling why so much epistemological theorising focuses on knowledge, and why knowledge seems so important.
Vogel argues that sensitivity accounts of knowledge are implausible because they entail that we cannot have any higher-level knowledge that our beliefs are true, not false. Becker and Salerno object that Vogel is mistaken because he does not formalize higher-level beliefs adequately. They claim that if formalized correctly, higher-level beliefs are sensitive, and can therefore constitute knowledge. However, these accounts do not consider the belief-forming method as sensitivity accounts require. If we take bootstrapping as the belief-forming method, as the discussed cases suggest, then we face a generality problem. Our higher-level beliefs as formalized by Becker and Salerno turn out to be sensitive according to a wide reading of bootstrapping, but insensitive according to a narrow reading. This particular generality problem does not arise for the alternative accounts of process reliabilism and basis-relative safety. Hence, sensitivity accounts not only deliver opposite results given different formalizations of higher-level beliefs, but also for the same formalization, depending on how we interpret bootstrapping. Therefore, sensitivity accounts do not fail because they make higher-level knowledge impossible, as Vogel argues, and they do not succeed in allowing higher-level knowledge, as Becker and Salerno suggest. Rather, their problem is that they deliver far too heterogeneous results.
Concussions in professional sports have received increased attention, which is partly attributable to evidence that found concussion incidence rates were much higher than previously thought. Further to this, professional hockey players articulated how their concussion symptoms affected their professional careers, interpersonal relationships, and qualities of life. Researchers are beginning to associate multiple/repeated concussions with Chronic Traumatic Encephalopathy, a structural brain injury that is characterized by tau protein deposits in distinct areas of the brain. Taken together, concussions impact many people in the sporting community from current and former professional athletes and their families to medical and health professionals and researchers. In light of the growing awareness and sensitivity towards concussions, the purpose of this paper is to provide recommendations that are designed to improve player safety in professional hockey and address the ethical issues surrounding these suggestions.
Many contemporary epistemologists hold that a subject S’s true belief that p counts as knowledge only if S’s belief that p is also, in some important sense, safe. I describe accounts of this safety condition from John Hawthorne, Duncan Pritchard, and Ernest Sosa. There have been three counterexamples to safety proposed in the recent literature, from Comesaña, Neta and Rohrbaugh, and Kelp. I explain why all three proposals fail: each moves fallaciously from the fact that S was at epistemic risk just before forming her belief to the conclusion that S’s belief was formed unsafely. In light of lessons from their failure, I provide a new and successful counterexample to the safety condition on knowledge. It follows, then, that knowledge need not be safe. Safety at a time depends counterfactually on what would likely happen at that time or soon after in a way that knowledge does not. I close by considering one objection concerning higher-order safety.
Contextualism is supposed to explain why the following argument for skepticism seems plausible: (1) I don’t know that I am not a bodiless brain-in-a-vat (BIV); (2) If I know I have hands, then I know I am not a bodiless BIV; (3) Therefore, I do not know I have hands. Keith DeRose claims that (1) and (2) are “initially plausible.” I claim that (1) is initially plausible only because of an implicit argument that stands behind it; it is not intuitively plausible. The argument DeRose offers is based on the requirement of sensitivity, that is, on the idea that if you know something then you would not believe it if it were false. I criticize the sensitivity requirement thereby undercutting its support for (1) and the skeptical data that contextualism is meant to explain. While skepticism is not a plausible ground for contextualism, I argue that certain pragmatic considerations are. It’s plausible to think that to know something more evidence is required when more is at stake. The best way to handle skepticism is to criticize the arguments for it. We should not adopt contextualism as a means of accommodating skepticism even if there are other pragmatic reasons for being a contextualist about knowledge.
Epistemic closure has been a central issue in epistemology over the last forty years. According to versions of the relevant alternatives and subjunctivist theories of knowledge, epistemic closure can fail: an agent who knows some propositions can fail to know a logical consequence of those propositions, even if the agent explicitly believes the consequence (having “competently deduced” it from the known propositions). In this sense, the claim that epistemic closure can fail must be distinguished from the fact that agents do not always believe, let alone know, the consequences of what they know—a fact that raises the “problem of logical omniscience” that has been central in epistemic logic. This paper, part I of II, is a study of epistemic closure from the perspective of epistemic logic. First, I introduce models for epistemic logic, based on Lewis’s models for counterfactuals, that correspond closely to the pictures of the relevant alternatives and subjunctivist theories of knowledge in epistemology. Second, I give an exact characterization of the closure properties of knowledge according to these theories, as formalized. Finally, I consider the relation between closure and higher-order knowledge. The philosophical repercussions of these results and results from part II, which prompt a reassessment of the issue of closure in epistemology, are discussed further in companion papers. As a contribution to modal logic, this paper demonstrates an alternative approach to proving modal completeness theorems, without the standard canonical model construction. By “modal decomposition” I obtain completeness and other results for two non-normal modal logics with respect to new semantics. One of these logics, dubbed the logic of ranked relevant alternatives, appears not to have been previously identified in the modal logic literature. More broadly, the paper presents epistemology as a rich area for logical study.
In "Mathematical Truth", Paul Benacerraf articulated an epistemological problem for mathematical realism. His formulation of the problem relied on a causal theory of knowledge which is now widely rejected. But it is generally agreed that Benacerraf was onto a genuine problem for mathematical realism nevertheless. Hartry Field describes it as the problem of explaining the reliability of our mathematical beliefs, realistically construed. In this paper, I argue that the Benacerraf Problem cannot be made out. There simply is no intelligible problem that satisfies all of the constraints which have been placed on the Benacerraf Problem. The point generalizes to all arguments with the structure of the Benacerraf Problem aimed at realism about a domain meeting certain conditions. Such arguments include so-called "Evolutionary Debunking Arguments" aimed at moral realism. I conclude with some suggestions about the relationship between the Benacerraf Problem and the Gettier Problem.
In an influential book, Gilbert Harman writes, "In explaining the observations that support a physical theory, scientists typically appeal to mathematical principles. On the other hand, one never seems to need to appeal in this way to moral principles [1977, 9 – 10]." What is the epistemological relevance of this contrast, if genuine? In this article, I argue that ethicists and philosophers of mathematics have misunderstood it. They have confused what I will call the justificatory challenge for realism about an area, D – the challenge to justify our D-beliefs – with the reliability challenge for D-realism – the challenge to explain the reliability of our D-beliefs. Harman’s contrast is relevant to the first, but not, evidently, to the second. One upshot of the discussion is that genealogical debunking arguments are fallacious. Another is that indispensability considerations cannot answer the Benacerraf-Field epistemological challenge for mathematical realism.
This paper looks at an argument strategy for assessing the epistemic closure principle. This is the principle that says knowledge is closed under known entailment; or (roughly) if S knows p and S knows that p entails q, then S knows that q. The strategy in question looks to the individual conditions on knowledge to see if they are closed. According to one conjecture, if all the individual conditions are closed, then so too is knowledge. I give a deductive argument for this conjecture. According to a second conjecture, if one (or more) condition is not closed, then neither is knowledge. I give an inductive argument for this conjecture. In sum, I defend the strategy by defending the claim that knowledge is closed if, and only if, all the conditions on knowledge are closed. After making my case, I look at what this means for the debate over whether knowledge is closed.
The central question of this article is how to combine counterfactual theories of knowledge with the notion of actuality. It is argued that the straightforward combination of these two elements leads to problems, viz. the problem of easy knowledge and the problem of missing knowledge. In other words, there is overgeneration of knowledge and there is undergeneration of knowledge. The combination of these problems cannot be solved by appealing to methods by which beliefs are formed. An alternative solution is put forward. The key is to rethink the closeness relation that is at the heart of counterfactual theories of knowledge.
Ernest Sosa and others have proposed a safety condition on knowledge: If S knows p, then in the nearest (non-actual) worlds in which S believes p, p is true. Colloquially, this is the idea that knowing requires not being easily mistaken. Here, I will argue that like another condition requiring a counterfactual relation between a subject’s belief and the world, viz. Robert Nozick’s sensitivity condition, safety leads, in certain cases, to the unacceptable result that knowledge is not closed under known implication.
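For contrast, the two subjunctive conditions at issue here are standardly glossed as follows (the counterfactual conditional symbol is defined locally; this is a common textbook rendering, not a quotation from the paper):

```latex
\newcommand{\boxright}{\mathrel{\Box\mkern-2mu\rightarrow}} % counterfactual conditional
% Sensitivity (Nozick): if p were false, S would not believe p.
K_S\,p \;\rightarrow\; (\neg p \boxright \neg B_S\,p)
% Safety (Sosa): S would believe p only if p were true, i.e. in the
% nearest worlds where S believes p, p is true.
K_S\,p \;\rightarrow\; (B_S\,p \boxright p)
```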
This study describes the level of moral sensitivity among nursing students enrolled in a traditional baccalaureate nursing program and a master’s nursing program. Survey responses to the Modified Moral Sensitivity Questionnaire for Student Nurses from 250 junior, senior, and graduate students from one nursing school were analyzed. It was not possible to draw conclusions based on the tool. Moral category analysis showed students ranked the category structuring moral meaning highest and interpersonal orientation second. The moral issue ranking highest was honesty, respect for the patient second, and third was responsibility to know the patient’s situation. Seniors agreed more often about the need to focus on patient safety. As students progress through the baccalaureate program and into the graduate program, their perspectives increasingly recognize the contextuality of moral issues. The results show a need to further develop a tool to measure moral sensitivity, using student understanding and perceptions of moral issues.
Radio Frequency Identification (RFID) is quickly growing in its applications. A variety of uses for the technology are beginning to be developed, including chips which can be used in identification cards, in individual items, and for human applications, allowing a chip to be embedded under the skin. Such chips could provide numerous benefits ranging from day-to-day convenience to the increased ability of the federal government to adequately ensure the safety of its citizens. However, there are also valid concerns about the potential of this technology to infringe on privacy, creating fears of a surveillance society. These are concerns that must be addressed quickly, with sensitivity to individual interests and societal welfare, allowing humanity to reap the benefits of convenience and safety without paying an unacceptable price in the loss of privacy.
Product safety has always been one of the main problems in engineering ethics. At times it has been discussed as primarily a problem of engineering ethics. However, the right to safety is one of the four fundamental consumer rights, and so it is also an important theme in business ethics. At the same time the problem of product safety is inseparably connected with business effectiveness: how much can we spend on product safety without making our production unprofitable? Below we will present a possible treatment of the safety problem in teaching business ethics to post-graduate industrial engineering students – as we deal with the problem at Tallinn Technical University.
In Escherichia coli, the role of lacA, the third gene of the lactose operon, has remained an enigma. I suggest that its role is the consequence of the need for cells to have safety valves that protect them from the osmotic effect created by their permeases. Safety valves allow them to cope with the buildup of osmotic pressure under accidental transient conditions. Multidrug resistance (MDR) efflux, thus named because of our anthropocentrism, is ubiquitous. Yet, the formation of simple leaks would result in futile influx/efflux cycles. Versatile modification enzymes with low sensitivity solve the problem if the modified metabolite is the one exported by MDR permeases. This may account for the pervasive presence of acetyl-transferases, such as LacA, associated with acetyl-metabolite exporters. This scenario of constraints imposed by efficient influx of metabolites provides us with a model that should be followed when constructing synthetic cells.
In his précis of a recent book, Richard Joyce writes, “My contention…is that…any epistemological benefit-of-the-doubt that might have been extended to moral beliefs…will be neutralized by the availability of an empirically confirmed moral genealogy that nowhere…presupposes their truth.” Such reasoning – falling under the heading “Genealogical Debunking Arguments” – is now commonplace. But how might “the availability of an empirically confirmed moral genealogy that nowhere… presupposes” the truth of our moral beliefs “neutralize” whatever “epistemological benefit-of-the-doubt that might have been extended to” them? In this article, I argue that there appears to be no satisfactory answer to this question. The problem is quite general, applying to all arguments with the structure of Genealogical Debunking Arguments aimed at realism about a domain meeting two conditions. The Benacerraf-Field Challenge for mathematical realism affords an important special case.
Rachael Briggs and Daniel Nolan attempt to improve on Nozick’s tracking theory of knowledge by providing a modified, dispositional tracking theory. The dispositional theory, however, faces more problems than those previously noted by John Turri. First, it is not simply that satisfaction of the theory’s conditions is unnecessary for knowledge – it is insufficient as well. Second, in one important respect, the dispositional theory is a step backwards relative to the original tracking theory: the original but not the dispositional theory can avoid Gettier-style counterexamples. Future attempts to improve the tracking theory would be wise to bear these problems in mind.
Epistemic luck is a generic notion used to describe any of a number of ways in which it can be accidental, coincidental, or fortuitous that a person has a true belief. For example, one can form a true belief as a result of a lucky guess, as when one believes through guesswork that “C” is the right answer to a multiple-choice question and one’s belief just happens to be correct. One can form a true belief via wishful thinking; for example, an optimist’s belief that it will not rain may luckily turn out to be correct, despite forecasts for heavy rain all day. One can reason from false premises to a belief that coincidentally happens to be true. One can accidentally arrive at a true belief through invalid or fallacious reasoning. And one can fortuitously arrive at a true belief from testimony that was intended to mislead but unwittingly reported the truth. In all of these cases, it is just a matter of luck that the person has a true belief. Until the twenty-first century, there was nearly universal agreement among epistemologists that epistemic luck is incompatible with knowledge. Call this view “the incompatibility thesis.” In light of the incompatibility thesis, epistemic luck presents epistemologists with three distinct but related challenges. The first is that of providing an accurate analysis of knowledge (in terms of individually necessary and jointly sufficient conditions for “S knows that p,” where ‘S’ represents the knower and ‘p’ represents the proposition known). An adequate analysis of knowledge must succeed in specifying conditions that rule out all instances of knowledge-destroying epistemic luck. The second challenge is to resolve the skeptical paradox that the ubiquity of epistemic luck generates: As will become clear in section 2c, epistemic luck is an all-pervasive phenomenon. Coupling this fact with the incompatibility thesis entails that we have no propositional knowledge.
The non-skeptical epistemologist must somehow reconcile the strong intuition that epistemic luck is not compatible with knowledge with the equally evident observation that it must be. The third challenge concerns the special skeptical threat that epistemic luck seems to pose for more reflective forms of knowledge, such as knowing that one knows. Each of these challenges will be explored in the present article.
What modal relation must a fact bear to a belief in order for this belief to constitute knowledge of that fact? Externalists have proposed various answers, including some that combine externalism with contextualism. We shall find that various forms of externalism share a modal conception of “sensitivity” open to serious objections. Fortunately, the undeniable intuitive attractiveness of this conception can be explained through an easily confused but far preferable notion of “safety.” The denouement of our reflections, finally, will be to show how replacing sensitivity with safety makes it possible to defend plain Moorean common sense against the spurious advantages over it claimed by skeptical, tracking, relevant-alternative, and contextualist accounts.
That believing truly as a matter of luck does not generally constitute knowing has become epistemic commonplace. Accounts of knowledge incorporating this anti-luck idea frequently rely on a safety or a sensitivity condition. Sensitivity-based accounts of knowledge have a well-known problem with necessary truths, to wit, that any believed necessary truth trivially counts as knowledge on such accounts. In this paper, we argue that safety-based accounts similarly trivialize knowledge of necessary truths and that two ways of responding to this problem for safety, issuing from work by Williamson and Pritchard, are of dubious success.
The paper discusses approaches to Epistemic Contextualism that model the satisfaction of the predicate ‘know’ in a given context C in terms of the notion of belief/fact-matching throughout a contextually specified similarity sphere of worlds that is centred on actuality. The paper offers three counterexamples to approaches of this type and argues that they lead to insurmountable difficulties. I conclude that what contextualists (and Subject-Sensitive Invariantists) have traditionally called the ‘epistemic standards’ of a given context C cannot be explicated in terms of a contextually specified similarity sphere that is centred on actuality. The mentioned accounts of epistemic relevance and thus the corresponding accounts of the context-sensitivity (or subject-sensitivity) of ‘knows’ are to be rejected.
In the epistemology of testimony it is often assumed that audiences are able to reliably recover asserted contents. In the philosophy of language this claim is contentious. This paper outlines one problem concerning the recovery of asserted contents, and argues that it prevents audiences from gaining testimonial knowledge in a range of cases. The recovery problem, in essence, is simply that, due to the collective epistemic limitations of the speaker and audience, speakers will, in certain cases, be insensitive to the ways in which they may be misinterpreted. As a result, audiences’ beliefs will often fail the safety and sensitivity conditions on knowledge. Once the problem has been outlined and distinguished from several related problems in the philosophy of language and the epistemology of testimony, a series of responses is considered. The first response holds that audiences possess defeaters in recovery problem cases, and thus wouldn’t form beliefs. The second response holds that the beliefs audiences form are very coarse-grained, meaning they are not very vulnerable to failures of safety and sensitivity. The final response holds that the objects of speaker meaning are not propositional. All three responses are found to be unsatisfactory.
Modal knowledge accounts like sensitivity or safety face a problem when it comes to knowing propositions that are necessarily true, because the modal condition is always fulfilled no matter how random the belief-forming method is. Pritchard models the anti-luck condition for knowledge in terms of the modal principle of safety. Thus, his anti-luck epistemology faces the same problem when it comes to logical necessities. Any belief in a proposition that is necessarily true fulfills the anti-luck condition and, therefore, qualifies as knowledge. Miščević shares Pritchard’s take on epistemic luck and acknowledges the resulting problem. In his intriguing article “Armchair Luck: Apriority, Intellection and Epistemic Luck”, Miščević suggests solving the problem by supplementing safety with a virtue-theoretic condition, “agent stability”, which he also spells out in modal terms. I will argue that Miščević is on the right track when he suggests adding a virtue-theoretic component to the safety condition. However, it should not be specified modally but rather in terms of performances that manifest competences.
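The triviality worry here can be stated schematically. On the standard textbook formulations (these renderings are not drawn from the abstract itself), sensitivity and safety are subjunctive conditions on a subject S's belief:

```latex
% Standard (Nozick-style) sensitivity and (Sosa/Williamson-style) safety.
% Bp abbreviates "S believes p"; \boxright is the subjunctive conditional.
\newcommand{\boxright}{\mathrel{\Box\!\!\rightarrow}}
\begin{align*}
\textbf{Sensitivity:} &\quad \neg p \boxright \neg Bp
  && \text{(were $p$ false, $S$ would not believe $p$)}\\
\textbf{Safety:}      &\quad Bp \boxright p
  && \text{(were $S$ to believe $p$, $p$ would be true)}
\end{align*}
% If p is necessarily true, \neg p holds at no possible world, so the
% sensitivity conditional is vacuously satisfied; likewise p holds at
% every world, so any belief in p, however randomly formed, is safe.
```

This is why a belief in a necessary truth formed by coin flip satisfies both conditions trivially, which is the problem the abstract above attributes to Pritchard's anti-luck condition.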
In his article ‘Better Communication Between Engineers and Managers: Some Ways to Prevent Many Ethically Hard Choices’1 Michael Davis analyzes the causes of the disaster in terms of a communications gap between management and engineers. When the communication between (representatives of) both groups breaks down, the organization is in (moral) trouble. Crucial information gets stuck somewhere in the organization, preventing a careful discussion and weighing of all (moral) arguments. The resulting judgment therefore has little (moral) quality. In this paper I would like to comment on some of Michael Davis’s interesting and thought-provoking insights and ideas. A company which implements Davis’s recommendations at least shows some sensitivity to organizational moral issues. But it might miss the point that moral trouble can also result from a common understanding between managers and engineers. Organizational members sometimes tend to be myopic with regard to safety issues. This paper:
1. describes different meanings of safety. Managers and engineers, as Davis mentions, are sometimes willing to compromise quality, but not to sacrifice safety. It is my contention that safety, in the sense of putting people’s lives on the line, will always be compromised, and that the discussion is about the ways to negotiate the risks.
2. focuses on a shared understanding of the situation and its implications for safety. Using examples from a case study I did on behalf of a commercial airline,2 I will try to show that it is not always the communications gap between managers and engineers which poses a risk to the stakeholders involved, but a common understanding of the situation.
3. focuses on a ‘timely concatenation of both active and latent failures’ as a cause for accidents. I will argue that, in spite of our efforts to strengthen ethical consciousness and organizational practices, there will always be accidents. They are part of the human condition, since we cannot completely control the complexity of the situations in which they occur. One can, however, make them less costly.
John MacFarlane explores how we might make sense of the idea that truth is relative. He provides new, satisfying accounts of parts of our thought and talk that have resisted traditional methods of analysis, including what we mean when we talk about what is tasty, what we know, what will happen, what might be the case, and what we ought to do.
The Epistemic Objection says that certain theories of time imply that it is impossible to know which time is absolutely present. Standard presentations of the Epistemic Objection are elliptical—and some of the most natural premises one might fill in to complete the argument end up leading to radical skepticism. But there is a way of filling in the details which avoids this problem, using epistemic safety. The new version has two interesting upshots. First, while Ross Cameron alleges that the Epistemic Objection applies to presentism as much as to theories like the growing block, the safety version does not overgeneralize this way. Second, the Epistemic Objection does generalize in a different, overlooked way. The safety objection is a serious problem for a widely held combination of views: “propositional temporalism” together with “metaphysical eternalism”.
Fitelson (1999) demonstrates that the validity of various arguments within Bayesian confirmation theory depends on which confirmation measure is adopted. The present paper adds to the results set out in Fitelson (1999), expanding on them in two principal respects. First, it considers more confirmation measures. Second, it shows that there are important arguments within Bayesian confirmation theory such that no single confirmation measure renders them all valid. Finally, the paper reviews the ramifications that this "strengthened problem of measure sensitivity" has for Bayesian confirmation theory and discusses whether it points toward pluralism about notions of confirmation.
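Measure sensitivity can be seen in miniature with two standard confirmation measures, the difference measure and the log-ratio measure. The probabilities below are invented purely for illustration; this is a sketch of the phenomenon, not an example from the paper:

```python
# Toy illustration of "measure sensitivity" in Bayesian confirmation
# theory: two standard confirmation measures disagree about which of
# two evidence/hypothesis pairs exhibits stronger confirmation.
import math

def d(prior: float, posterior: float) -> float:
    """Difference measure: d(H, E) = P(H|E) - P(H)."""
    return posterior - prior

def r(prior: float, posterior: float) -> float:
    """Log-ratio measure: r(H, E) = log(P(H|E) / P(H))."""
    return math.log(posterior / prior)

# Pair 1: P(H1) = 0.5, P(H1|E1) = 0.9   (hypothetical numbers)
# Pair 2: P(H2) = 0.1, P(H2|E2) = 0.4
d1, d2 = d(0.5, 0.9), d(0.1, 0.4)   # 0.4 vs 0.3: d ranks pair 1 higher
r1, r2 = r(0.5, 0.9), r(0.1, 0.4)   # log 1.8 vs log 4.0: r ranks pair 2 higher

print(d1 > d2)  # True
print(r1 < r2)  # True
```

Any argument whose conclusion depends on one of these orderings will thus be valid under one measure and invalid under the other, which is the structure of the problem Fitelson identified.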
Introduction: externalism and modalism -- Externalism -- Modalism -- What should the theory do? -- What's missing? -- Process reliabilism -- Goldman's causal theory -- Goldman's discrimination requirement and relevant alternatives -- Process reliabilism and why it is not enough -- Implications for skepticism -- Sensitivity -- Nozick's subjunctive conditional theory of knowledge -- Methods: an important refinement -- Objections to Nozick's theory -- Safety -- Motivating safety -- Weak and strong safety: luck and induction -- Is safety necessary for knowledge? -- Luck revisited: safety requires a process reliability condition -- Is reliability compatible with knowledge of the denials of skeptical hypotheses? -- Knowledge: reliably formed sensitive true belief -- The theory -- Problems and clarifications -- Closure and the value problem -- Closure -- The value problem.
In “Knowledge Under Threat” (Philosophy and Phenomenological Research 2012), Tomas Bogardus proposes a counterexample to the safety condition for knowledge. Bogardus argues that the case demonstrates that unsafe knowledge is possible. I argue that the case just corroborates the well-known requirement that modal conditions like safety must be relativized to methods of belief formation. I explore several ways of relativizing safety to belief-forming methods and I argue that none is adequate: if methods were individuated in those ways, safety would fail to explain several much-discussed cases. I then propose a plausible externalist principle of method individuation. On the one hand, relativizing safety to belief-forming methods in the way suggested allows the defender of safety to account for the cases. On the other hand, it shows that the target known belief of Bogardus’s example is safe. Finally, I offer a diagnosis of a common error about the kind of cases that are typically considered potential counterexamples to the necessity of the epistemic condition: proponents of the alleged counterexamples mistake a strong condition that I call super-safety for safety.
Many of the motivations in favor of contextualism about knowledge apply also to a contextualist approach to counterfactuals. I motivate and articulate such an approach, in terms of the context-sensitive 'all cases', in the spirit of David Lewis's contextualist view about knowledge. The resulting view explains intuitive data, resolves a puzzle parallel to the skeptical paradox, and renders safety and sensitivity, construed as counterfactuals, necessary conditions on knowledge.
Recent attempts to resolve the Paradox of the Gatecrasher rest on a now familiar distinction between individual and bare statistical evidence. This paper investigates two such approaches, the causal approach to individual evidence and a recently influential (and award-winning) modal account that explicates individual evidence in terms of Nozick's notion of sensitivity. This paper offers counterexamples to both approaches, explicates a problem concerning necessary truths for the sensitivity account, and argues that either view is implausibly committed to the impossibility of no-fault wrongful convictions. The paper concludes that the distinction between individual and bare statistical evidence cannot be maintained in terms of causation or sensitivity. We have to look elsewhere for a solution to the Paradox of the Gatecrasher.
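The standard toy version of the gatecrasher case (the figures below are the usual illustrative ones, not drawn from this abstract) shows why bare statistical evidence fails the sensitivity condition even at high probability:

```latex
% Toy gatecrasher case: 1000 spectators at an event, 990 of whom
% gatecrashed. For a randomly selected spectator a:
P(\text{$a$ gatecrashed} \mid \text{statistical evidence})
  = \frac{990}{1000} = 0.99
% Yet a verdict based only on this evidence is insensitive: in the
% nearest worlds where a in fact paid for a ticket, the attendance
% statistics are unchanged, so the fact-finder would still believe
% that a gatecrashed.
```

This is the structural feature that sensitivity-based accounts use to separate bare statistical evidence from individual evidence, and it is that use which the abstract above targets with its counterexamples.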
The law views with suspicion statistical evidence, even evidence that is probabilistically on a par with direct, individual evidence that the law is in no way suspicious of. But it has proved remarkably hard either to justify this suspicion or to debunk it. In this paper, we connect the discussion of statistical evidence to broader epistemological discussions of similar phenomena. We highlight Sensitivity, the requirement that a belief be counterfactually sensitive to the truth in a specific way, as a way of epistemically explaining the legal suspicion towards statistical evidence. Still, we do not think of this as a satisfactory vindication of the reluctance to rely on statistical evidence. Knowledge (and Sensitivity, and indeed epistemology in general) is of little, if any, legal value. Instead, we tell an incentive-based story vindicating the suspicion towards statistical evidence. We conclude by showing that the epistemological story and the incentive-based story are closely and interestingly related, and by offering initial thoughts about the role of statistical evidence in morality.
Timothy Williamson has provided damaging counterexamples to Robert Nozick’s sensitivity principle. The examples are based on Williamson’s anti-luminosity arguments, and they show how knowledge requires a margin for error that appears to be incompatible with sensitivity. I explain how Nozick can rescue sensitivity from Williamson’s counterexamples by appeal to a specific conception of the methods by which an agent forms a belief. I also defend the proposed conception of methods against Williamson’s criticisms.