In order to lie, you have to say something that you believe to be false. But lying is not simply saying what you believe to be false. Philosophers have made several suggestions for what the additional condition might be. For example, it has been suggested that the liar has to intend to deceive (Augustine 395, Bok 1978, Mahon 2006), that she has to believe that she will deceive (Chisholm and Feehan 1977), or that she has to warrant the truth of what she says (Carson 2006). In this paper, I argue that none of the existing definitions of lying identify a necessary condition on lying. I claim that lying is saying what you believe to be false when you believe that the following norm of conversation is in effect: "Do not say what you believe to be false" (Grice 1989, 27). And I argue that this definition handles all of the counter-examples to the existing definitions.
According to the traditional philosophical definition, you lie if and only if you say something that you believe to be false and you intend to deceive someone into believing what you say. However, philosophers have recently noted the existence of bald-faced lies, lies which are not intended to deceive anyone into believing what is said. As a result, many philosophers have removed deception from their definitions of lying. According to Jennifer Lackey, this is ‘an unhappy divorce’ because it precludes an obvious explanation of the prima facie wrongness of lying. Moreover, Lackey claims that there is a sense of deception in which all lies are deceptive. In this paper, I argue that bald-faced lies are not deceptive on any plausible notion of deception. In addition, I argue that divorcing deception from lying may not be as unhappy a result as Lackey suggests.
Deception has long been an important topic in philosophy. However, the traditional analysis of the concept, which requires that a deceiver intentionally cause her victim to have a false belief, rules out the possibility of much deception in the animal kingdom. Cognitively unsophisticated species, such as fireflies and butterflies, have simply evolved to mislead potential predators and/or prey. To capture such cases of “functional deception,” several researchers (Machiavellian intelligence II, Cambridge University Press, Cambridge, pp 112–143, 1997; Searcy and Nowicki, The evolution of animal communication, Princeton University Press, Princeton, 2005; Skyrms, Signals, Oxford University Press, Oxford, 2010) have endorsed the broader view that deception only requires that a deceiver benefit from sending a misleading signal. Moreover, in order to facilitate game-theoretic study of deception in the context of Lewisian sender-receiver games, Brian Skyrms has proposed an influential formal analysis of this view. Such formal analyses have the potential to enhance our philosophical understanding of deception in humans as well as animals. However, as we argue in this paper, Skyrms’s analysis, as well as two recently proposed alternative analyses, are seriously flawed and can lead us to draw unwarranted conclusions about deception.
According to the standard philosophical definition of lying, you lie if you say something that you believe to be false with the intent to deceive. Recently, several philosophers have argued that an intention to deceive is not a necessary condition on lying. But even if they are correct, it might still be suggested that the standard philosophical definition captures the type of lie that philosophers are primarily interested in (viz., lies that are intended to deceive). In this paper, I argue that the standard philosophical definition is not adequate as a definition of deceptive lying either. I then suggest two plausible alternative definitions of this concept.
Donald Davidson once suggested that a liar ‘must intend to represent himself as believing what he does not’. In this paper I argue that, while Davidson was mistaken about lying in a few important respects, his main insight yields a very attractive definition of lying. Namely, you lie if and only if you say something that you do not believe and you intend to represent yourself as believing what you say. Moreover, I show that this Davidsonian definition can handle counter-examples that undercut four prominent definitions of lying: viz., the traditional intend-to-deceive definition, Thomas Carson's definition, Don Fallis's definition, and Andreas Stokke's definition.
Measures of epistemic utility are used by formal epistemologists to make determinations of epistemic betterness among cognitive states. The Brier rule is the most popular choice among formal epistemologists for such a measure. In this paper, however, we show that the Brier rule is sometimes seriously wrong about whether one cognitive state is epistemically better than another. In particular, there are cases where an agent gets evidence that definitively eliminates a false hypothesis, but where the Brier rule says that things have become epistemically worse. Along the way to this ‘elimination experiment’ counter-example to the Brier rule as a measure of epistemic utility, we identify several useful monotonicity principles for epistemic betterness. We also reply to several potential objections to this counter-example.
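The elimination-experiment worry can be illustrated with a small numerical sketch (the numbers here are illustrative toy values of my own, not the paper's example): an agent divides credence over three hypotheses, evidence definitively eliminates a false one, she conditionalizes, and yet her Brier inaccuracy increases.

```python
# Toy illustration: conditionalizing on the elimination of a false
# hypothesis can *increase* Brier inaccuracy, even though the agent
# is plausibly better off epistemically (illustrative numbers only).

def brier_inaccuracy(credences, true_index):
    """Sum of squared distances from the truth's indicator vector."""
    return sum((c - (1.0 if i == true_index else 0.0)) ** 2
               for i, c in enumerate(credences))

# Three hypotheses; H1 (index 0) is in fact true.
before = [0.1, 0.6, 0.3]

# Evidence definitively rules out the false hypothesis H3 (index 2);
# the agent conditionalizes on its negation.
remaining = before[0] + before[1]
after = [before[0] / remaining, before[1] / remaining, 0.0]

b_before = brier_inaccuracy(before, 0)   # 0.81 + 0.36 + 0.09 = 1.26
b_after = brier_inaccuracy(after, 0)     # 2 * (6/7)**2, about 1.469

print(b_before, b_after)
assert after[0] > before[0]   # credence in the truth went up...
assert b_after > b_before     # ...yet the Brier rule says things got worse
```

Note that the agent's credence in the true hypothesis rises and a false hypothesis is eliminated outright, which is exactly the sense in which her state seems epistemically better despite the higher Brier penalty.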
Several different Bayesian models of epistemic utilities have been used to explain why it is rational for scientists to perform experiments. In this paper, I argue that a model, suggested independently by Patrick Maher and Graham Oddie, that assigns epistemic utility to degrees of belief in hypotheses provides the most comprehensive explanation. This is because this proper scoring rule (PSR) model captures a wider range of scientifically acceptable attitudes toward epistemic risk than the other Bayesian models that have been proposed. I also argue, however, that even the PSR model places unreasonably tight restrictions on a scientist's attitude toward epistemic risk. As a result, such Bayesian models of epistemic utilities fail as normative accounts, and not just as descriptive accounts, of scientific inquiry.
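For readers unfamiliar with proper scoring rules, here is a minimal sketch (my own toy check, using the familiar Brier rule as the example) of the property that gives the PSR model its name: an agent with credence p in a hypothesis minimizes her expected penalty by reporting p itself rather than any other value.

```python
# Minimal check that the Brier rule is a proper scoring rule:
# given credence p in hypothesis H, the expected Brier penalty
# p*(1-x)**2 + (1-p)*x**2 is minimized at the honest report x = p.

def expected_brier_penalty(p, x):
    """Expected squared error of reporting x when credence in H is p."""
    return p * (1 - x) ** 2 + (1 - p) * x ** 2

p = 0.7
reports = [i / 100 for i in range(101)]
best = min(reports, key=lambda x: expected_brier_penalty(p, x))

print(best)
assert abs(best - p) < 1e-9  # honest reporting minimizes expected penalty
```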
There are many philosophical questions surrounding the notion of lying. Is it ever morally acceptable to lie? Can we acquire knowledge from people who might be lying to us? More fundamental, however, is the question of what, exactly, constitutes the concept of lying. According to one traditional definition, lying requires intending to deceive (Augustine. (1952). Lying (M. Muldowney, Trans.). In R. Deferrari (Ed.), Treatises on various subjects (pp. 53–120). New York, NY: Catholic University of America). More recently, Thomas Carson (2006. The definition of lying. Nous, 40, 284–306) has suggested that lying requires warranting the truth of what you do not believe. This paper examines these two prominent definitions and some cases that seem to pose problems for them. Importantly, theorists working on this topic fundamentally disagree about whether these problem cases are genuine instances of lying and, thus, serve as counterexamples to the definitions on offer. To settle these disputes, we elicited judgments about the proposed counterexamples from ordinary language users unfettered by theoretical bias. The data suggest that everyday speakers of English count bald-faced lies and proviso lies as lies. Thus, we claim that a new definition is needed to capture common usage. Finally, we offer some suggestions for further research on this topic and about the moral implications of our investigation into the concept of lying.
In order to guide the decisions of real people who want to bring about good epistemic outcomes for themselves and others, we need to understand our epistemic values. In Knowledge in a Social World, Alvin Goldman has proposed an epistemic value theory that allows us to say whether one outcome is epistemically better than another. However, it has been suggested that Goldman's theory is not really an epistemic value theory at all because whether one outcome is epistemically better than another partly depends on our non-epistemic interests. In this paper, I argue that an epistemic value theory that serves the purposes of social epistemology must incorporate non-epistemic interests in much the way that Goldman's theory does. In fact, I argue that Goldman's theory does not go far enough in this direction. In particular, the epistemic value of having a particular true belief should actually be weighted by how interested we are in the topic.
We want to keep hackers in the dark about our passwords and our credit card numbers. We want to keep potential eavesdroppers in the dark about our private communications with friends and business associates. This need for secrecy raises important questions in epistemology (how do we do it?) and in ethics (should we do it?). In order to answer these questions, it would be useful to have a good understanding of the concept of keeping someone in the dark. Several philosophers (e.g., Bok, 1983; Carson, 2010; Mahon, 2009; Scheppele, 1988) have analyzed this concept (or, equivalently, the concept of keeping secrets) in terms of concealing and/or withholding information. However, their analyses incorrectly exclude clear instances of keeping someone in the dark. And more important, they incorrectly focus on possible means of keeping someone in the dark rather than on what it is to keep someone in the dark. In this paper, I argue that you keep X in the dark about a proposition P if and only if you intentionally cause X not to have a true belief that P. In addition, I show how this analysis of keeping someone in the dark can be extended from a categorical belief model of epistemic states to a credence (or degree of belief) model.
We all pursue epistemic goals as individuals. But we also pursue collective epistemic goals. In the case of many groups to which we belong, we want each member of the group - and sometimes even the group itself - to have as many true beliefs as possible and as few false beliefs as possible. In this paper, I respond to the main objections to the very idea of such collective epistemic goals. Furthermore, I describe the various ways that our collective epistemic goals can come into conflict with each other. And I argue that we must appeal to pragmatic considerations in order to resolve such conflicts.
Three of the major issues in information ethics – intellectual property, speech regulation, and privacy – concern the morality of restricting people’s access to certain information. Consequently, policies in these areas have a significant impact on the amount and types of knowledge that people acquire. As a result, epistemic considerations are critical to the ethics of information policy decisions (cf. Mill, 1978). The fact that information ethics is a part of the philosophy of information highlights this important connection with epistemology. In this paper, I illustrate how a value-theoretic approach to epistemology can help to clarify these major issues in information ethics. However, I also identify several open questions about epistemic values that need to be answered before we will be able to evaluate the epistemic consequences of many information policies.
In the Groundwork, Immanuel Kant famously argued that it would be self-defeating for everyone to follow a maxim of lying whenever it is to his or her advantage. In his recent book Signals, Brian Skyrms claims that Kant was wrong about the impossibility of universal deception. Skyrms argues that there are Lewisian signaling games in which the sender always sends a signal that deceives the receiver. I show here that these purportedly deceptive signals simply fail to make the receiver as epistemically well off as she could have been. Since the receiver is not actually misled, Kant would not have considered these games to be examples of deception, much less universal deception. However, I argue that there is an important sense of deception, endorsed by Roderick Chisholm and Thomas Feehan in their seminal work on the topic, under which Skyrms has shown that universal deception is possible.
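To see the kind of structure at issue, consider a toy sender-receiver game of my own construction (not one of Skyrms's actual examples): a partially pooling sender strategy makes one signal probabilistically misleading, and whether that counts as deception turns on exactly the distinction the paper presses, namely whether the receiver is merely less well informed than she could have been or is actually misled.

```python
# Toy Lewisian sender-receiver game (my own illustrative numbers).
# Two equiprobable states; the sender always sends signal m1 in s1,
# and sends m1 half the time in s2 as well (partial pooling).

prior = {"s1": 0.5, "s2": 0.5}
# P(signal | state)
send = {("m1", "s1"): 1.0, ("m2", "s1"): 0.0,
        ("m1", "s2"): 0.5, ("m2", "s2"): 0.5}

def posterior(signal, state):
    """Receiver's Bayesian posterior P(state | signal)."""
    total = sum(send[(signal, s)] * prior[s] for s in prior)
    return send[(signal, state)] * prior[state] / total

# Hearing m1 raises the receiver's credence in s1 from 0.5 to 2/3.
# When the true state is s2, that same signal raises her credence
# in a *false* state -- misleading in the probability-raising sense
# that formal analyses of deception build on.
print(posterior("m1", "s1"))
assert posterior("m1", "s1") > prior["s1"]
assert posterior("m1", "s2") < prior["s2"]

# But hearing m1 in state s1 leaves the receiver better informed, not
# misled: whether the m1-sender "deceives" is the contested question.
```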
In “How to Collaborate,” Paul Thagard tries to explain why there is so much collaboration in science, and so little collaboration in philosophy, by giving an epistemic cost-benefit analysis. In this paper, I argue that an adequate explanation requires a more fully developed epistemic value theory than Thagard utilizes. In addition, I offer an alternative to Thagard’s explanation of the lack of collaboration in philosophy. He appeals to its lack of a tradition of collaboration and to the a priori nature of much philosophical research. I claim that philosophers rarely collaborate simply because they can usually get the benefits without paying the costs of actually collaborating.
If knowledge is the norm of practical reasoning, then we should be able to alter people's behavior by affecting their knowledge as well as by affecting their beliefs. Thus, as Roy Sorensen (2010) suggests, we should expect to find people telling lies that target knowledge rather than just lies that target beliefs. In this paper, however, I argue that Sorensen's discovery of “knowledge-lies” does not support the claim that knowledge is the norm of practical reasoning. First, I use a Bayesian framework to show that in each of Sorensen's examples, knowledge-lies alter people's behavior by affecting their beliefs. Second, I show that while we can imagine lies that target knowledge without targeting beliefs, they cannot alter people's behavior. In other words, knowledge-lies actually work (i.e., manipulate behavior) by targeting beliefs or they do not work at all.
According to the traditional philosophical definition, you lie if and only if you assert what you believe to be false with the intent to deceive. However, several philosophers (e.g., Carson 2006, Sorensen 2007, Fallis 2009) have pointed out that there are lies that are not intended to deceive and, thus, that the traditional definition fails. In 2009, I suggested an alternative definition: you lie if and only if you say what you believe to be false when you believe that one of Paul Grice's conversational norms (“Do not say what you believe to be false”) is in effect. Faulkner (forthcoming), Stokke (forthcoming), and Pruss (2012) have subsequently argued that my 2009 definition fails as well because it counts some statements that are clearly not lies as being lies. In this paper, I identify some additional counter-examples of this sort. But I argue that my 2009 definition can easily be revised to deal with such counter-examples once we clarify that the relevant norm is really against communicating something false rather than against merely saying it. Nevertheless, I show that even this revised version of my 2009 definition fails because it counts some statements that are lies as not being lies. Lies told by young children – which uncontroversially count as lies on the traditional philosophical definition – suggest that lying (as well as asserting in general) does not require believing that such a norm is in effect. Even so, I claim that, since all liars intend to do something that would violate this norm if it were in effect, there is a successful definition of lying that is at least in the spirit of my 2009 definition.
This paper is about some of the ways in which people sometimes speak while being indifferent toward what they say. We argue that what Harry Frankfurt called ‘bullshitting’ is a mode of speech marked by indifference toward inquiry, the cooperative project of reaching truth in discourse. On this view bullshitting is characterized by indifference toward the project of advancing inquiry by making progress on specific subinquiries, represented by so-called questions under discussion. This account preserves the central insight of Frankfurt’s influential analysis of bullshitting in seeing the characteristic of bullshitting as indifference toward truth and falsity. Yet we show that speaking with indifference toward truth is a wider phenomenon than the one Frankfurt identified. The account offered in this paper thereby agrees with various critics of Frankfurt who argue that bullshitting is compatible with not being indifferent toward the truth-value of one’s assertions. Further, we argue that, while bullshitting and lying are not mutually exclusive, most lies are not instances of bullshitting. The account thereby avoids the problem that Frankfurt’s view ultimately is insufficient to adequately distinguish bullshitting and lying.
Several philosophers have used the framework of means/ends reasoning to explain the methodological choices made by scientists and mathematicians (see, e.g., Goldman 1999, Levi 1962, Maddy 1997). In particular, they have tried to identify the epistemic objectives of scientists and mathematicians that will explain these choices. In this paper, the framework of means/ends reasoning is used to study an important methodological choice made by mathematicians. Namely, mathematicians will only use deductive proofs to establish the truth of mathematical claims. In this paper, I argue that none of the epistemic objectives of mathematicians that are currently on the table provide a satisfactory explanation of this rejection of probabilistic proofs.
Wikipedia is having a huge impact on how a great many people gather information about the world. So, it is important for epistemologists and information scientists to ask whether people are likely to acquire knowledge as a result of having access to this information source. In other words, is Wikipedia having good epistemic consequences? After surveying the various concerns that have been raised about the reliability of Wikipedia, this article argues that the epistemic consequences of people using Wikipedia as a source of information are likely to be quite good. According to several empirical studies, the reliability of Wikipedia compares favorably to the reliability of traditional encyclopedias. Furthermore, the reliability of Wikipedia compares even more favorably to the reliability of those information sources that people would be likely to use if Wikipedia did not exist. In addition, Wikipedia has a number of other epistemic virtues that arguably outweigh any deficiency in terms of reliability. Even so, epistemologists and information scientists should certainly be trying to identify changes to Wikipedia that will bring about even better epistemic consequences. This article suggests that to improve Wikipedia, we need to clarify what our epistemic values are and to better understand why Wikipedia works as well as it does. Somebody who reads Wikipedia is “rather in the position of a visitor to a public restroom,” says Mr. McHenry, Britannica’s former editor. “It may be obviously dirty, so that he knows to exercise great care, or it may seem fairly clean, so that he may be lulled into a false sense of security. What he certainly does not know is who has used the facilities before him.” One wonders whether people like Mr. McHenry would prefer there to be no public lavatories at all. The Economist.
Prototypical instances of disinformation include deceptive advertising (in business and in politics), government propaganda, doctored photographs, forged documents, fake maps, internet frauds, fake websites, and manipulated Wikipedia entries. Disinformation can cause significant harm if people are misled by it. In order to address this critical threat to information quality, we first need to understand exactly what disinformation is. This paper surveys the various analyses of this concept that have been proposed by information scientists and philosophers (most notably, Luciano Floridi). It argues that these analyses are either too broad (that is, that they include things that are not disinformation), or too narrow (they exclude things that are disinformation), or both. Indeed, several of these analyses exclude important forms of disinformation, such as true disinformation, visual disinformation, side-effect disinformation, and adaptive disinformation. After considering the shortcomings of these analyses, the paper argues that disinformation is misleading information that has the function of misleading. Finally, in addition to responding to Floridi’s claim that such a precise analysis of disinformation is not necessary, it briefly discusses how this analysis can help us develop techniques for detecting disinformation and policies for deterring its spread.
Human beings regularly work together to get things done. In particular, people frequently collaborate on the production and dissemination of knowledge. For example, scientists often work together in teams to make new discoveries. How such collaborations produce knowledge, and how well they produce knowledge, are important questions for epistemology. In fact, several epistemologists have addressed such questions regarding collaborative scientific research.
Accuracy-based arguments for conditionalization and probabilism appear to have a significant advantage over their Dutch Book rivals. They rely only on the plausible epistemic norm that one should try to decrease the inaccuracy of one's beliefs. Furthermore, it seems that conditionalization and probabilism follow from a wide range of measures of inaccuracy. However, we argue that among the measures in the literature, there are some from which one can prove conditionalization, others from which one can prove probabilism, and none from which one can prove both. Hence at present, the accuracy-based approach cannot underwrite both conditionalization and probabilism.
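As a small illustration of the accuracy-based approach (a toy example with my own numbers, not one from the paper), one can check numerically that, under the Brier measure of inaccuracy, the posterior minimizing expected inaccuracy over the worlds compatible with the evidence is exactly the one given by conditionalization.

```python
# Toy check: under the Brier inaccuracy measure, conditionalization
# minimizes expected inaccuracy after learning evidence E.
# Four worlds; evidence E rules out w3 and w4 (illustrative numbers).

prior = [0.4, 0.2, 0.3, 0.1]   # credences over worlds w1..w4
E = [0, 1]                     # indices of worlds compatible with E

def brier(credences, world):
    """Brier inaccuracy of a credence function at a given world."""
    return sum((c - (1.0 if i == world else 0.0)) ** 2
               for i, c in enumerate(credences))

def expected_inaccuracy(credences):
    """Expectation over E-worlds, weighted by the prior given E."""
    p_e = sum(prior[w] for w in E)
    return sum((prior[w] / p_e) * brier(credences, w) for w in E)

# Candidate posteriors concentrate on E: q = (q1, 1 - q1, 0, 0).
candidates = [[q / 1000, 1 - q / 1000, 0.0, 0.0] for q in range(1001)]
best = min(candidates, key=expected_inaccuracy)

# Conditionalization gives (0.4/0.6, 0.2/0.6, 0, 0) = (2/3, 1/3, 0, 0),
# and the grid search lands on (approximately) just that point.
print(best[0])
assert abs(best[0] - 2 / 3) < 1e-3
```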
The doctrinal paradox shows that aggregating individual judgments by taking a majority vote does not always yield a consistent set of collective judgments. Philip Pettit, Luc Bovens, and Wlodek Rabinowicz have recently argued for the epistemic superiority of an aggregation procedure that always yields a consistent set of judgments. This paper identifies several additional epistemic advantages of their consistency maintaining procedure. However, this paper also shows that there are some circumstances where the majority vote procedure is epistemically superior. The epistemic value of maintaining consistency does not always outweigh the epistemic value of making true judgments.
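The paradox itself can be seen in the standard three-judge illustration (the classic textbook case, not data from the paper): each judge holds a consistent set of judgments on P, Q, and the conjunction P-and-Q, yet proposition-wise majority voting yields an inconsistent collective set.

```python
# The doctrinal paradox: proposition-wise majority voting over
# consistent individual judgment sets can yield an inconsistent
# collective judgment set (standard three-judge illustration).

# Each judge's judgments on (P, Q, P-and-Q); every row is consistent.
judges = [
    (True,  True,  True),   # Judge 1
    (True,  False, False),  # Judge 2
    (False, True,  False),  # Judge 3
]

def majority(values):
    """True iff a strict majority of the values are True."""
    return sum(values) > len(values) / 2

p, q, p_and_q = (majority([j[i] for j in judges]) for i in range(3))

print(p, q, p_and_q)        # True True False
# Every individual judgment set is consistent...
assert all(row[2] == (row[0] and row[1]) for row in judges)
# ...but the majority set is not: P and Q pass, yet P-and-Q fails.
assert p and q and not p_and_q
```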
Reviewed Works: Reuben Hersh, Proving is Convincing and Explaining. Philip J. Davis, Visual Theorems. Gila Hanna, H. Niels Jahnke, Proof and Application. Daniel Chazan, High School Geometry Students' Justification for Their Views of Empirical Evidence and Mathematical Proof.
Bayesians take “definite” or “single-case” probabilities to be basic. Definite probabilities attach to closed formulas or propositions. We write them here using small caps: PROB(P) and PROB(P/Q). Most objective probability theories begin instead with “indefinite” or “general” probabilities (sometimes called “statistical probabilities”). Indefinite probabilities attach to open formulas or propositions. We write indefinite probabilities using lower case “prob” and free variables: prob(Bx/Ax). The indefinite probability of an A being a B is not about any particular A, but rather about the property of being an A. In this respect, its logical form is the same as that of relative frequencies. For instance, we might talk about the probability of a human baby being female. That probability is about human babies in general — not about individuals. If we examine a baby and determine conclusively that she is female, then the definite probability of her being female is 1, but that does not alter the indefinite probability of human babies in general being female. Most objective approaches to probability tie probabilities to relative frequencies in some way, and the resulting probabilities have the same logical form as the relative frequencies. That is, they are indefinite probabilities. The simplest theories identify indefinite probabilities with relative frequencies. It is often objected that such “finite frequency theories” are inadequate because our probability judgments often diverge from relative frequencies. For example, we can talk about a coin being fair (and so the indefinite probability of a flip landing heads is 0.5) even when it is flipped only once and then destroyed (in which case the relative frequency is either 1 or 0). For understanding such indefinite probabilities, it has been suggested that we need a notion of probability that talks about possible instances of properties as well as actual instances.
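The coin objection to finite frequency theories is easy to make concrete with a quick simulation (a sketch of my own, just to dramatize the point): a single flip of a fair coin yields a relative frequency of heads of 0 or 1, never the 0.5 that the indefinite probability assigns, though long runs do approach it.

```python
# Illustration of the objection to finite frequency theories:
# a fair coin (indefinite probability of heads = 0.5) flipped only
# once has a relative frequency of heads of either 0 or 1, never 0.5.

import random

random.seed(0)  # reproducible illustration

def relative_frequency_of_heads(n_flips, p_heads=0.5):
    """Fraction of heads in n_flips simulated tosses."""
    flips = [random.random() < p_heads for _ in range(n_flips)]
    return sum(flips) / n_flips

single = relative_frequency_of_heads(1)
print(single)
assert single in (0.0, 1.0)        # one flip: frequency cannot be 0.5

# With many flips the frequency approaches the indefinite probability,
# but no finite run of tosses is guaranteed to hit 0.5 exactly.
many = relative_frequency_of_heads(100_000)
assert abs(many - 0.5) < 0.01
```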
Mathematicians only use deductive proofs to establish that mathematical claims are true. They never use inductive evidence, such as probabilistic proofs, for this task. Don Fallis (1997 and 2002) has argued that mathematicians do not have good epistemic grounds for this complete rejection of probabilistic proofs. But Kenny Easwaran (2009) points out that there is a gap in this argument. Fallis only considered how mathematical proofs serve the epistemic goals of individual mathematicians. Easwaran suggests that deductive proofs might be epistemically superior to probabilistic proofs because they are transferable. That is, one mathematician can give such a proof to another mathematician who can then verify for herself that the mathematical claim in question is true without having to rely at all on the testimony of the first mathematician. In this paper, I argue that collective epistemic goals are critical to understanding the methodological choices of mathematicians. But I argue that the collective epistemic goals promoted by transferability do not explain the complete rejection of probabilistic proofs.
We consider the mission of the librarian as an information provider and the core value that gives this mission its social importance. Our focus here is on those issues that arise in relation to the role of the librarian as an information provider. In particular, we focus on questions of the selection and organization of information, which bring up issues of bias, neutrality, advocacy, and children's rights to access information.
Two sorts of connections between privacy and knowledge (or lack thereof) have been suggested in the philosophical literature. First, Alvin Goldman has suggested that protecting privacy typically leads to less knowledge being acquired. Second, several other philosophers (e.g. Parent, Matheson, Blaauw and Peels) have claimed that lack of knowledge is definitive of having privacy. In other words, someone not knowing something is necessary and sufficient for someone else having privacy about that thing. Or equivalently, someone knowing something is necessary and sufficient for someone else losing privacy about that thing. In this paper, I argue that both of these suggestions are incorrect. I begin by arguing, contra Goldman, that protecting privacy often leads to more knowledge being acquired. I argue in the remainder of the paper, contra the defenders of the knowledge account of privacy, that someone knowing something is not necessary for someone else losing privacy about that thing.
How can one verify the accuracy of recorded information (e.g., information found in books, newspapers, and on Web sites)? In this paper, I argue that work in the epistemology of testimony (especially that of philosophers David Hume and Alvin Goldman) can help with this important practical problem in library and information science. This work suggests that there are four important areas to consider when verifying the accuracy of information: (i) authority, (ii) independent corroboration, (iii) plausibility and support, and (iv) presentation. I show how philosophical research in these areas can improve how information professionals go about teaching people how to evaluate information. Finally, I discuss several further techniques that information professionals can and should use to make it easier for people to verify the accuracy of information.
An important issue for information ethics is how much control people should have over the dissemination of information that they have created. Since intellectual property policies have an impact on our welfare primarily because they have a huge impact on our ability to acquire knowledge, there is an important role for epistemology in resolving this issue. This paper discusses the various ways in which intellectual property policies can impact knowledge acquisition both positively and negatively. In particular, it looks at how intellectual property policies can affect the amount of information that people create, the quality of that information, the accessibility of that information, the diversity of that information, and the locatability of that information.
The digital divide refers to inequalities in access to information technology. One of the main reasons why the digital divide is an important issue is that access to information technology has a tremendous impact on people's ability to acquire knowledge. According to Alvin Goldman (1999), the project of social epistemology is to identify policies and practices that have good epistemic consequences. In this paper, I argue that this sort of approach to social epistemology can help us to decide on policies for dealing with the digital divide. I argue, however, that Goldman's specific proposals for evaluating policies are not adequate. I make an alternative proposal based on the work of John Rawls (1971) on distributive justice.