It is tempting to think that, if a person's beliefs are coherent, they are also likely to be true. This truth conduciveness claim is the cornerstone of the popular coherence theory of knowledge and justification. Erik Olsson's new book is the most extensive and detailed study of coherence and probable truth to date. Setting new standards of precision and clarity, Olsson argues that the value of coherence has been widely overestimated. Provocative and readable, Against Coherence will make stimulating reading for epistemologists and anyone with a serious interest in truth.
It is a widely accepted doctrine in epistemology that knowledge has greater value than mere true belief. But although epistemologists regularly pay homage to this doctrine, evidence for it is shaky. Is it based on evidence that ordinary people on the street make evaluative comparisons of knowledge and true belief, and consistently rate the former ahead of the latter? Do they reveal such a preference by some sort of persistent choice behavior? Neither of these scenarios is observed. Rather, epistemologists come to this conclusion because they have some sort of conception or theory of what knowledge is, and they find reasons why people should rate knowledge, so understood, ahead of mere true belief. But what if these epistemological theories are wrong? Then the assumption that knowledge is more valuable than true belief might be in trouble. We don’t wish to take a firm position against the thesis that knowledge is more valuable than true belief. But we begin this paper by arguing that there is one sense of ‘know’ under which the thesis cannot be right. In particular, there seems to be a sense of ‘know’ in which it means, simply, ‘believe truly.’ If this is correct, then knowledge—in this weak sense of the term—cannot be more valuable than true belief. What evidence is there for a weak sense of ‘knowledge’ in which it is equivalent to ‘true belief’? Knowledge seems to contrast with ignorance. Not only do knowledge and ignorance contrast with one another but they seem to exhaust the alternatives, at least for a specified person and fact. Given a true proposition p, Diane either knows p or is ignorant of it. The same point can be expressed using rough synonyms of ‘know.’ Diane is either aware of (the fact that) p or is ignorant of it. She is either cognizant of p or ignorant of it. She either possesses the information that p or she is uninformed (ignorant) of it. To illustrate these suggestions, consider a case discussed by John Hawthorne (2002). If I ask you how many people in the room know that Vienna is the capital of Austria, you will tally up the number of people in the room who possess the information that Vienna is the capital of Austria.
The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information gathering processes (IGPs) in terms of conditional independences, construct a minimal sufficient condition for a coherence ranking of information sets and assess whether the confidence boost that results from receiving information through independent IGPs is indeed a positive function of the coherence of the information set. There are multiple interpretations of what constitute IGPs of dubious quality. Do we know our IGPs to be no better than randomization processes? Or, do we know them to be better than randomization processes but not quite fully reliable, and if so, what is the nature of this lack of full reliability? Or, do we not know whether they are fully reliable or not? Within the latter interpretation, does learning something about the quality of some IGPs teach us anything about the quality of the other IGPs? The Bayesian-network models demonstrate that the success of the coherentist canon is contingent on what interpretation one endorses of the claim that our IGPs are of dubious quality.
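As a rough illustration of the kind of Bayesian machinery at issue (a minimal sketch of my own, not one of the paper's actual network models), the snippet below computes the confidence boost produced by several conditionally independent positive reports from only moderately reliable sources. The parameters `prior`, `q_true`, `q_false` and `n_reports` are illustrative assumptions.

```python
# Minimal sketch: posterior probability of a hypothesis H after n conditionally
# independent positive reports, where each reporter asserts H with probability
# q_true if H holds and q_false otherwise.

def posterior(prior: float, q_true: float, q_false: float, n_reports: int) -> float:
    """P(H | n agreeing reports), assuming reports are independent given H."""
    likelihood_h = q_true ** n_reports          # P(reports | H)
    likelihood_not_h = q_false ** n_reports     # P(reports | not-H)
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# Fairly unreliable reporters (barely better than chance) still yield a sizable
# confidence boost when several of them independently agree.
print(posterior(prior=0.1, q_true=0.6, q_false=0.5, n_reports=1))   # ~0.118
print(posterior(prior=0.1, q_true=0.6, q_false=0.5, n_reports=5))   # ~0.217
```

The paper's question is, roughly, whether this kind of boost tracks the coherence of what is reported once the various interpretations of "dubious quality" are made precise.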
In a seminal book, Alvin I. Goldman outlines a theory for how to evaluate social practices with respect to their “veritistic value”, i.e., their tendency to promote the acquisition of true beliefs (and impede the acquisition of false beliefs) in society. In the same work, Goldman raises a number of serious worries for his account. Two of them concern the possibility of determining the veritistic value of a practice in a concrete case because (1) we often don't know what beliefs are actually true, and (2) even if we did, the task of determining the veritistic value would be computationally extremely difficult. Neither problem is specific to Goldman's theory and both can be expected to arise for just about any account of veritistic value. It is argued here that the first problem does not pose a serious threat to large classes of interesting practices. The bulk of the paper is devoted to the computational problem, which, it is submitted, can be addressed in promising terms by means of computer simulation. In an attempt to add vividness to this proposal, an up-and-running simulation environment (Laputa) is presented and put to some preliminary tests.
Our beliefs and opinions are shaped by others, making our social networks crucial in determining what we believe to be true. Sometimes this is for the good because our peers help us form a more accurate opinion. Sometimes it is for the worse because we are led astray. In this context, we address via agent-based computer simulations the extent to which patterns of connectivity within our social networks affect the likelihood that initially undecided agents in a network converge on a true opinion following group deliberation. The model incorporates a fine-grained and realistic representation of belief and trust, and it allows agents to consult outside information sources. We study a wide range of network structures and provide a detailed statistical analysis concerning the exact contribution of various network metrics to collective competence. Our results highlight and explain the collective risks involved in an overly networked or partitioned society. Specifically, we find that 96% of the variation in collective competence across networks can be attributed to differences in amount of connectivity and clustering, which are negatively correlated with collective competence. A study of bandwagon or “group think” effects indicates that both connectivity and clustering increase the probability that the network, wholly or partly, locks into a false opinion. Our work is interestingly related to Gerhard Schurz’s work on meta-induction and can be seen as broadly addressing a practical limitation of his approach.
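For readers who want to experiment with the two network metrics highlighted here, the following sketch (not the paper's actual Laputa-based simulations) computes a simple connectivity proxy and the average clustering coefficient for a few standard topologies; the generator parameters are illustrative choices of my own.

```python
# Illustrative only: connectivity (average degree) and clustering for a few
# generated network topologies, using the networkx library.
import networkx as nx

def describe(name: str, graph: nx.Graph) -> None:
    n = graph.number_of_nodes()
    avg_degree = 2 * graph.number_of_edges() / n      # simple connectivity proxy
    clustering = nx.average_clustering(graph)         # average local clustering
    print(f"{name}: avg degree = {avg_degree:.2f}, clustering = {clustering:.2f}")

describe("ring lattice", nx.watts_strogatz_graph(50, 4, 0.0, seed=1))
describe("small world", nx.watts_strogatz_graph(50, 4, 0.1, seed=1))
describe("random", nx.erdos_renyi_graph(50, 0.08, seed=1))
```

In the paper's terms, higher values of both metrics are what drive collective competence down; the sketch is only meant to show how such metrics are computed for a given network.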
I challenge a cornerstone of the Gettier debate: that a proposed analysis of the concept of knowledge is inadequate unless it entails that people don’t know in Gettier cases. I do so from the perspective of Carnap’s methodology of explication. It turns out that the Gettier problem per se is not a fatal problem for any account of knowledge, thus understood. It all depends on how the account fares regarding other putative counterexamples and the further Carnapian desiderata of exactness, fruitfulness and simplicity. Carnap proposed his methodology more than a decade before Gettier’s seminal paper appeared. Among solutions that depart from the usual method of revising a theory of knowledge in the light of counterexamples, this makes the present proposal a candidate for being the least ad hoc one on the market, and one whose independent standing cannot be questioned. As an illustration of the method at work, I reconstruct reliabilism as an attempt to provide an explication of the concept of knowledge.
Epistemologists can be divided into two camps: those who think that nothing short of certainty or (subjective) probability 1 can warrant assertion and those who disagree with this claim. This paper addresses the issue by inquiring into the problem of setting the probability threshold required for assertion in such a way that the social epistemic good is maximized, where the latter is taken to be the veritistic value in the sense of Goldman (Knowledge in a social world, 1999). We provide a Bayesian model of a test case involving a community of inquirers in a social network engaged in group deliberation regarding the truth or falsity of a proposition p. Results obtained by means of computer simulation indicate that the certainty rule is optimal in the limit of inquiry and communication but that a lower threshold is preferable in less idealized cases.
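The following toy simulation is a deliberately simplified sketch of my own, not the Bayesian network model used in the paper: agents receive private noisy signals, form credences by Bayes, assert their view only if their confidence clears a threshold, and everyone then updates on the balance of assertions. All parameter names and values are illustrative assumptions.

```python
import random

def run_once(threshold: float, n_agents: int = 20,
             n_signals: int = 3, acc: float = 0.7) -> float:
    """Average final credence in the true proposition for one community."""
    credences = []
    for _ in range(n_agents):
        correct = sum(random.random() < acc for _ in range(n_signals))
        wrong = n_signals - correct
        # Posterior credence in the truth from a flat prior and independent signals.
        odds = (acc / (1 - acc)) ** (correct - wrong)
        credences.append(odds / (1 + odds))
    # Assertion stage: only agents whose confidence clears the threshold speak.
    votes = [c >= 0.5 for c in credences if max(c, 1 - c) >= threshold]
    if votes:
        net = sum(votes) - (len(votes) - sum(votes))
        # Every agent treats each assertion as one further signal of accuracy `acc`
        # (a crude simplification: speakers also update on their own assertions).
        ratio = (acc / (1 - acc)) ** net
        credences = [c * ratio / (c * ratio + (1 - c)) for c in credences]
    return sum(credences) / n_agents

random.seed(0)
for tau in (0.6, 0.9, 0.999):
    score = sum(run_once(tau) for _ in range(2000)) / 2000
    print(f"assertion threshold {tau}: mean credence in the truth = {score:.3f}")
```

In this crude setting a near-certainty threshold simply silences communication, which is one intuitive way to see why lower thresholds can do better outside the idealized limit the paper identifies.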
A measure of coherence is said to be truth conducive if and only if a higher degree of coherence results in a higher likelihood of truth. Recent impossibility results strongly indicate that there are no probabilistic coherence measures that are truth conducive. Indeed, this holds even if truth conduciveness is understood in a weak ceteris paribus sense. This raises the problem of how coherence could nonetheless be an epistemically important property. Our proposal is that coherence may be linked in a certain way to reliability. We define a measure of coherence to be reliability conducive if and only if a higher degree of coherence results in a higher probability that the information sources are reliable. Restricting ourselves to the most basic case, we investigate which coherence measures in the literature are reliability conducive. It turns out that, while a number of measures fail to be reliability conducive, except possibly in a trivial and uninteresting sense, Shogenji's measure and several measures generated by Douven and Meijs's recipe are notable exceptions to this rule.
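For reference, the Shogenji measure mentioned above is standardly defined as the ratio of the joint probability of the propositions to the product of their marginal probabilities (a background formula, not a result of the paper):

```latex
% Shogenji's coherence measure for a set of propositions A_1, ..., A_n
C_S(A_1, \dots, A_n) \;=\; \frac{P(A_1 \wedge \dots \wedge A_n)}{P(A_1)\,P(A_2)\cdots P(A_n)}
```

A value above 1 indicates that the propositions are more likely to be true together than they would be if they were statistically independent; a value of 1 marks independence, and values below 1 mark incoherence.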
In a seminal book, Alvin I. Goldman outlines a theory for how to evaluate social practices with respect to their “veritistic value”, i.e., their tendency to promote the acquisition of true beliefs in society. In the same work, Goldman raises a number of serious worries for his account. Two of them concern the possibility of determining the veritistic value of a practice in a concrete case because we often don't know what beliefs are actually true, and even if we did, the task of determining the veritistic value would be computationally extremely difficult. Neither problem is specific to Goldman's theory and both can be expected to arise for just about any account of veritistic value. It is argued here that the first problem does not pose a serious threat to large classes of interesting practices. The bulk of the paper is devoted to the computational problem, which, it is submitted, can be addressed in promising terms by means of computer simulation. In an attempt to add vividness to this proposal, an up-and-running simulation environment is presented and put to some preliminary tests.
The paper presents and defends a Bayesian theory of trust in social networks. In the first part of the paper, we provide justifications for the basic assumptions behind the model, and we give reasons for thinking that the model has plausible consequences for certain kinds of communication. In the second part of the paper we investigate the phenomenon of overconfidence. Many psychological studies have found that people think they are more reliable than they actually are. Using a simulation environment developed to make our model computationally tractable, we show that inquirers in our model are indeed sometimes epistemically better off overestimating the reliability of their own inquiries. We also show, by contrast, that people are rarely better off overestimating the reliability of others. On the basis of these observations we formulate a novel hypothesis about the value of overconfidence.
Research programs regularly compete to achieve the same goal, such as the discovery of the structure of DNA or the construction of a TEA laser. The more the competing programs share information, the faster the goal is likely to be reached, to society’s benefit. But the “priority rule”, the scientific norm according to which the first program to reach the goal in question must receive all the credit for the achievement, provides a powerful disincentive for programs to share information. How, then, is the clash between social and individual interest resolved in scientific practice? This chapter investigates what Robert Merton called science’s “communist” norm, which mandates universal sharing of knowledge, and uses mathematical models of discovery to argue that a communist regime may be on the whole advantageous and fair to all parties, and so might be implemented by a social contract that all scientists would be willing to sign.
According to the Argument from Disagreement (AD) widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (Synthese, published online 16 October, 2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (J Artif Soc Soc Simul 5(3):1–33, 2002; J Artif Soc Soc Simul 9(3):1–28, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take seriously the AD. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, Synthese, forthcoming; Olsson, Episteme 8(2):127–143, 2011) considerably less support for the AD is forthcoming.
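As a rough sketch of the kind of opinion dynamics at issue (my own minimal rendering of Hegselmann–Krause-style bounded-confidence dynamics with a truth-attraction term, not the implementation used by Gustafsson and Peterson or in Laputa; the confidence bound eps, the weight alpha and all other values are illustrative assumptions):

```python
# Each agent averages the opinions of peers within a confidence bound eps and
# is pulled toward the truth with weight alpha.
import random

def step(opinions, truth=0.7, eps=0.2, alpha=0.1):
    updated = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]    # bounded confidence
        social = sum(peers) / len(peers)                      # average of nearby peers
        updated.append(alpha * truth + (1 - alpha) * social)  # slight pull toward truth
    return updated

random.seed(1)
opinions = [random.random() for _ in range(50)]
for _ in range(50):
    opinions = step(opinions)
print(f"opinion spread after 50 rounds: {max(opinions) - min(opinions):.4f}")
```

Even a small alpha drives the population toward consensus in this setting, which is the behavior behind Gustafsson and Peterson's argument; the paper's point is that the analogous result need not hold in a richer model such as Laputa.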
There has been much interest in group judgment and the so-called 'wisdom of crowds'. In many real world contexts, members of groups not only share a dependence on external sources of information, but they also communicate with one another, thus introducing correlations among their responses that can diminish collective accuracy. This has long been known, but it has not, to date, been examined to what extent different kinds of communication networks may give rise to systematically different effects on accuracy. We argue that equations that relate group accuracy, individual accuracy, and group diversity are useful theoretical tools for understanding group performance in the context of research on group structure. In particular, these equations may serve to identify the kind of group structures that improve individual accuracy without thereby excessively diminishing diversity so that the net positive effect is an improvement even on the level of collective accuracy. Two experiments are reported where two structures are investigated from this perspective. It is demonstrated that the more constrained network outperforms the network with a free flow of information.
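One standard identity tying these three quantities together is the so-called diversity prediction theorem; I state it here as background, and the paper may of course work with a different formulation. For individual estimates x_1, ..., x_n of a true value theta, with crowd average c-bar:

```latex
% Collective (squared) error = average individual error - diversity
(\bar{c} - \theta)^2
\;=\; \frac{1}{n}\sum_{i=1}^{n} (x_i - \theta)^2
\;-\; \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{c})^2,
\qquad \bar{c} = \frac{1}{n}\sum_{i=1}^{n} x_i
```

Communication that raises individual accuracy but erodes diversity can therefore leave collective accuracy unchanged or even worse, which is precisely the trade-off the experiments probe.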
The standard way of representing an epistemic state in formal philosophy is in terms of a set of sentences, corresponding to the agent’s beliefs, and an ordering of those sentences, reflecting how well entrenched they are in the agent’s epistemic state. We argue that this widespread representational view – a view that we identify as a “Quinean dogma” – is incapable of making certain crucial distinctions. We propose, as a remedy, that any adequate representation of epistemic states must also include the agent’s research agenda, i.e., the list of questions that are open or closed at any given point in time. If the argument of the paper is sound, a person’s questions and practical interests, on the one hand, and her beliefs and theoretical values, on the other, are more tightly interwoven than has previously been assumed to be the case in formal epistemology.
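To make the proposed enrichment concrete, here is a minimal, hypothetical data-structure sketch (my illustration, not a formalism from the paper): an epistemic state as a belief set plus an entrenchment ordering plus a research agenda of open questions.

```python
# Hypothetical sketch of the enriched representation discussed above.
from dataclasses import dataclass, field

@dataclass
class EpistemicState:
    beliefs: set[str] = field(default_factory=set)                 # sentences believed
    entrenchment: dict[str, float] = field(default_factory=dict)   # how well entrenched each belief is
    agenda: set[str] = field(default_factory=set)                  # questions currently open

    def close_question(self, question: str, answer: str, degree: float) -> None:
        """Answering a question removes it from the agenda and adds a belief."""
        self.agenda.discard(question)
        self.beliefs.add(answer)
        self.entrenchment[answer] = degree

state = EpistemicState(agenda={"Is the library open on Sundays?"})
state.close_question("Is the library open on Sundays?", "The library is open on Sundays.", 0.6)
print(state.beliefs, state.agenda)
```

The point of the paper is that the third component is not a dispensable add-on: which questions are open interacts with which beliefs are held and how entrenched they are.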
Much of what we believe we know, we know through the testimony of others. While there has been long-standing evidence that people are sensitive to the characteristics of the sources of testimony, for example in the context of persuasion, researchers have only recently begun to explore the wider implications of source reliability considerations for the nature of our beliefs. Likewise, much remains to be established concerning what factors influence source reliability. In this paper, we examine, both theoretically and empirically, the implications of using message content as a cue to source reliability. We present a set of experiments examining the relationship between source information and message content in people's responses to simple communications. The results show that people spontaneously revise their beliefs in the reliability of the source on the basis of the expectedness of a source's claim and, conversely, adjust message impact by perceived reliability; hence source reliability and message content have a bi-directional relationship. The implications are discussed for a variety of psychological, philosophical and political issues such as belief polarization and dual-route models of persuasion.
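The bi-directional relationship can be illustrated with a simple Bayesian sketch (an illustrative model of my own, not the experimental materials or the paper's formal model): the source is either reliable, asserting the claim just in case it is true, or unreliable, asserting it at random; observing the assertion then updates both the claim and the source's estimated reliability.

```python
# Toy model: prior_rel is the prior probability that the source is reliable,
# prior_claim the prior plausibility of the asserted content. A reliable source
# asserts the claim iff it is true; an unreliable one asserts it with probability
# 0.5 regardless. All numbers are illustrative assumptions.

def update_on_assertion(prior_rel: float, prior_claim: float):
    p_assert_given_rel = prior_claim          # asserts the claim iff it is true
    p_assert_given_unrel = 0.5                # random reporting
    p_assert = prior_rel * p_assert_given_rel + (1 - prior_rel) * p_assert_given_unrel
    post_rel = prior_rel * p_assert_given_rel / p_assert
    # The claim is true and asserted either via a reliable or an unreliable source.
    post_claim = prior_claim * (prior_rel + (1 - prior_rel) * 0.5) / p_assert
    return post_rel, post_claim

print(update_on_assertion(prior_rel=0.6, prior_claim=0.8))  # expected claim: reliability rises to ~0.71
print(update_on_assertion(prior_rel=0.6, prior_claim=0.1))  # surprising claim: reliability drops to ~0.23
```

Expected claims raise the source's estimated reliability while surprising ones lower it, and the impact of the claim on belief is in turn weighted by that estimate, mirroring the two directions found in the experiments.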
If you believe more things you thereby run a greater risk of being in error than if you believe fewer things. From the point of view of avoiding error, it is best not to believe anything at all, or to have very uncommitted beliefs. But given that we all in fact entertain many specific beliefs, this recommendation is obviously in flagrant dissonance with our actual epistemic practice. Let us call the problem raised by this apparent conflict the Addition Problem. In this paper we will find reasons to reject a particular premise used in the formulation of the Addition Problem, namely, the fundamental premise according to which believing more things increases the risk of error. As we will see, acquiring more beliefs need not decrease the probability of the whole, and hence need not increase the risk of error. In fact, more beliefs can mean an increase in the probability of the whole and a corresponding decrease in the risk of error. We will consider the Addition Problem as it arises in the context of the coherence theory of epistemic justification, while keeping firmly in mind that the point we wish to make is of epistemological importance also outside the specific coherentist dispute. The problem of determining exactly how the probability of the whole system depends on such factors as coherence, reliability and independence will be seen to open up an interesting area of research in which the theory of conditional independence structures is a helpful tool.
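Here is a small worked example (my own numbers, purely illustrative, in the spirit of the probabilistic setting the paper describes) in which adding a second, coherent belief together with its own independent but only partially reliable evidential report raises the probability that everything believed is true:

```python
# Joint prior over two propositions A and B, chosen so that they strongly cohere.
# prior[(a, b)] = P(A = a, B = b); all numbers are illustrative assumptions.
prior = {(True, True): 0.25, (True, False): 0.05,
         (False, True): 0.05, (False, False): 0.65}

# Each proposition gets one partially reliable, conditionally independent report.
def report_likelihood(fact: bool) -> float:
    return 0.8 if fact else 0.3   # P(positive report | fact), P(positive report | no fact)

# P(A | report on A alone): the "whole" when only A is believed.
p_a = sum(p for (a, _), p in prior.items() if a)
p_a_given_report = (0.8 * p_a) / (0.8 * p_a + 0.3 * (1 - p_a))

# P(A and B | reports on both): the "whole" after adding belief B.
weights = {state: p * report_likelihood(state[0]) * report_likelihood(state[1])
           for state, p in prior.items()}
p_ab_given_reports = weights[(True, True)] / sum(weights.values())

print(f"P(A | report_A)               = {p_a_given_report:.3f}")      # ~0.533
print(f"P(A & B | report_A, report_B) = {p_ab_given_reports:.3f}")    # ~0.660
```

The added belief B is risky on its own, yet because it coheres strongly with A and brings its own evidence, the probability of the whole system rises from about 0.53 to about 0.66 rather than falling.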
A problem occupying much contemporary epistemology is that of explaining why knowledge is more valuable than mere true belief. This paper provides an overview of this debate, starting with historical figures and early work. The contemporary debate in mainstream epistemology is then surveyed and some recent developments that deserve special attention are highlighted, including mounting doubts about the prospects for virtue epistemology to solve the value problem as well as renewed interest in classical and reliabilist‐externalist responses.
There is an emerging consensus in the literature on probabilistic coherence that such coherence cannot be truth conducive unless the information sources providing the cohering information are individually credible and collectively independent. Furthermore, coherence can at best be truth conducive in a ceteris paribus sense. Bovens and Hartmann have argued that there cannot be any measure of coherence that is truth conducive even in this very weak sense. In this paper, I give an alternative impossibility proof. I provide a relatively detailed comparison of the two results, which turn out to be logically unrelated, and argue that my result answers a question raised by Bovens and Hartmann’s study. Finally, I discuss the epistemological ramifications of these findings and try to make plausible that a shift to an explanatory framework such as Thagard’s is unlikely to turn the impossibility into a possibility.
Jonathan Cohen has claimed that in cases of witness agreement there is an inverse relationship between the prior probability and the posterior probability of what is being agreed: the posterior rises as the prior falls. As is demonstrated in this paper, this contention is not generally valid. In fact, in the most straightforward case exactly the opposite is true: a lower prior also means a lower posterior. This notwithstanding, there is a grain of truth to what Cohen is saying, as there are special circumstances under which a thesis similar to his holds good. What characterises these circumstances is that they allow for the fact of agreement to be surprising. In making this precise, I draw on Paul Horwich's probabilistic analysis of surprise. I also consider a related claim made by Cohen concerning the effect of lowering the prior on the strength of corroboration. 1 Introduction 2 Cohen's claim 3 A counterexample 4 A weaker claim 5 A counterexample to the weaker claim 6 The grain of truth in Cohen's claim 7 Prior probability and strength of corroboration 8 Conclusion.
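In the most straightforward case, with two conditionally independent witnesses who each report H with probability q_t if H is true and q_f (< q_t) if it is false, the posterior is a strictly increasing function of the prior p, so a lower prior yields a lower posterior. This is a sketch of the simple agreement model, not the paper's full analysis:

```latex
P(H \mid E_1, E_2)
\;=\; \frac{p\, q_t^{2}}{p\, q_t^{2} + (1-p)\, q_f^{2}}
\;=\; \frac{1}{1 + \frac{1-p}{p}\left(\frac{q_f}{q_t}\right)^{2}}
```

For example, with q_t = 0.8 and q_f = 0.2, a prior of 0.3 gives a posterior of about 0.87, whereas a prior of 0.1 gives a posterior of only about 0.64.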
We prove that four theses commonly associated with coherentism are incompatible with the representation of a belief state as a logically closed set of sentences. The result is applied to the conventional coherence interpretation of the AGM theory of belief revision, which appears not to be tenable. Our argument also counts against the coherentistic acceptability of a certain form of propositional holism. We argue that the problems arise as an effect of ignoring the distinction between derived and non-derived beliefs, and we suggest that the kind of coherence relevant to epistemic justification is the coherence of non-derived beliefs.
We reply to Christoph Jäger's criticism of the conditional probability solution (CPS) to the value problem for reliabilism due to Goldman and Olsson (2009). We argue that while Jäger raises some legitimate concerns about the compatibility of CPS with externalist epistemology, his objections do not in the end reduce the plausibility of that solution.
Knowledge is more valuable than mere true belief. Many authors contend, however, that reliabilism is incompatible with this item of common sense. If a belief is true, adding that it was reliably produced doesn't seem to make it more valuable. The value of reliability is swamped by the value of truth. In Goldman and Olsson (2009), two independent solutions to the problem were suggested. According to the conditional probability solution, reliabilist knowledge is more valuable in virtue of being a stronger indicator of future true belief than mere true belief is. This article defends this solution against some objections.
This article is concerned with a statistical proposal due to James R. Beebe for how to solve the generality problem for process reliabilism. The proposal is highlighted by Alvin I. Goldman as an interesting candidate solution. However, Goldman raises the worry that the proposal may not always yield a determinate result. We address this worry by proving a dilemma: either the statistical approach does not yield a determinate result or it leads to trivialization, i.e. reliability collapses into truth (and anti-reliability into falsehood). Various strategies for avoiding this predicament are considered, including revising the statistical rule or restricting its application to natural kinds. All amendments are seen to have serious problems of their own. We conclude that reliabilists need to look elsewhere for a convincing solution to the generality problem.
Let us by ‘first-order beliefs’ mean beliefs about the world, such as the belief that it will rain tomorrow, and by ‘second-order beliefs’ let us mean beliefs about the reliability of first-order, belief-forming processes. In formal epistemology, coherence has been studied, with much ingenuity and precision, for sets of first-order beliefs. However, to the best of our knowledge, sets including second-order beliefs have not yet received serious attention in that literature. In informal epistemology, by contrast, sets of the latter kind play an important role in some respectable coherence theories of knowledge and justification. In this paper, we extend the formal treatment of coherence to second-order beliefs. Our main conclusion is that while extending the framework to second-order beliefs sheds doubt on the generality of the notorious impossibility results for coherentism, another problem crops up that might be no less damaging to the coherentist project: facts of coherence turn out to be epistemically accessible only to agents who have a good deal of insight into matters external to their own belief states.
I consider two problems concerning coherence. The first is how to analyse coherence as a relation: I argue that the analysis most capable of systematising our intuitions is one according to which a set of beliefs, A, coheres with another set, B, if and only if the set-theoretical union of A and B is a coherent set. The second problem I consider is the role of coherence in epistemic justification. I submit that there are severe problems pertaining to the idea, defended most prominently by Keith Lehrer, that justification amounts to coherence with an acceptance system. Instead I advance a more dynamic approach according to which the problem of justification is the problem of how to merge new information with old coherently, a process which is seen to be closely connected with relational coherence.
According to the so-called swamping problem, reliabilist knowledge is no more valuable than mere true belief. In a paper called ‘Reliabilism and the value of knowledge’ (in Epistemic Value, edited by A. Haddock, A. Millar, and D. H. Pritchard, pp. 19–41. Oxford: Oxford University Press, 2009), Alvin I. Goldman and I proposed, among other things, a solution based on conditional probabilities. This approach, however, is heavily criticized by Jonathan L. Kvanvig in his paper ‘The swamping problem redux: Pith and gist’ (in Social Epistemology, edited by A. Haddock, A. Millar, and D. H. Pritchard, pp. 89–111. Oxford: Oxford University Press, 2010). In the present article, I defend the conditional probability solution against Kvanvig’s objections.
The notion of epistemic coherence is interpreted as involving not only consistency but also stability. The problem of how to consolidate a belief system, i.e., revise it so that it becomes coherent, is studied axiomatically as well as in terms of set-theoretical constructions. Representation theorems are given for subtractive consolidation (where coherence is obtained by deleting beliefs) and additive consolidation (where coherence is obtained by adding beliefs).
In a legendary technical report, the Google founders sketched a wisdom-of-crowds justification for PageRank, arguing that the algorithm, by aggregating incoming links to webpages in a sophisticated way, tracks importance on the web. On this reading of the report, webpages that have a high impact as measured by PageRank are supposed to be important webpages in a sense of importance that is not reducible to mere impact or popularity. In this paper, we look at the state of the art regarding the more precise statement of the thesis that PageRank and other similar in-link-based ranking algorithms can be justified by reference to the wisdom of crowds. We argue that neither the influential preferential attachment models due to Barabási and Albert nor the recent model introduced by Masterton et al. allows for a satisfactory wisdom-of-crowds justification of PageRank. As a remedy, we suggest that future work should explore “dual models” of linking on the web, i.e., models that combine the two previous approaches. Dual models view links as being attracted to both popularity and importance.
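For concreteness, PageRank itself can be sketched in a few lines of power iteration (a generic textbook rendering over a hypothetical toy graph, not Google's production algorithm or the models discussed in the paper):

```python
# Power-iteration PageRank on a toy link graph; damping factor 0.85 as usual.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 100) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if outgoing:                       # distribute rank along out-links
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:                              # dangling page: spread rank evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

# Hypothetical toy web: page "a" attracts many in-links and ends up ranked highest.
toy_web = {"a": ["b"], "b": ["a", "c"], "c": ["a"], "d": ["a", "b"]}
print(pagerank(toy_web))
```

The wisdom-of-crowds question is whether scores produced in this way track a notion of importance over and above the sheer popularity recorded by the in-link counts.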
A representation theorem is obtained for contraction operators that are based on Levi's recent proposal that selection functions should be applied to the set of saturatable contractions, rather than to maximal subsets as in the AGM framework. Furthermore, it is shown that Levi's proposal to base the selection on a weakly monotonic measure of informational value guarantees the satisfaction of both of Gärdenfors' supplementary postulates for contraction. These results indicate that Levi has succeeded in constructing a well-behaved operation of contraction that does not satisfy the postulate of recovery.
Several distinguished philosophers have argued that since the state of affairs where nothing exists is the simplest and least arbitrary of all cosmological possibilities, we have reason to be surprised that there is in fact a non-empty universe. We review this traditional argument, and defend it against two recent criticisms put forward by Peter van Inwagen and Derek Parfit. Finally, we argue that the traditional argument nevertheless needs reformulation, and that the cogency of the reformulated argument depends partly on whether there are certain conceptual limitations to what a person can hypothetically doubt.