The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information-gathering processes (IGPs) in terms of conditional independences, construct a minimal sufficient condition for a coherence ranking of information sets, and assess whether the confidence boost that results from receiving information through independent IGPs is indeed a positive function of the coherence of the information set. There are multiple interpretations of what constitute IGPs of dubious quality. Do we know our IGPs to be no better than randomization processes? Or do we know them to be better than randomization processes but not quite fully reliable, and if so, what is the nature of this lack of full reliability? Or do we not know whether they are fully reliable or not? Within the latter interpretation, does learning something about the quality of some IGPs teach us anything about the quality of the other IGPs? The Bayesian-network models demonstrate that the success of the coherentist canon is contingent on which interpretation one endorses of the claim that our IGPs are of dubious quality.
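To make the kind of model at issue concrete, here is a minimal Bayesian sketch of the "confidence boost" from independent reports of uncertain reliability. It is not the network constructed in the paper: the conditional-independence structure and every parameter value below are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): positive reports about a hypothesis H,
# each produced by an information-gathering process (IGP) that is either
# reliable (its report tracks H) or a mere randomizer (reports "H is true" with
# probability 0.5 regardless of H). Reports are conditionally independent given
# H, and each IGP's reliability is marginalized out. All values are illustrative.

def posterior_h(prior_h=0.5, prior_reliable=0.7, n_positive_reports=2):
    """P(H | n positive reports), marginalizing over each IGP's reliability."""
    def p_positive_report(h):
        p_if_reliable = 1.0 if h else 0.0
        return prior_reliable * p_if_reliable + (1 - prior_reliable) * 0.5

    joint_true = prior_h * p_positive_report(True) ** n_positive_reports
    joint_false = (1 - prior_h) * p_positive_report(False) ** n_positive_reports
    return joint_true / (joint_true + joint_false)

if __name__ == "__main__":
    for n in range(4):
        print(f"{n} reports -> P(H | reports) = {posterior_h(n_positive_reports=n):.3f}")
```

Under these made-up parameters the posterior climbs from 0.5 with no reports to roughly 0.97 with two agreeing reports, which is the sort of boost whose dependence on coherence the paper investigates.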
Knowledge is more valuable than mere true belief. Many authors contend, however, that reliabilism is incompatible with this item of common sense. If a belief is true, adding that it was reliably produced doesn't seem to make it more valuable. The value of reliability is swamped by the value of truth. In Goldman and Olsson (2009), two independent solutions to the problem were suggested. According to the conditional probability solution, reliabilist knowledge is more valuable in virtue of being a stronger indicator than mere true belief of future true belief. This article defends this solution against some objections.
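Stated schematically, the conditional probability solution can be glossed as the claim that reliabilist knowledge raises the probability of future true belief more than mere true belief does. The following inequality is a rough paraphrase of that idea, not a formula quoted from the cited work:

\[
P(\text{true belief at } t' \mid \text{reliabilist knowledge that } p \text{ at } t) \;>\; P(\text{true belief at } t' \mid \text{mere true belief that } p \text{ at } t).
\]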
We reply to Christoph Jäger's criticism of the conditional probability solution (CPS) to the value problem for reliabilism due to Goldman and Olsson (2009). We argue that while Jäger raises some legitimate concerns about the compatibility of CPS with externalist epistemology, his objections do not in the end reduce the plausibility of that solution.
In an earlier paper, I objected to certain elements of L. Jonathan Cohen's account of corroborating testimony (Olsson ). In their response to my article, Bovens, Fitelson, Hartmann and Snyder () suggest some significant improvements of the probabilistic model which I used in assessing Cohen's theses and answer some additional questions which my study raised. More problematically, they also seek to defend Cohen against my criticism. I argue, in this reply, that their attempts in this direction are unsuccessful.
According to the Argument from Disagreement (AD), widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (Synthese, published online 16 October, 2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (J Artif Soc Soc Simul 5(3):1–33, 2002; J Artif Soc Soc Simul 9(3):1–28, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take the AD seriously. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, Synthese, forthcoming; Olsson, Episteme 8(2):127–143, 2011), considerably less support for the AD is forthcoming.
People rely on reason to think about and navigate the abstract world of human relations in much the same way they rely on maps to study and traverse the physical world. Starting from that simple observation, renowned geographer Gunnar Olsson offers in Abysmal an astonishingly erudite critique of the way human thought and action have become deeply immersed in the rhetoric of cartography and how this cartographic reasoning allows the powerful to map out other people’s lives. A spectacular reading of Western philosophy, religion, and mythology that draws on early maps and atlases, Plato, Kant, and Wittgenstein, Thomas Pynchon, Gilgamesh, and Marcel Duchamp, Abysmal is itself a minimalist guide to the terrain of Western culture. Olsson roams widely but always returns to the problems inherent in reason, to question the outdated assumptions and fixed ideas that thinking cartographically entails. A work of ambition, scope, and sharp wit, Abysmal will appeal to an eclectic audience—to geographers and cartographers, but also to anyone interested in the history of ideas, culture, and art.
It is a widely accepted doctrine in epistemology that knowledge has greater value than mere true belief. But although epistemologists regularly pay homage to this doctrine, evidence for it is shaky. Is it based on evidence that ordinary people on the street make evaluative comparisons of knowledge and true belief, and consistently rate the former ahead of the latter? Do they reveal such a preference by some sort of persistent choice behavior? Neither of these scenarios is observed. Rather, epistemologists come to this conclusion because they have some sort of conception or theory of what knowledge is, and they find reasons why people should rate knowledge, so understood, ahead of mere true belief. But what if these epistemological theories are wrong? Then the assumption that knowledge is more valuable than true belief might be in trouble. We don’t wish to take a firm position against the thesis that knowledge is more valuable than true belief. But we begin this paper by arguing that there is one sense of ‘know’ under which the thesis cannot be right. In particular, there seems to be a sense of ‘know’ in which it means, simply, ‘believe truly.’ If this is correct, then knowledge—in this weak sense of the term—cannot be more valuable than true belief. What evidence is there for a weak sense of ‘knowledge’ in which it is equivalent to ‘true belief’? Knowledge seems to contrast with ignorance. Not only do knowledge and ignorance contrast with one another but they seem to exhaust the alternatives, at least for a specified person and fact. Given a true proposition p, Diane either knows p or is ignorant of it. The same point can be expressed using rough synonyms of ‘know.’ Diane is either aware of (the fact that) p or is ignorant of it. She is either cognizant of p or ignorant of it. She either possesses the information that p or she is uninformed (ignorant) of it. To illustrate these suggestions, consider a case discussed by John Hawthorne (2002). If I ask you how many people in the room know that Vienna is the capital of Austria, you will tally up the number of people in the room who possess the information that Vienna is the capital of Austria.
A fundamental assumption of theories of decision-making is that we detect mismatches between intention and outcome, adjust our behavior in the face of error, and adapt to changing circumstances. Is this always the case? We investigated the relation between intention, choice, and introspection. Participants made choices between presented face pairs on the basis of attractiveness, while we covertly manipulated the relationship between choice and outcome that they experienced. Participants failed to notice conspicuous mismatches between their intended choice and the outcome they were presented with, while nevertheless offering introspectively derived reasons for why they chose the way they did. We call this effect choice blindness.
In a seminal book, Alvin I. Goldman outlines a theory for how to evaluate social practices with respect to their veritistic value, i.e., their tendency to promote the acquisition of true beliefs (and impede the acquisition of false beliefs) in society. In the same work, Goldman raises a number of serious worries for his account. Two of them concern the possibility of determining the veritistic value of a practice in a concrete case because (1) we often don't know what beliefs are actually true, and (2) even if we did, the task of determining the veritistic value would be computationally extremely difficult. Neither problem is specific to Goldman's theory and both can be expected to arise for just about any account of veritistic value. It is argued here that the first problem does not pose a serious threat to large classes of interesting practices. The bulk of the paper is devoted to the computational problem, which, it is submitted, can be addressed in promising terms by means of computer simulation. In an attempt to add vividness to this proposal, an up-and-running simulation environment (Laputa) is presented and put to some preliminary tests.
Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual’s semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals have a small-world structure with short distances between words and high clustering. The distribution of links follows a power law truncated by an exponential cutoff, meaning that most words are poorly connected and a minority of words has a high, although bounded, number of connections. Existing aggregate networks mirror the individual link distributions, and so they are not scale-free, as has been previously assumed; still, there are properties of individual structure that the aggregate networks do not reflect. A simulation of the new sampling process suggests that it can uncover the true structure of an individual’s semantic memory.
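The abstract does not state the functional form of the link distribution. A common way of writing a power law truncated by an exponential cutoff, offered here only as an illustration of the kind of distribution described, is

\[
P(k) \propto k^{-\gamma} e^{-k/\kappa},
\]

where \(k\) is the number of links of a word, \(\gamma\) the power-law exponent and \(\kappa\) the cutoff scale; the symbols are illustrative and not taken from the paper.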
The standard way of representing an epistemic state in formal philosophy is in terms of a set of sentences, corresponding to the agent’s beliefs, and an ordering of those sentences, reflecting how well entrenched they are in the agent’s epistemic state. We argue that this widespread representational view – a view that we identify as a “Quinean dogma” – is incapable of making certain crucial distinctions. We propose, as a remedy, that any adequate representation of epistemic states must also include the agent’s research agenda, i.e., the list of questions that are open or closed at any given point in time. If the argument of the paper is sound, a person’s questions and practical interests, on the one hand, and her beliefs and theoretical values, on the other, are more tightly interwoven than has previously been assumed to be the case in formal epistemology.
Jonathan Cohen has claimed that in cases of witness agreement there is an inverse relationship between the prior probability and the posterior probability of what is being agreed: the posterior rises as the prior falls. As is demonstrated in this paper, this contention is not generally valid. In fact, in the most straightforward case exactly the opposite is true: a lower prior also means a lower posterior. This notwithstanding, there is a grain of truth to what Cohen is saying, as there are special circumstances under which a thesis similar to his holds good. What characterises these circumstances is that they allow for the fact of agreement to be surprising. In making this precise, I draw on Paul Horwich's probabilistic analysis of surprise. I also consider a related claim made by Cohen concerning the effect of lowering the prior on the strength of corroboration. 1 Introduction 2 Cohen's claim 3 A counterexample 4 A weaker claim 5 A counterexample to the weaker claim 6 The grain of truth in Cohen's claim 7 Prior probability and strength of corroboration 8 Conclusion.
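A toy calculation illustrates the "straightforward case" referred to above. Assume two independent witnesses who each report that A with a fixed hit rate and false-positive rate; nothing here is taken from Cohen's or the paper's own model, and all numbers are made up.

```python
# Toy illustration of the straightforward case (not Cohen's or the paper's
# exact model): two independent witnesses each report that A, with hit rate
# P(report | A) = 0.8 and false-positive rate P(report | not-A) = 0.2.
# As the prior of A falls, so does the posterior given joint agreement.

def posterior_given_agreement(prior, p_true=0.8, p_false=0.2, witnesses=2):
    num = prior * p_true ** witnesses
    den = num + (1 - prior) * p_false ** witnesses
    return num / den

for prior in (0.05, 0.2, 0.5):
    print(f"prior {prior:.2f} -> posterior {posterior_given_agreement(prior):.3f}")
```

With these made-up rates the posterior is about 0.46, 0.80 and 0.94 for priors of 0.05, 0.2 and 0.5 respectively: lower prior, lower posterior, contrary to Cohen's claim.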
Let us by ‘first-order beliefs’ mean beliefs about the world, such as the belief that it will rain tomorrow, and by ‘second-order beliefs’ let us mean beliefs about the reliability of first-order belief-forming processes. In formal epistemology, coherence has been studied, with much ingenuity and precision, for sets of first-order beliefs. However, to the best of our knowledge, sets including second-order beliefs have not yet received serious attention in that literature. In informal epistemology, by contrast, sets of the latter kind play an important role in some respectable coherence theories of knowledge and justification. In this paper, we extend the formal treatment of coherence to second-order beliefs. Our main conclusion is that while extending the framework to second-order beliefs sheds doubt on the generality of the notorious impossibility results for coherentism, another problem crops up that might be no less damaging to the coherentist project: facts of coherence turn out to be epistemically accessible only to agents who have a good deal of insight into matters external to their own belief states.
We prove that four theses commonly associated with coherentism are incompatible with the representation of a belief state as a logically closed set of sentences. The result is applied to the conventional coherence interpretation of the AGM theory of belief revision, which appears not to be tenable. Our argument also counts against the coherentistic acceptability of a certain form of propositional holism. We argue that the problems arise as an effect of ignoring the distinction between derived and non-derived beliefs, and we suggest that the kind of coherence relevant to epistemic justification is the coherence of non-derived beliefs.
A measure of coherence is said to be truth conducive if and only if a higher degree of coherence results in a higher likelihood of truth. Recent impossibility results strongly indicate that there are no probabilistic coherence measures that are truth conducive. Indeed, this holds even if truth conduciveness is understood in a weak ceteris paribus sense. This raises the problem of how coherence could nonetheless be an epistemically important property. Our proposal is that coherence may be linked in a certain way to reliability. We define a measure of coherence to be reliability conducive if and only if a higher degree of coherence results in a higher probability that the information sources are reliable. Restricting ourselves to the most basic case, we investigate which coherence measures in the literature are reliability conducive. It turns out that, while a number of measures fail to be reliability conducive, except possibly in a trivial and uninteresting sense, Shogenji's measure and several measures generated by Douven and Meijs's recipe are notable exceptions to this rule.
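For orientation, Shogenji's measure mentioned above is standardly defined, for propositions A1, ..., An, as the ratio of their joint probability to the product of their marginal probabilities. The snippet below simply evaluates that ratio for made-up probabilities; it is not code or data from the paper.

```python
from functools import reduce

# Shogenji's coherence measure: C_S(A1,...,An) = P(A1 & ... & An) / prod P(Ai).
# Values above 1 indicate mutual positive relevance (coherence), exactly 1
# indicates independence, below 1 incoherence. Probabilities below are made up.

def shogenji(joint, marginals):
    return joint / reduce(lambda x, y: x * y, marginals, 1.0)

print(shogenji(0.3, [0.5, 0.4]))  # 1.5 -> coherent pair
print(shogenji(0.2, [0.5, 0.4]))  # 1.0 -> probabilistically independent
```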
Epistemologists can be divided into two camps: those who think that nothing short of certainty or (subjective) probability 1 can warrant assertion and those who disagree with this claim. This paper addresses this issue by inquiring into the problem of setting the probability threshold required for assertion in such a way that the social epistemic good is maximized, where the latter is taken to be the veritistic value in the sense of Goldman (Knowledge in a social world, 1999). We provide a Bayesian model of a test case involving a community of inquirers in a social network engaged in group deliberation regarding the truth or falsity of a proposition p. Results obtained by means of computer simulation indicate that the certainty rule is optimal in the limit of inquiry and communication but that a lower threshold is preferable in less idealized cases.
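The following toy simulation is offered only to make the setup vivid. It is not the Laputa environment or the paper's model, and every structural choice and parameter is an assumption made for illustration.

```python
import random

# Toy sketch of threshold-governed assertion in a group of inquirers (not the
# actual Laputa environment or the paper's model; all parameters are made up).
# Agents inquire privately with some reliability and assert p to the group only
# when their credence exceeds the threshold; assertions nudge everyone upward.

def average_final_credence(threshold, n_agents=10, rounds=50,
                           inquiry_reliability=0.7, seed=1):
    rng = random.Random(seed)
    credences = [0.5] * n_agents  # p is stipulated to be true
    for _ in range(rounds):
        for i in range(n_agents):
            evidence_supports_p = rng.random() < inquiry_reliability
            credences[i] += 0.05 if evidence_supports_p else -0.05
            credences[i] = min(1.0, max(0.0, credences[i]))
        n_asserters = sum(c > threshold for c in credences)
        credences = [min(1.0, c + 0.01 * n_asserters) for c in credences]
    return sum(credences) / n_agents

for threshold in (0.99, 0.9):
    print(threshold, round(average_final_credence(threshold), 3))
```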
A problem occupying much contemporary epistemology is that of explaining why knowledge is more valuable than mere true belief. This paper provides an overview of this debate, starting with historical figures and early work. The contemporary debate in mainstream epistemology is then surveyed and some recent developments that deserve special attention are highlighted, including mounting doubts about the prospects for virtue epistemology to solve the value problem as well as renewed interest in classical and reliabilist‐externalist responses.
I challenge a cornerstone of the Gettier debate: that a proposed analysis of the concept of knowledge is inadequate unless it entails that people don’t know in Gettier cases. I do so from the perspective of Carnap’s methodology of explication. It turns out that the Gettier problem per se is not a fatal problem for any account of knowledge, thus understood. It all depends on how the account fares regarding other putative counterexamples and the further Carnapian desiderata of exactness, fruitfulness and simplicity. Carnap proposed his methodology more than a decade before Gettier’s seminal paper appeared, making the present solution to the problem a candidate for being the least ad hoc proposal on the market, one whose independent standing cannot be questioned, among solutions that depart from the usual method of revising a theory of knowledge in the light of counterexamples. As an illustration of the method at work, I reconstruct reliabilism as an attempt to provide an explication of the concept of knowledge.
There has been much interest in group judgment and the so-called 'wisdom of crowds'. In many real-world contexts, members of groups not only share a dependence on external sources of information, but they also communicate with one another, thus introducing correlations among their responses that can diminish collective accuracy. This has long been known, but it has, to date, not been examined to what extent different kinds of communication networks may give rise to systematically different effects on accuracy. We argue that equations that relate group accuracy, individual accuracy, and group diversity are useful theoretical tools for understanding group performance in the context of research on group structure. In particular, these equations may serve to identify the kind of group structures that improve individual accuracy without thereby excessively diminishing diversity, so that the net positive effect is an improvement even on the level of collective accuracy. Two experiments are reported in which two structures are investigated from this perspective. It is demonstrated that the more constrained network outperforms the network with a free flow of information.
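The abstract does not quote the equations in question. One standard identity of the kind described, sometimes called the diversity prediction theorem, says that the squared error of the group's mean estimate equals the mean squared individual error minus the variance (diversity) of the estimates. The snippet below verifies the identity on made-up numbers; it may or may not be the exact relation the paper uses.

```python
# Diversity prediction theorem (illustration with made-up estimates of a
# quantity whose true value is 10): collective squared error equals the
# average individual squared error minus the diversity (variance) of estimates.

estimates = [8.0, 12.0, 9.0, 15.0, 6.0]
truth = 10.0

mean = sum(estimates) / len(estimates)
collective_error = (mean - truth) ** 2
avg_individual_error = sum((e - truth) ** 2 for e in estimates) / len(estimates)
diversity = sum((e - mean) ** 2 for e in estimates) / len(estimates)

print(collective_error)                  # 0.0
print(avg_individual_error - diversity)  # 0.0, so the identity holds here
```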
The notion of epistemic coherence is interpreted as involving not only consistency but also stability. The problem of how to consolidate a belief system, i.e., revise it so that it becomes coherent, is studied axiomatically as well as in terms of set-theoretical constructions. Representation theorems are given for subtractive consolidation (where coherence is obtained by deleting beliefs) and additive consolidation (where coherence is obtained by adding beliefs).
A group is in a state of pluralistic ignorance (PI) if, roughly speaking, every member of the group thinks that his or her belief or desire is different from the beliefs or desires of the other members of the group. PI has been invoked to explain many otherwise puzzling phenomena in social psychology. The main purpose of this article is to shed light on the nature of PI states – their structure, internal consistency and opacity – using the formal apparatus of Dynamic Doxastic Logic, and also to study the sense in which such states are “fragile”, i.e. to identify plausible conditions under which a PI state cascades into a state of shared belief as the result of announcement.
Several writers on animal ethics defend the abolition of most or all animal agriculture, which they consider an unethical exploitation of sentient non-human animals. However, animal agriculture can also be seen as a co-evolution over thousands of years that has affected biology and behavior on the one hand, and quality of life of humans and domestic animals on the other. Furthermore, animals are important in sustainable agriculture. They can increase efficiency by their ability to transform materials unsuitable for human consumption and by grazing areas that would be difficult to harvest otherwise. Grazing of natural pastures is essential for the pastoral landscape, an important habitat for wild flora and fauna and much valued by humans for its aesthetic value. Thus it seems that the environment gains substantially when animals are included in sustainable agricultural systems. But what about the animals themselves? Objections against animal agriculture often refer to the disrespect for animals’ lives, integrity, and welfare in present intensive animal production systems. Of the three issues at stake, neither integrity nor animal welfare need in principle be violated in carefully designed animal husbandry systems. The main ethical conflict seems to lie in the killing of animals, which is inevitable if the system is to deliver animal products. In this paper, we present the benefits and costs to humans and animals of including animals in sustainable agriculture, and discuss how to address some of the ethical issues involved.
Although many libertarians share similar moral foundations, they disagree about whether the state can be justified. The most famous libertarian attempt to justify the state is that of Robert Nozick. This attempt has been criticized by, among others, the libertarian anarchist Murray Rothbard. In this article, Nozick’s theory and Rothbard’s critique are discussed, as well as some other attempts to justify the state from libertarian premises. Keeping the criticisms of those theories in mind, an alternative theory, which attempts to bypass the criticisms, is put forward. This alternative theory explains how a state—most probably a nonminimal democratic state—can legitimately be formed in a condition of anarchy without violating anyone’s libertarian rights. One result of this is that the rights-based case for minarchism is severely weakened.
Isaac Levi has claimed that our reliance on the testimony of others, and on the testimony of the senses, commonly produces inconsistency in our set of full beliefs. This happens if what is reported is inconsistent with what we believe to be the case. Drawing on a conception of the role of beliefs in inquiry going back to Dewey, Levi has maintained that the inconsistent belief corpus is a state of “epistemic hell”: it is useless as a basis for inquiry and deliberation. As he has also noticed, the compatibility of these two elements of his pragmatist epistemology could be called into question. For if inconsistency means hell, how can it ever be rational to enter that state, and on what basis could we attempt to regain consistency? Levi, nonetheless, has tried to show that the conflict is only apparent and that no changes of his theory are necessary. In the main part of the paper I argue, by contrast, that his attempts to reconcile these components of his view are unsuccessful. The conflict is real and thus presents a genuine threat to Deweyan pragmatism, as understood by Levi. After an attempt to pinpoint exactly where the source of the problem lies, I explore some possibilities for how to come to grips with it. I conclude that Levi can keep his fundamental thesis concerning the role of beliefs in inquiry and deliberation, provided that he (i) gives up the view that the agent can legitimately escape from inconsistency, and (ii) modifies his account of prediction alias deliberate expansion by acknowledging a third desideratum, besides probability and informational value, namely, not to cause permanent breakdown further down the line of inquiry. The result is a position which is more similar to Peter Gärdenfors's than is Levi's original theory, while retaining the basic insights of the latter.
Aims The aim of this study was to examine if it is plausible to interpret the appearance of shame in a Swedish healthcare setting as a reaction to having one's honour wronged. Methods Using a questionnaire, we studied answers from a sample of long-term sick-listed patients who had experienced negative encounters (n=1628), of whom 64% also felt wronged. We used feeling wronged to examine emotional reactions such as feeling ashamed and made the assumption that feeling shame could be associated with having one's honour wronged. In the statistical analyses, relative risks (RRs) and their 95% CIs were computed, adjusting for age, sex, disease-labelling and educational level. Results Approximately half of those who had been wronged stated that they also felt shame, and of those who felt shame, 93% (CI 91 to 95) felt that they had been wronged. The RR was 4.5 (CI 3.0 to 6.8) for shame when wronged. This can be compared with the other emotional reactions, where the RRs were between 1.1 (CI 0.9 to 1.3) and 1.4 (CI 1.2 to 1.7). We found no association between country of birth and feeling shame after having experienced negative encounters. Conclusions We found that the RR of feeling shame when wronged was significantly higher compared with other feelings. Along with theoretical considerations, and the specific types of negative encounters associated with shame, the results indicate that our research hypothesis might be plausible. We think that the results deserve to be used as a point of departure for future research.
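For readers unfamiliar with the statistic, this is how a relative risk and its 95% confidence interval are standardly computed from a 2x2 table. The counts below are hypothetical, chosen only so that the point estimate resembles the reported RR of 4.5; they are not the study's data, and the study's own estimates were additionally adjusted for covariates.

```python
import math

# Unadjusted relative risk (RR) and 95% CI from a hypothetical 2x2 table:
# "exposed" = felt wronged, "case" = felt shame. Counts are illustrative only.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR), then a Wald-type 95% confidence interval.
    se = math.sqrt(1 / exposed_cases - 1 / exposed_total
                   + 1 / unexposed_cases - 1 / unexposed_total)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lower, upper)

print(relative_risk(450, 1000, 60, 600))  # RR = 4.5 with a CI around it
```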
According to the so-called swamping problem, reliabilist knowledge is no more valuable than mere true belief. In a paper called 'Reliabilism and the value of knowledge' (in Epistemic value, edited by A. Haddock, A. Millar, and D. H. Pritchard, pp. 19–41. Oxford: Oxford University Press, 2009), Alvin I. Goldman and I proposed, among other things, a solution based on conditional probabilities. This approach, however, is heavily criticized by Jonathan L. Kvanvig in his paper 'The swamping problem redux: Pith and gist' (in Social Epistemology, edited by A. Haddock, A. Millar, and D. H. Pritchard, pp. 89–111. Oxford: Oxford University Press, 2010). In the present article, I defend the conditional probability solution against Kvanvig's objections.
I argue that the analysis most capable of systematising our intuitions about coherence as a relation is one according to which a set of beliefs, A, coheres with another set, B, if and only if the set-theoretical union of A and B is a coherent set. The second problem I consider is the role of coherence in epistemic justification. I submit that there are severe problems pertaining to the idea, defended most prominently by Keith Lehrer, that justification amounts to coherence with an acceptance system. Instead I advance a more dynamic approach according to which the problem of justification is the problem of how to merge new information with old coherently, a process which is seen to be closely connected with relational coherence.