It is tempting to think that, if a person's beliefs are coherent, they are also likely to be true. This truth conduciveness claim is the cornerstone of the popular coherence theory of knowledge and justification. Erik Olsson's new book is the most extensive and detailed study of coherence and probable truth to date. Setting new standards of precision and clarity, Olsson argues that the value of coherence has been widely overestimated. Provocative and readable, Against Coherence will make stimulating reading for epistemologists and anyone with a serious interest in truth.
There are at least two different aspects of our rational evaluation of agents’ doxastic attitudes. First, we evaluate these attitudes according to whether they are supported by one’s evidence (substantive rationality). Second, we evaluate these attitudes according to how well they cohere with one another (structural rationality). In previous work, I’ve argued that substantive and structural rationality really are distinct, sui generis, kinds of rationality – call this view ‘dualism’, as opposed to ‘monism’, about rationality – by arguing that the requirements of substantive and structural rationality can come into conflict. In this paper, I push the dialectic on this issue forward in two main ways. First, I argue that the most promising ways of resisting the diagnosis of my cases as conflicts still end up undermining monism in different ways. Second, supposing for the sake of argument that we should understand the cases as conflicts, I address the question of what we should do when such conflicts arise. I argue that, at least in a prominent kind of conflict case, the coherence requirements take precedence over the evidential requirements.
Putnam (1975) infers from the success of a scientific theory to its approximate truth and the reference of its key term. Laudan (1981) objects that some past theories were successful, and yet their key terms did not refer, so they were not even approximately true. Kitcher (1993) replies that the past theories are approximately true because their working posits are true, although their idle posits are false. In contrast, I argue that successful theories which cohere with each other are approximately true, and that their key terms refer. My position is immune to Laudan’s counterexamples to Putnam’s inference and yields a solution to a problem with Kitcher’s position.
This paper aims to contribute to our understanding of the notion of coherence by explicating in probabilistic terms, step by step, what seem to be our most basic intuitions about that notion, to wit, that coherence is a matter of hanging or fitting together, and that coherence is a matter of degree. A qualitative theory of coherence will serve as a stepping stone to formulate a set of quantitative measures of coherence, each of which seems to capture well the aforementioned intuitions. Subsequently it will be argued that one of those measures does better than the others in light of some more specific intuitions about coherence. This measure will be defended against two seemingly obvious objections.
This target article presents a new computational theory of explanatory coherence that applies to the acceptance and rejection of scientific hypotheses as well as to reasoning in everyday life. The theory consists of seven principles that establish relations of local coherence between a hypothesis and other propositions. A hypothesis coheres with propositions that it explains, or that explain it, or that participate with it in explaining other propositions, or that offer analogous explanations. Propositions are incoherent with each other if they are contradictory. Propositions that describe the results of observation have a degree of acceptability on their own. An explanatory hypothesis is accepted if it coheres better overall than its competitors. The power of the seven principles is shown by their implementation in a connectionist program called ECHO, which treats hypothesis evaluation as a constraint satisfaction problem. Inputs about the explanatory relations are used to create a network of units representing propositions, while coherence and incoherence relations are encoded by excitatory and inhibitory links. ECHO provides an algorithm for smoothly integrating theory evaluation based on considerations of explanatory breadth, simplicity, and analogy. It has been applied to such important scientific cases as Lavoisier's argument for oxygen against the phlogiston theory and Darwin's argument for evolution against creationism, and also to cases of legal reasoning. The theory of explanatory coherence has implications for artificial intelligence, psychology, and philosophy.
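The constraint-satisfaction idea can be illustrated with a toy settling network in the spirit of the abstract's description. This is only a sketch, not Thagard's actual ECHO program: the link weights, decay rate, and update rule are illustrative assumptions. Here H1 explains two pieces of evidence, H2 explains only one, and the two hypotheses contradict each other.

```python
# Toy explanatory-coherence network (illustrative, not Thagard's ECHO).
links = {('H1', 'E1'): 0.4, ('H1', 'E2'): 0.4,   # excitatory: H explains E
         ('H2', 'E1'): 0.4,
         ('H1', 'H2'): -0.6}                      # inhibitory: contradiction

units = {'H1': 0.01, 'H2': 0.01, 'E1': 1.0, 'E2': 1.0}  # evidence clamped
decay = 0.05

def net_input(u):
    """Weighted sum of activations flowing into unit u over symmetric links."""
    total = 0.0
    for (a, b), w in links.items():
        if u == a:
            total += w * units[b]
        elif u == b:
            total += w * units[a]
    return total

for _ in range(200):                  # let the network settle
    new = dict(units)
    for u in ('H1', 'H2'):            # evidence units stay clamped at 1.0
        net = net_input(u)
        growth = net * (1 - units[u]) if net > 0 else net * (units[u] + 1)
        new[u] = max(-1.0, min(1.0, units[u] * (1 - decay) + growth))
    units = new

print(units['H1'] > units['H2'])      # the broader explanation wins
```

With these assumed weights, the hypothesis with greater explanatory breadth settles to high activation while its contradictory rival is driven negative, which is the qualitative behavior the seven principles are meant to capture.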
What is the relation between coherence and truth? This paper rejects numerous answers to this question, including the following: truth is coherence; coherence is irrelevant to truth; coherence always leads to truth; coherence leads to probability, which leads to truth. I will argue that coherence of the right kind leads to at least approximate truth. The right kind is explanatory coherence, where explanation consists in describing mechanisms. We can judge that a scientific theory is progressively approximating the truth if it is increasing its explanatory coherence in two key respects: broadening by explaining more phenomena and deepening by investigating layers of mechanisms. I sketch an explanation of why deepening is a good epistemic strategy and discuss the prospect of deepening knowledge in the social sciences and everyday life.
Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
In 2012, the Geological Time Scale, which sets the temporal framework for studying the timing and tempo of all major geological, biological, and climatic events in Earth’s history, had one-quarter of its boundaries moved in a widespread revision of radiometric dates. The philosophy of metrology helps us understand this episode, and it, in turn, elucidates the notions of calibration, coherence, and consilience. I argue that coherence testing is a distinct activity preceding calibration and consilience, and I highlight the value of discordant evidence and trade-offs scientists face in calibration. The iterative nature of calibration, moreover, raises the problem of legacy data.
Evolutionary theory coheres with its neighboring theories, such as the theory of plate tectonics, molecular biology, electromagnetic theory, and the germ theory of disease. These neighboring theories were previously unconceived, but they were later conceived, and then they cohered with evolutionary theory. Since evolutionary theory has been strengthened by its several neighboring theories that were previously unconceived, it will be strengthened by infinitely many hitherto unconceived neighboring theories. This argument for evolutionary theory echoes the problem of unconceived alternatives. Ironically, however, the former recommends that we take the realist attitude toward evolutionary theory, while the latter recommends that we take the antirealist attitude toward it.
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) results in a higher likelihood that the witnesses are reliable. Recently, it has been proved that several coherence measures proposed in the literature are reliability conducive in a restricted scenario (Olsson and Schubert 2007, Synthese 157:297–308). My aim is to investigate which coherence measures turn out to be reliability conducive in the more general scenario where it is any finite number of witnesses that give equivalent reports. It is shown that only the so-called Shogenji measure is reliability conducive in this scenario. I take that to be an argument for the Shogenji measure being a fruitful explication of coherence.
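For concreteness, the Shogenji measure compares the joint probability of a set of propositions with what that probability would be if they were independent: C(A1, …, An) = P(A1 ∧ … ∧ An) / (P(A1) · … · P(An)), with values above 1 indicating positive coherence. A minimal sketch (the toy joint distribution below is an illustrative assumption, not from the paper):

```python
def shogenji(joint, n):
    """Shogenji coherence: P(A1 & ... & An) / (P(A1) * ... * P(An)).

    `joint` maps each possible world, a tuple of n truth values, to its
    probability; position i of the tuple is the truth value of A_i.
    """
    def marginal(i):
        return sum(p for world, p in joint.items() if world[i])
    conjunction = sum(p for world, p in joint.items() if all(world))
    denom = 1.0
    for i in range(n):
        denom *= marginal(i)
    return conjunction / denom

# Two positively correlated reports: coherence above 1.
joint = {(True, True): 0.4, (True, False): 0.1,
         (False, True): 0.1, (False, False): 0.4}
print(shogenji(joint, 2))  # 0.4 / (0.5 * 0.5) = 1.6
```

Independent propositions score exactly 1 on this measure, which is what makes it a natural baseline for "agreement beyond chance".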
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as usual, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is “transmitted” to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
This paper examines how coherence of the contents of evidence affects the transmission of probabilistic support from the evidence to the hypothesis. It is argued that coherence of the contents in the sense of the ratio of the positive intersection reduces the transmission of probabilistic support, though this negative impact of coherence may be offset by other aspects of the relations among the contents. It is argued further that there is no broader conception of coherence whose impact on the transmission of probabilistic support is never offset by other aspects of the relations among the contents. The paper also examines reasons for the contrary impression that coherence of the contents increases the transmission of probabilistic support, especially in the special case where the hypothesis to evaluate is the conjunction of the contents of evidence.
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) of a set of testimonies implies a higher probability that the witnesses are reliable. Recently, it has been proved that the Shogenji measure of coherence is reliability conducive in restricted scenarios (e.g., Olsson and Schubert, Synthese, 157:297–308, 2007). In this article, I investigate whether the Shogenji measure, or any other coherence measure, is reliability conducive in general. An impossibility theorem is proved to the effect that this is not the case. I conclude that coherence is not reliability conducive.
Seismic coherence is a routine measure of seismic reflection similarity for interpreters seeking structural boundary and discontinuity features that may not be properly highlighted on original amplitude volumes. One mostly wishes to use the broadest band seismic data for interpretation. However, because of thickness tuning effects, spectral components of specific frequencies can highlight features of certain thicknesses with higher signal-to-noise ratio than others. Seismic stratigraphic features may be buried in the full-bandwidth data, but can be “lit up” at certain spectral components. For the same reason, coherence attributes computed from spectral voice components also often provide sharper images, with the “best” component being a function of the tuning thickness and the reflector alignment across faults. Although one can corender three coherence images using red-green-blue (RGB) blending, a display of the information contained in more than three volumes in a single image is difficult. We address this problem by computing covariance matrices for each spectral component and adding them together, resulting in a “multispectral” coherence algorithm. The multispectral coherence images provide better images of channel incisement, and they are less noisy than those computed from the full-bandwidth data. In addition, multispectral coherence also provides a significant advantage over RGB blended volumes: the information content from unlimited spectral voices can be combined into one volume, which is useful for further processing, such as color-corendering display with other related attributes, such as petrophysical parameters plotted against a polychromatic color bar. We demonstrate the value of multispectral coherence by comparing it with RGB blended volumes and with coherence computed from a spectrally balanced, full-bandwidth seismic amplitude volume from a megamerge survey acquired over the Red Fork Formation of the Anadarko Basin, Oklahoma.
Reliabilism is an intuitive and attractive view about epistemic justification. However, it has many well-known problems. I offer a novel condition on reliabilist theories of justification. This method coherence condition requires that a method be appropriately tested by appeal to a subject’s other belief-forming methods. Adding this condition to reliabilism provides a solution to epistemic circularity worries, including the bootstrapping problem.
We provide a self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem recapitulates insights achieved by other investigators, and clarifies the connection of coherence and proper scoring rules to Bregman divergence.
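The domination phenomenon the theorem concerns can be illustrated numerically with the quadratic (Brier) score, one proper scoring rule. This is a minimal sketch; the particular forecasts chosen are illustrative, not taken from the paper. An incoherent forecast for an event and its negation (probabilities summing to more than 1) is beaten in every state of the world by a coherent rival:

```python
def brier(forecast, event_true):
    """Quadratic (Brier) loss of a forecast (p(A), p(not-A)) in one state."""
    truth = (1.0, 0.0) if event_true else (0.0, 1.0)
    return sum((f - t) ** 2 for f, t in zip(forecast, truth))

incoherent = (0.6, 0.6)   # p(A) + p(not-A) = 1.2: probabilistically incoherent
coherent = (0.5, 0.5)     # a coherent rival (its projection onto the simplex)

for event_true in (True, False):
    print(brier(incoherent, event_true), brier(coherent, event_true))
# In both states the coherent forecast has strictly lower loss (0.5 vs 0.52),
# so the incoherent forecast is dominated.
```

This is the direction of the theorem that runs from incoherence to domination; the proof in the paper also establishes the converse, that coherent forecasts are never dominated under any proper scoring rule.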
Evidence on the coherence between emotion and facial expression in adults from laboratory experiments is reviewed. High coherence has been found in several studies between amusement and smiling; low to moderate coherence between other positive emotions and smiling. The available evidence for surprise and disgust suggests that these emotions are accompanied by their “traditional” facial expressions, and even components of these expressions, only in a minority of cases. Evidence concerning sadness, anger, and fear is very limited. For sadness, one study suggests that high emotion–expression coherence may exist in specific situations, whereas for anger and fear, the evidence points to low coherence. Insufficient emotion intensity and inhibition of facial expressions seem unable to account for the observed dissociations between emotion and facial expression.
Coherentism maintains that coherent beliefs are more likely to be true than incoherent beliefs, and that coherent evidence provides more confirmation of a hypothesis when the evidence is made coherent by the explanation provided by that hypothesis. Although probabilistic models of credence ought to be well-suited to justifying such claims, negative results from Bayesian epistemology have suggested otherwise. In this essay we argue that the connection between coherence and confirmation should be understood as a relation mediated by the causal relationships among the evidence and a hypothesis, and we offer a framework for doing so by fitting together probabilistic models of coherence, confirmation, and causation. We show that the causal structure among the evidence and hypothesis is sometimes enough to determine whether the coherence of the evidence boosts confirmation of the hypothesis, makes no difference to it, or even reduces it. We also show that, ceteris paribus, it is not the coherence of the evidence that boosts confirmation, but rather the ratio of the coherence of the evidence to the coherence of the evidence conditional on a hypothesis.
The question of coherence of rules for changing degrees of belief in the light of new evidence is studied, with special attention being given to cases in which evidence is uncertain. Belief change by the rule of conditionalization on an appropriate proposition and belief change by "probability kinematics" on an appropriate partition are shown to have like status.
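Belief change by probability kinematics (Jeffrey conditionalization) on a partition can be sketched in a few lines. The idea is that uncertain evidence fixes new probabilities for the cells of a partition while preserving the probability ratios within each cell; the worlds and numbers below are an illustrative toy, not drawn from the paper:

```python
def jeffrey_update(prior, partition, new_cell_probs):
    """Probability kinematics: give each partition cell E_i its new
    probability q_i while keeping the relative probabilities of worlds
    within each cell unchanged.

    prior: dict world -> probability
    partition: dict cell_label -> set of worlds
    new_cell_probs: dict cell_label -> q_i (summing to 1)
    """
    posterior = {}
    for label, cell in partition.items():
        old_mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = prior[w] * new_cell_probs[label] / old_mass
    return posterior

# Toy worlds (weather, light level); a glimpse outside raises P(rain) to 0.7.
prior = {('rain', 'dark'): 0.3, ('rain', 'light'): 0.1,
         ('dry', 'dark'): 0.2, ('dry', 'light'): 0.4}
partition = {'rain': {('rain', 'dark'), ('rain', 'light')},
             'dry': {('dry', 'dark'), ('dry', 'light')}}
post = jeffrey_update(prior, partition, {'rain': 0.7, 'dry': 0.3})
print(post[('rain', 'dark')])  # 0.3 * 0.7 / 0.4 = 0.525
```

When a cell gets new probability 1, this reduces to ordinary conditionalization on that cell, which is the sense in which the two rules have like status.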
Striving for a probabilistic explication of coherence, scholars proposed a distinction between agreement and striking agreement. In this paper I argue that only the former should be considered a genuine concept of coherence. In a second step the relation between coherence and reliability is assessed. I show that it is possible to concur with common intuitions regarding the impact of coherence on reliability in various types of witness scenarios by means of an agreement measure of coherence. Highlighting the need to separate the impact of coherence and specificity on reliability it is finally shown that a recently proposed vindication of the Shogenji measure qua measure of coherence vanishes.
We discuss several features of coherent choice functions—where the admissible options in a decision problem are exactly those that maximize expected utility for some probability/utility pair in a fixed set S of probability/utility pairs. In this paper we consider, primarily, normal form decision problems under uncertainty—where only the probability component of S is indeterminate and utility for two privileged outcomes is determinate. Coherent choice distinguishes between each pair of sets of probabilities regardless of the “shape” or “connectedness” of the sets of probabilities. We axiomatize the theory of choice functions and show these axioms are necessary for coherence. The axioms are sufficient for coherence using a set of probability/almost-state-independent utility pairs. We give sufficient conditions when a choice function satisfying our axioms is represented by a set of probability/state-independent utility pairs with a common utility.
Recent work on rationality has been increasingly attentive to “coherence requirements”, with heated debates about both the content of such requirements and their normative status (e.g., whether there is necessarily reason to comply with them). Yet there is little to no work on the metanormative status of coherence requirements. Metaphysically: what is it for two or more mental states to be jointly incoherent, such that they are banned by a coherence requirement? In virtue of what are some putative requirements genuine and others not? Epistemologically: how are we to know which of the requirements are genuine and which aren’t? This paper tries to offer an account that answers these questions. On my account, the incoherence of a set of attitudinal mental states is a matter of its being (partially) constitutive of the mental states in question that, for any agent that holds these attitudes jointly, the agent is disposed, when conditions of full transparency are met, to give up at least one of the attitudes.
In this paper, I offer a novel view of the coherence (or structural) requirements on belief and intention, according to which they are not norms, but rather principles describing how your beliefs and intentions operate. I first argue, on the basis of the unintelligibility of some relevant attitude-reports, that there are conditions under which you simply do not count as believing or intending unless your beliefs and intentions satisfy the requirements: the conditions under which all of your relevant attitudes are occurrent or activated. I then argue that you are subject to a coherence requirement only if your relevant attitudes are all activated, for you are not necessarily subject to the charge of irrationality in violating a coherence requirement when your attitudes are not all activated. If so, however, you satisfy the coherence requirements whenever you are subject to them, which makes it plausible that the “requirements” should be seen as descriptive principles about belief and intention.
It is obvious that we would not want to demand that an agent's beliefs at different times exhibit the same sort of consistency that we demand from an agent's simultaneous beliefs; there's nothing irrational about believing P at one time and not-P at another. Nevertheless, many have thought that some sort of coherence or stability of beliefs over time is an important component of epistemic rationality.
This paper considers an application of work on probabilistic measures of coherence to inference to the best explanation (IBE). Rather than considering information reported from different sources, as is usually the case when discussing coherence measures, the approach adopted here is to use a coherence measure to rank competing explanations in terms of their coherence with a piece of evidence. By adopting such an approach IBE can be made more precise, and so a major objection to this mode of reasoning can be addressed. Advantages of the coherence-based approach are pointed out by comparing it with several other ways to characterize ‘best explanation’ and showing that it takes into account their insights while overcoming some of their problems. The consequences of adopting this approach for IBE are discussed in the context of recent discussions about the relationship between IBE and Bayesianism.
This paper presents a conception of the self partially in terms of a particular notion of preference. It develops a coherentist account of when one's preferences are "authorized", or sanctioned as one's own, and presents a coherence theory of autonomous action. The view presented solves certain problems with hierarchical accounts of freedom, such as Harry Frankfurt's.
An education for cultural coherence tends to the child’s well-being through identity construction and maintenance. Critics charge that this sort of education will not bode well for the future autonomy of children. I will argue that culturally coherent education, provided there is no coercion, can lend itself to eventual autonomy and may assist minority children in countering the negative stereotypes and discrimination they face in the larger society. Further, I will argue that few individuals actually possess an entirely coherent identity; rather, most of us possess hybrid identities that lend themselves to multiple, not necessarily conflicting allegiances.
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) among testimonies implies a higher probability that the witnesses are reliable. Recently, it has been proved that several coherence measures proposed in the literature are reliability conducive in scenarios of equivalent testimonies (Olsson and Schubert 2007; Schubert, to appear). My aim is to investigate which coherence measures turn out to be reliability conducive in the more general scenario where the testimonies do not have to be equivalent. It is shown that four measures are reliability conducive in the present scenario, all of which are ordinally equivalent to the Shogenji measure. I take that to be an argument for the Shogenji measure being a fruitful explication of coherence.
Over the years several non-equivalent probabilistic measures of coherence have been discussed in the philosophical literature. In this paper we examine these measures with respect to their empirical adequacy. Using test cases from the coherence literature as vignettes for psychological experiments, we investigate whether the measures can predict the subjective coherence assessments of the participants. It turns out that the participants’ coherence assessments are best described by Roche’s coherence measure based on Douven and Meijs’ average mutual support approach and the conditional probability.
Locke has been accused of failing to have a coherent understanding of consciousness, since it can be identical neither to reflection nor to ordinary perception without contradicting other important commitments. I argue that the account of consciousness is coherent once we see that, for Locke, perceptions of ideas are complex mental acts and that consciousness can be seen as a special kind of self-referential mental state internal to any perception of an idea.
Nonmonotonic reasoning is often claimed to mimic human common sense reasoning. Only a few studies, though, have investigated this claim empirically. We report four experiments which investigate three rules of SYSTEM P, namely the AND, the LEFT LOGICAL EQUIVALENCE, and the OR rule. The actual inferences of the subjects are compared with the coherent normative upper and lower probability bounds derived from a non-infinitesimal probability semantics of SYSTEM P. We found a relatively good agreement of human reasoning and principles of nonmonotonic reasoning. Contrary to the results reported in the ‘heuristics and biases’ tradition, the subjects committed relatively few upper bound violations (conjunction fallacies).
In this paper I discuss the foundations of a formal theory of coherent and conservative belief change that is suitable to be used as a method for constructing iterated changes of belief, sensitive to the history of earlier belief changes, and independent of any form of dispositional coherence. I review various ways to conceive the relationship between the beliefs actually held by an agent and her belief change strategies, show the problems they suffer from, and suggest that belief states should be represented by unary revision functions that take sequences of inputs. Three concepts of coherence implicit in current theories of belief change are distinguished: synchronic, diachronic and dispositional coherence. Diachronic coherence is essentially identified with what is known as conservatism in epistemology. The present paper elaborates on the philosophical motivation of the general framework; formal details and results are provided in a companion paper.
In Book I of the Republic, or so I shall argue, Plato gives us a glimpse of sheer horror. In the character, beliefs, and desires of Thrasymachus, Plato aims to personify some of the most diabolical dangers that lurk in human nature. In this way, the role that Thrasymachus plays for Plato is akin to the role that for Hobbes is played by the bellum omnium contra omnes, the war of all against all, which would allegedly be the inevitable result of a "state of nature", where human beings have no government to terrorize them into obedience. It is also akin to the role that for Kant is played by the "radical evil" that is allegedly an indelible feature of human nature itself. As I shall try to show, the desires that characterize Thrasymachus are of the kind that are described in Book IX as "lawless" desires, desires of the wild insane kind that dominate the tyrannical soul; and his beliefs systematically reflect these desires, in a way that has its own hideously coherent logic.
Seismic coherence is commonly used to delineate structural and stratigraphic discontinuities. We generally use full-bandwidth seismic data to calculate coherence. However, some seismic stratigraphic features may be buried in this full-bandwidth data but can be highlighted by certain spectral components. Due to thin-bed tuning phenomena, discontinuities in a thicker stratigraphic feature may be tuned and thus better delineated at a lower frequency, whereas discontinuities in the thinner units may be tuned and thus better delineated at a higher frequency. Additionally, whether due to the seismic data quality or underlying geology, certain spectral components exhibit higher quality than other components, resulting in correspondingly higher quality coherence images. Multispectral coherence provides an effective tool to exploit these observations. We evaluate the performance of multispectral coherence using different spectral decomposition methods: the continuous wavelet transform (CWT), maximum entropy, the amplitude volume technique (AVT), and spectral probes. Applications to a 3D seismic data volume indicate that multispectral coherence images are superior to full-bandwidth coherence, providing better delineation of incised channels with less noise. From the CWT experiments, we find that exponentially spaced CWT components provide better coherence images than equally spaced components for the same computation cost. The multispectral coherence image computed using maximum entropy spectral voices further improves the resolution of the thinner channels and small-scale features. The coherence from the AVT data set provides continuous images of thicker channel boundaries but poor images of the small-scale features inside the thicker channels. Additionally, multispectral coherence computed using the nonlinear spectral probes exhibits more balanced images and reveals clear small-scale geologic features inside the thicker channel. However, because amplitudes are not preserved in the nonlinear spectral probe decomposition, noise in the noisier shorter-period components has equal weight when building the covariance matrix, resulting in increased noise in the generated multispectral coherence images.
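The covariance-summing construction described in these two abstracts can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' production algorithm: the synthetic "spectral voices" are random band-limited stand-ins, and the coherence estimate used is the classic eigenstructure ratio λ_max / trace(C), which equals 1 for identical traces in the analysis window.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical analysis window: 5 traces x 16 samples per spectral voice.
# Each voice stands in for one band-limited (spectrally decomposed) copy
# of the data; `similarity` controls how alike the traces are.
def toy_voice(similarity):
    base = rng.standard_normal(16)
    return np.stack([base + (1 - similarity) * rng.standard_normal(16)
                     for _ in range(5)])

voices = [toy_voice(0.9), toy_voice(0.7), toy_voice(0.8)]

# Multispectral coherence (sketch): sum the trace-to-trace covariance
# matrices over all spectral voices, then compute the eigenstructure
# coherence ratio lambda_max / trace(C).
C = sum(v @ v.T for v in voices)
eigvals = np.linalg.eigvalsh(C)        # ascending order
coherence = eigvals[-1] / np.trace(C)  # in (0, 1]; 1 = identical traces
print(round(float(coherence), 3))
```

Summing the covariance matrices before the eigen-decomposition is what lets an unlimited number of voices contribute to a single output volume, in contrast to RGB blending of at most three.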
In general, we wish to interpret the most broadband data possible. However, broadband data do not always provide the best insight for seismic attribute analysis. Obviously, spectral bands contaminated by noise should be eliminated. However, tuning gives rise to spectral bands with higher signal-to-noise ratios. To quantify geologic discontinuities at different scales, we combined spectral decomposition and coherence. Using spectral decomposition, we can extract the spectral amplitudes corresponding to geologic discontinuities of a given scale, as well as some subtle features that would otherwise be buried within the broadband seismic response. We applied this workflow to a 3D land data volume acquired over the Tarim Basin, Northwest China, where karst forms the principal reservoirs. We found that channels were better illuminated around 18 Hz, while subtle discontinuities were better delineated around 25 Hz.
A coherent story is a story that fits together well. This notion plays a central role in the coherence theory of justification and has been proposed as a criterion for scientific theory choice. Many attempts have been made to give a probabilistic account of this notion. A proper account of coherence must not start from some partial intuitions, but should pay attention to the role that this notion is supposed to play within a particular context. Coherence is a property of an information set that boosts our confidence that its content is true ceteris paribus when we receive information from independent and partially reliable sources. We construct a measure c_r that relies on hypothetical sources with certain idealized characteristics. A maximally coherent information set, i.e. a set with equivalent propositions, affords a maximal confidence boost. c_r is the ratio of the actual confidence boost over the confidence boost that we would have received, had the information been presented in the form of maximally coherent information, ceteris paribus. This measure is functionally dependent on the degree of reliability r of the sources. We use c_r to construct a coherence quasi-ordering over information sets S and S′: S is no less coherent than S′ just in case c_r(S) is not smaller than c_r(S′) for any value of the reliability parameter. We show that, on our account, the coherence of the story about the world gives us a reason to believe that the story is true and that the coherence of a scientific theory, construed as a set of models, is a proper criterion for theory choice.
In this paper I consider whether there is a measure of coherence that could be rightly claimed to generalize the notion of logical equivalence. I show that Fitelson’s (2003) proposal to that effect encounters some serious difficulties. Furthermore, there is reason to believe that no mutual-support measure could ever be suitable for the formalization of coherence as generalized logical equivalence. Instead, it appears that the only plausible candidate for such a measure is one of relative overlap. The measure I propose in this paper is quite similar to Olsson’s (2002) proposal but differs from it by not being susceptible to the type of counterexample that Bovens and Hartmann (2003) have devised against it.
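A relative-overlap measure of the kind discussed here, in the vicinity of Olsson's (2002) proposal, takes the ratio P(A ∧ B) / P(A ∨ B): it reaches its maximum of 1 exactly when the propositions are probabilistically equivalent, which is what makes overlap a natural candidate for coherence as generalized logical equivalence. A minimal two-proposition sketch with assumed toy numbers (not the paper's own refined measure):

```python
def overlap_coherence(joint):
    """Relative-overlap coherence for two propositions: P(A & B) / P(A or B).

    `joint` maps (a, b) truth-value pairs to probabilities.
    """
    p_and = joint[(True, True)]
    p_or = sum(p for world, p in joint.items() if any(world))
    return p_and / p_or

# Probabilistically equivalent propositions: maximal overlap.
equivalent = {(True, True): 0.3, (True, False): 0.0,
              (False, True): 0.0, (False, False): 0.7}
print(overlap_coherence(equivalent))  # 1.0

# Partially overlapping propositions score lower.
partial = {(True, True): 0.2, (True, False): 0.2,
           (False, True): 0.2, (False, False): 0.4}
print(overlap_coherence(partial))  # 0.2 / 0.6 ≈ 0.333
```

Unlike mutual-support measures, this ratio is insensitive to how probable the propositions are individually and tracks only how far they coincide, which is the intuition behind treating coherence as generalized equivalence.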
For more than three decades, research into the psycholinguistics of pronoun interpretation has argued that hearers use various interpretation ‘preferences’ or ‘strategies’ that are associated with specific linguistic properties of antecedent expressions. This focus is a departure from the type of approach outlined in Hobbs (1979), who argues that the mechanisms supporting pronoun interpretation are driven predominantly by semantics, world knowledge and inference, with particular attention to how these are used to establish the coherence of a discourse. On the basis of three new experimental studies, we evaluate a coherence-driven analysis with respect to four previously proposed interpretation biases—based on grammatical role parallelism, thematic roles, implicit causality, and subjecthood—and argue that the coherence-driven analysis can explain the underlying source of the biases and predict in what contexts evidence for each will surface. The results further suggest that pronoun interpretation is incrementally influenced by probabilistic expectations that hearers have regarding what coherence relations are likely to ensue, together with their expectations about what entities will be mentioned next, which, crucially, are conditioned on those coherence relations.
Probabilistic coherence is not an absolute requirement of rationality; nevertheless, it is an ideal of rationality with substantive normative import. An idealized rational agent who avoided making implicit logical errors in forming his preferences would be coherent. In response to the challenge, recently made by epistemologists such as Foley and Plantinga, that appeals to ideal rationality render probabilism either irrelevant or implausible, I argue that idealized requirements can be normatively relevant even when the ideals are unattainable, so long as they define a structure that links imperfect and perfect rationality in a way that enables us to make sense of the notion of better approximations to the ideal. I then analyze the notion of approximation to the ideal of coherence by developing a generalized theory of belief functions that allows for incoherence, and showing how such belief functions can be ordered with regard to greater or lesser coherence.
Let libertarianism be the view that humans are capable of making decisions that are simultaneously undetermined and appropriately non-random. It’s often argued that this view is incoherent because indeterminacy entails randomness (of some appropriate kind). I argue here that the truth is just the opposite: the right kind of indeterminacy in our decisions actually entails appropriate non-randomness, so that libertarianism is coherent, and the question of whether it’s true reduces to the wide-open empirical question of whether certain of our decisions (which I characterize here) are undetermined at the moment of choice. Moreover, the version of libertarianism developed here is entirely naturalistic and event-causal.
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the “Problem of Justified Inconsistent Beliefs.”
The impossibility results of Bovens and Hartmann (2003) and Olsson (2005) call into question the strength of the connection between coherence and truth. As part of the inquiry into this alleged link, I define a notion of degree of truth-conduciveness, relevant for measuring the usefulness of coherence measures as rules of thumb for assigning probabilities in situations of partial knowledge. I use the concept to compare the viability of some of the measures of coherence that have been suggested so far under different circumstances. It turns out that all of these, including the prior, are just about equally good in cases of very little knowledge. Nevertheless, there are differences in when they are applicable, and they also depart more from each other when more knowledge is added.