Conceivability is an important source of our beliefs about what is possible; inconceivability is an important source of our beliefs about what is impossible. What are the connections between the reliability of these sources? If one is reliable, does it follow that the other is also reliable? The central contention of this paper is that, suitably qualified, the reliability of inconceivability implies the reliability of conceivability, but the reliability of conceivability fails to imply the reliability of inconceivability.
Rumors, for better or worse, are an important element of public discourse. The present paper focuses on rumors as an epistemic phenomenon rather than as a social or political problem. In particular, it investigates the relation between the mode of transmission and the reliability, if any, of rumors as a source of knowledge. It does so by comparing rumor with two forms of epistemic dependence that have recently received attention in the philosophical literature: our dependence on the testimony of others, and our dependence on what has been called the ‘coverage-reliability’ of our social environment (Goldberg 2010). According to the latter, an environment is ‘coverage-reliable’ if, across a wide range of beliefs and given certain conditions, it supports the following conditional: If ~p were true I would have heard about it by now. However, in information-deprived social environments with little coverage-reliability, rumors may transmit information that could not otherwise be had. This suggests that a trade-off exists between levels of trust in the coverage-reliability of official sources and (warranted) trust in rumor as a source of information.
Standard characterizations of virtue epistemology divide the field into two camps: virtue reliabilism and virtue responsibilism. Virtue reliabilists think of intellectual virtues as reliable cognitive faculties or abilities, while virtue responsibilists conceive of them as good intellectual character traits. I argue that responsibilist character virtues sometimes satisfy the conditions of a reliabilist conception of intellectual virtue, and that consequently virtue reliabilists, and reliabilists in general, must pay closer attention to matters of intellectual character. This leads to several new questions and challenges for any reliabilist epistemology.
A tempting argument for human rationality goes like this: it is more conducive to survival to have true beliefs than false beliefs, so it is more conducive to survival to use reliable belief-forming strategies than unreliable ones. But reliable strategies are rational strategies, so there is a selective advantage to using rational strategies. Since we have evolved, we must use rational strategies. In this paper I argue that some criticisms of this argument offered by Stephen Stich fail because they rely on unsubstantiated interpretations of some results from experimental psychology. I raise two objections to the argument: (i) even if it is advantageous to use rational strategies, it does not follow that we actually use them; and (ii) natural selection need not favor only or even primarily reliable belief-forming strategies.
In this paper I consider the reliability condition in Alvin Plantinga's proper functionalist account of epistemic warrant. I begin by reviewing in some detail the features of the reliability condition as Plantinga has articulated it. From there, I consider what is needed to ground or secure the sort of reliability which Plantinga has in mind, and argue that what is needed is a significant causal condition which has generally been overlooked. Then, after identifying eight versions of the relevant sort of reliability, I examine each alternative as to whether its requirement, along with Plantinga's other proposed conditions, would give us a satisfactory account of epistemic warrant. I conclude that there is little to no hope of formulating a reliability condition that would yield a satisfactory analysis of the sort Plantinga desires.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when---and only when---it increases the reliability of perception.
A study of moral intuitions, performed by Joshua Greene and a group of researchers at Princeton University, has recently received a lot of attention. Greene and his collaborators designed a set of experiments in which subjects were undergoing brain scanning as they were asked to respond to various practical dilemmas. They found that contemplation of some of these cases (cases where the subjects had to imagine that they must use some direct form of violence) elicited greater activity in certain areas of the brain associated with emotions compared with the other cases. It has been argued (e.g., by Peter Singer) that these results undermine the reliability of our moral intuitions, and therefore provide an objection to methods of moral reasoning that presuppose that they carry an evidential weight (such as the idea of reflective equilibrium). I distinguish between two ways in which Greene's findings lend support to a sceptical attitude towards intuitions. I argue that, given the first version of the challenge, the method of reflective equilibrium can easily accommodate the findings. As for the second version of the challenge, I argue that it does not so much pose a threat specifically to the method of reflective equilibrium but to the idea that moral claims can be justified through rational argumentation in general.
We think of logic as objective. We also think that we are reliable about logic. These views jointly generate a puzzle: How is it that we are reliable about logic? How is it that our logical beliefs match an objective domain of logical fact? This is an instance of a more general challenge to explain our reliability about a priori domains. In this paper, I argue that the nature of this challenge has not been properly understood. I explicate the challenge both in general and for the particular case of logic. I also argue that two seemingly attractive responses – appealing to a faculty of rational insight or to the nature of concept possession – are incapable of answering the challenge.
This paper explores what constitutes reliability in persons, particularly intellectual reliability. It considers global reliability, the overall reliability of persons, encompassing both the theoretical and practical realms; sectorial reliability, that of a person in a subject-matter (or behavioral) domain; and focal reliability, that of a particular element, such as a belief. The paper compares reliability with predictability of the kind most akin to it and distinguishes reliability as an intellectual virtue from reliability as an intellectual power. The paper also connects reliability with insight, reasoning, knowledge, and trust. It is argued that insofar as reliability is an intellectual virtue, it must meet both external standards of correctitude and internal standards of justification.
Some twenty years ago, Bogen and Woodward challenged one of the fundamental assumptions of the received view, namely the theory-observation dichotomy and argued for the introduction of the further category of scientific phenomena. The latter, Bogen and Woodward stressed, are usually unobservable and inferred from what is indeed observable, namely scientific data. Crucially, Bogen and Woodward claimed that theories predict and explain phenomena, but not data. But then, of course, the thesis of theory-ladenness, which has it that our observations are influenced by the theories we hold, cannot apply. On the basis of two case studies, I want to show that this consequence of Bogen and Woodward’s account is rather unrealistic. More importantly, I also object against Bogen and Woodward’s view that the reliability of data, which constitutes the precondition for data-to-phenomena inferences, can be secured without the theory one seeks to test. The case studies I revisit have figured heavily in the publications of Bogen and Woodward and others: the discovery of weak neutral currents and the discovery of the zebra pattern of magnetic anomalies. I show that, in the latter case, data can be ignored if they appear to be irrelevant from a particular theoretical perspective (TLI) and that, in the former case, the tested theory can be critical for the assessment of the reliability of the data (TLA). I argue that both TLI and TLA are much stronger senses of theory-ladenness than the classical thesis and that neither TLI nor TLA can be accommodated within Bogen and Woodward’s account.
There is surprising evidence that introspection of our phenomenal states varies greatly between individuals and within the same individual over time. This puts pressure on the notion that introspection gives reliable access to our own phenomenology: introspective unreliability would explain the variability, while assuming that the underlying phenomenology is stable. I appeal to a body of neurocomputational, Bayesian theory and neuroimaging findings to provide an alternative explanation of the evidence: though some limited testing conditions can cause introspection to be unreliable, mostly it is our phenomenology itself that is variable. With this account of phenomenal variability, the occurrence of the surprising evidence can be explained while generally retaining introspective reliability.
This paper concerns various competing views on the nature of perceptual justification. Thought experiments that motivate the competing views are discussed. Once reliabilism is rejected and some form of internalism is instead embraced, the following issue arises: must an internalist nevertheless require that perceptual justification involve the possession of evidence for the reliability of our perceptual processes? Matthias Steup answers in the affirmative, espousing what he calls internalist reliabilism. Some problems are raised for this form of internalism.
We are reliable about logic in the sense that we by and large believe logical truths and disbelieve logical falsehoods. Given that logic is an objective subject matter, it is difficult to provide a satisfying explanation of our reliability. This generates a significant epistemological challenge, analogous to the well-known Benacerraf-Field problem for mathematical Platonism. One initially plausible way to answer the challenge is to appeal to evolution by natural selection (or to a related mechanism). The central idea is that the capacity for correct deductive reasoning conferred a heritable survival advantage upon our ancestors. However, there are several arguments that purport to show that evolutionary accounts cannot even in principle explain how it is that we are reliable about logic. In this paper, I address these arguments. I show that there is no general reason to think that evolutionary accounts are incapable of explaining our reliability about logic.
Reliabilism has come under recent attack for its alleged inability to account for the value we typically ascribe to knowledge. It is charged that a reliably-produced true belief has no more value than does the true belief alone. I reply to these charges on behalf of reliabilism; not because I think reliabilism is the correct theory of knowledge, but rather because being reliably-produced does add value of a sort to true beliefs. The added value stems from the fact that a reliably-held belief is non-accidental in a particular way. While it is widely acknowledged that accidentally true beliefs cannot count as knowledge, it is rarely questioned why this should be so. An answer to this question emerges from the discussion of the value of reliability; an answer that holds interesting implications for the value and nature of knowledge.
Some time ago, F. P. Ramsey (1960) suggested that knowledge is true belief obtained by a reliable process. This suggestion has only recently begun to attract serious attention. In 'Discrimination and Perceptual Knowledge', Alvin Goldman (1976) argues that a person has knowledge only if that person's belief has been formed as a result of a reliable cognitive mechanism. In Belief, Truth, and Knowledge, David Armstrong (1973) argues that one has knowledge only if one's belief is a completely reliable sign of the truth of the proposition believed. On both of these theories, the reliability of one's belief is a necessary condition of that belief's being an instance of knowledge. These reliability theories have another interesting feature in common, namely, that neither of them explicitly requires or includes the traditional justification requirement for knowledge. Reliability has taken over the role of justification. This naturally leads to the question whether reliability and justification are related in some philosophically interesting fashion. In this paper I shall investigate this question. The result will be a positive proposal to the effect that justified belief is reliable belief. This result, in turn, explains why reliability can take over the role of justification in an account of knowledge. Moreover, the identification of justification with reliability constitutes a step toward the naturalization of normative epistemological concepts.
Critics of reliability theories of epistemic justification often claim that the 'generality problem' is an insurmountable difficulty for such theories. The generality problem is the problem of specifying the level of generality at which a belief-forming process is to be described for the purpose of assessing its reliability. This problem is not as intractable as it seems. There are illuminating solutions to analogous problems in the ethics literature. Reliabilists ought to attend to utilitarian approaches to choices between infinite utility streams; they also ought to attend to welfarist approaches to social choice situations that do not demand full aggregation of individual welfares. These analogies suggest that the traditional 'single number' approach to reliability is misguided. I argue that a new approach – the 'vector reliability' approach – is preferable. Vector reliability theories associate target beliefs with reliability vectors – that is, structured collections of reliability numbers – and construct criteria of epistemic justification that appeal to these vectors. The bulk of the theoretical labor involved in a reliability account of epistemic justification is thus transferred from picking a unique reliability number to constructing a plausible criterion of epistemic justification.
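A minimal sketch of what a 'vector reliability' criterion could look like in code. The abstract does not specify the criterion, so the dominance-and-threshold rule below is an illustrative assumption, as are the two evaluation dimensions:

```python
# Illustrative sketch only: the paper's actual vector-reliability criterion
# is not given in the abstract, so the dominance rule here is an assumption.
from typing import Tuple

ReliabilityVector = Tuple[float, ...]  # one component per evaluation dimension

def dominates(v: ReliabilityVector, w: ReliabilityVector) -> bool:
    """v weakly dominates w: at least as reliable on every dimension."""
    return len(v) == len(w) and all(a >= b for a, b in zip(v, w))

def justified(v: ReliabilityVector, thresholds: ReliabilityVector) -> bool:
    """Toy criterion: justified iff the vector clears a threshold vector,
    rather than a single aggregated reliability number."""
    return dominates(v, thresholds)

# A belief formed by vision in good light vs. the same process in dim light.
print(justified((0.95, 0.80), (0.90, 0.75)))  # True
print(justified((0.95, 0.60), (0.90, 0.75)))  # False
```

The point of the vector representation is that no single number has to aggregate performance across the dimensions; the criterion itself does the comparative work.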
The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information gathering processes (IGPs) in terms of conditional independences, construct a minimal sufficient condition for a coherence ranking of information sets and assess whether the confidence boost that results from receiving information through independent IGPs is indeed a positive function of the coherence of the information set. There are multiple interpretations of what constitute IGPs of dubious quality. Do we know our IGPs to be no better than randomization processes? Or, do we know them to be better than randomization processes but not quite fully reliable, and if so, what is the nature of this lack of full reliability? Or, do we not know whether they are fully reliable or not? Within the latter interpretation, does learning something about the quality of some IGPs teach us anything about the quality of the other IGPs? The Bayesian-network models demonstrate that the success of the coherentist canon is contingent on what interpretation one endorses of the claim that our IGPs are of dubious quality.
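For a sense of how such Bayesian-network calculations run, here is a minimal posterior computation under the simplest witness model, which is an assumption for illustration and not the paper's model: each independent IGP reports correctly with probability r, and reports are conditionally independent given the hypothesis.

```python
# A minimal Bayesian-network-style calculation, assuming the simplest
# witness model: each independent report is correct with probability r.
def posterior(prior: float, r: float, n_reports: int) -> float:
    """P(H | n independent positive reports), reports conditionally
    independent given H (the 'independent IGPs' assumption)."""
    like_h = r ** n_reports          # P(reports | H)
    like_not = (1 - r) ** n_reports  # P(reports | not-H)
    return prior * like_h / (prior * like_h + (1 - prior) * like_not)

# Even mediocre but better-than-chance sources agree into a strong boost:
print(round(posterior(0.1, 0.6, 1), 3))  # ~0.143
print(round(posterior(0.1, 0.6, 5), 3))  # ~0.458
print(round(posterior(0.1, 0.5, 5), 3))  # 0.1 (pure randomizers add nothing)
```

The last line illustrates the 'no better than randomization' interpretation: when r = 0.5, agreement among the sources does nothing for the posterior, no matter how coherent the reports are.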
In computer simulations of physical systems, the construction of models is guided, but not determined, by theory. At the same time, simulation models are often constructed precisely because data are sparse. They are meant to replace experiments and observations as sources of data about the world; hence they cannot be evaluated simply by being compared to the world. So what can be the source of credibility for simulation models? I argue that the credibility of a simulation model comes not only from the credentials supplied to it by the governing theory, but also from the antecedently established credentials of the model building techniques employed by the simulationists. In other words, there are certain sorts of model building techniques which are taken, in and of themselves, to be reliable. Some of these model building techniques, moreover, incorporate what are sometimes called “falsifications.” These are contrary-to-fact principles that are included in a simulation model and whose inclusion is taken to increase the reliability of the results. The example of a falsification that I consider, called artificial viscosity, is in widespread use in computational fluid dynamics. Artificial viscosity, I argue, is a principle that is successfully and reliably used across a wide domain of fluid dynamical applications, but it does not offer even an approximately “realistic” or true account of fluids. Artificial viscosity, therefore, is a counter-example to the principle that success implies truth – a principle at the foundation of scientific realism. It is an example of reliability without truth.
Many solutions of the Goodman paradox have been proposed but so far no agreement has been reached about which is the correct solution. However, I will not contribute here to the discussion with a new solution. Rather, I will argue that a solution has been in front of us for more than two hundred years because a careful reading of Hume’s account of inductive inferences shows that, contrary to Goodman’s opinion, it embodies a correct solution of the paradox. Moreover, the account even includes a correct answer to Mill’s question of why in some cases a single instance is sufficient for a complete induction, since Hume gives a well-supported explanation of this reliability phenomenon. The discussion also suggests that Bayesian theory by itself cannot explain this phenomenon. Finally, we will see that Hume’s explanation of the reliability phenomenon is surprisingly similar to the explanation given lately by a number of naturalistic philosophers in their discussion of the Goodman paradox.
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) among testimonies implies a higher probability that the witnesses are reliable. Recently, it has been proved that several coherence measures proposed in the literature are reliability conducive in scenarios of equivalent testimonies (Olsson and Schubert 2007; Schubert, to appear). My aim is to investigate which coherence measures turn out to be reliability conducive in the more general scenario where the testimonies do not have to be equivalent. It is shown that four measures are reliability conducive in the present scenario, all of which are ordinally equivalent to the Shogenji measure. I take that to be an argument for the Shogenji measure being a fruitful explication of coherence.
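The Shogenji measure referred to here is standardly defined as the joint probability of the statements divided by the product of their marginal probabilities (values above 1 indicate mutual support). A minimal sketch, with a made-up joint distribution:

```python
# The Shogenji coherence measure: joint probability over the product of
# the marginals (> 1 means the statements support one another). The toy
# joint distribution below is illustrative, not from the paper.
def shogenji(joint, *props):
    """joint: dict mapping world -> probability; props: predicates on worlds."""
    def p(preds):
        return sum(pr for w, pr in joint.items() if all(q(w) for q in preds))
    denom = 1.0
    for q in props:
        denom *= p([q])
    return p(props) / denom

# Worlds are (raining, streets_wet) pairs with an invented distribution.
joint = {(True, True): 0.35, (True, False): 0.05,
         (False, True): 0.10, (False, False): 0.50}
rain = lambda w: w[0]
wet = lambda w: w[1]
print(round(shogenji(joint, rain, wet), 2))  # 1.94 > 1: the reports cohere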
In this contribution, we identify and clarify some distinctions we believe are useful in establishing the reliability of information on the Internet. We begin by examining some of the salient features of information that go into the determination of reliability. In so doing, we argue that we need to distinguish content and pedigree criteria of reliability and that we need to separate issues of the reliability of information from issues of the accessibility and the usability of information. We then turn to an analysis of some common failures to recognize reliability or unreliability.
It is a widely shared view among philosophers of science that the theory-dependence (or theory-ladenness) of observations is worrying, because it can bias empirical tests in favour of the tested theories. These doubts are taken to be dispelled if an observation is influenced by a theory independent of the tested theory and thus circularity is avoided, while (partially) circular tests are taken to require special attention. Contrary to this consensus, it is argued that the epistemic value of theory-dependent tests has nothing to do with the circularity or non-circularity of the test, but is instead based on the minimal empiricality and reliability of observations. Since theory-dependence does not in general prevent observations fulfilling these requirements, it should not be regarded as a phenomenon that is basically detrimental, but as neutral with respect to successful scientific knowledge gathering.
Error and Inference discusses Deborah Mayo’s theory that connects the reliability of science to scientific evidence. She sees it as an essential supplement to the negative principles of critical rationalism. She and Aris Spanos, her co-editor, declare that the discussions in the book amount to tremendous progress. Yet most contributors to the book misconstrue the Socratic character of critical rationalism because they ignore a principal tenet: criticism in and of itself comprises progress, and empirical refutation comprises learning from experience. Critical rationalism should be recommended in the critical spirit, not as dogma.
The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the primary criterion for ‘best', the mathematical approximation perspective undermines the reliability of inference objective by giving rise to selection rules which pay insufficient attention to ‘accounting for the regularities in the data'. A more appropriate framework is offered by the error-statistical approach, where (i) statistical adequacy provides the criterion for assessing when a curve captures the regularities in the data adequately, and (ii) the relevant error probabilities can be used to assess the reliability of inductive inference. Broadly speaking, the fittest curve (statistically adequate) is not determined by the smallness of its residuals, tempered by simplicity or other pragmatic criteria, but by the nonsystematic (e.g. white noise) nature of its residuals. The advocated error-statistical arguments are illustrated by comparing the Kepler and Ptolemaic models on empirical grounds.
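As a rough illustration of the 'nonsystematic residuals' criterion, one crude diagnostic (my example, not the paper's error-statistical apparatus) is the lag-1 autocorrelation of the residual series: white-noise residuals should show none, while a misspecified curve leaves a trend behind.

```python
# Crude check for leftover structure in residuals: lag-1 autocorrelation.
# Data below are invented for illustration.
import random

def lag1_autocorr(res):
    mean = sum(res) / len(res)
    dev = [r - mean for r in res]
    return sum(a * b for a, b in zip(dev, dev[1:])) / sum(d * d for d in dev)

random.seed(0)
white = [random.gauss(0, 0.3) for _ in range(50)]            # no leftover structure
trended = [0.9 - 0.04 * i + w for i, w in enumerate(white)]  # systematic trend remains
print(round(lag1_autocorr(white), 2))    # hovers near 0: consistent with white noise
print(round(lag1_autocorr(trended), 2))  # strongly positive: curve misspecified
```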
A measure of coherence is said to be truth conducive if and only if a higher degree of coherence (as measured) results in a higher likelihood of truth. Recent impossibility results strongly indicate that there are no (non-trivial) probabilistic coherence measures that are truth conducive. Indeed, this holds even if truth conduciveness is understood in a weak ceteris paribus sense (Bovens & Hartmann, 2003, Bayesian Epistemology, Oxford: Oxford University Press; Olsson, 2005, Against Coherence: Truth, Probability, and Justification, Oxford: Oxford University Press). This raises the problem of how coherence could nonetheless be an epistemically important property. Our proposal is that coherence may be linked in a certain way to reliability. We define a measure of coherence to be reliability conducive if and only if a higher degree of coherence (as measured) results in a higher probability that the information sources are reliable. Restricting ourselves to the most basic case, we investigate which coherence measures in the literature are reliability conducive. It turns out that, while a number of measures fail to be reliability conducive, except possibly in a trivial and uninteresting sense, Shogenji’s measure and several measures generated by Douven and Meijs’s recipe are notable exceptions to this rule.
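To make 'reliability conducive' concrete, here is a toy posterior-reliability calculation in the spirit of the witness models used in this literature. The specific model is an assumption for illustration: a reliable witness reports the truth, and an unreliable one says "yes" with probability a regardless of the truth.

```python
# Posterior that a witness is reliable after unanimous positive reports,
# under an assumed basic witness model (not the paper's exact model).
from itertools import product

def p_reliable_given_agreement(h, rho, a, n=2):
    """h: prior of the hypothesis; rho: prior that a witness is reliable;
    a: chance an unreliable witness reports 'yes'; n: number of witnesses."""
    num = den = 0.0
    for truth, *rel in product([True, False], repeat=n + 1):
        prior = h if truth else 1 - h
        for r in rel:
            prior *= rho if r else 1 - rho
        lik = 1.0  # probability that every witness gives a positive report
        for r in rel:
            lik *= (1.0 if truth else 0.0) if r else a
        if rel[0]:
            num += prior * lik
        den += prior * lik
    return num / den

print(round(p_reliable_given_agreement(h=0.3, rho=0.5, a=0.3), 3))       # ~0.684
print(round(p_reliable_given_agreement(h=0.3, rho=0.5, a=0.3, n=4), 3))  # higher still
```

Agreement among the reports raises the posterior that the sources are reliable above the prior of 0.5, which is the effect the definition of reliability conduciveness tracks.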
Scientific measurements are made objective through the use of reliable instruments. Instruments can have this function because they can, as material objects, be investigated independently of the specific measurements at hand. However, their materiality appears to be crucial for the assessment of their reliability. The usual strategies for investigating an instrument’s reliability depend on and assume possibilities of control, and control is usually specified in terms of the materiality of the instrument and its environment. The aim of this paper is to investigate the problem of reliability for non-material instruments, such as the instruments applied in the social sciences. Any lack of reliability in the instrument prevents the measurements from ever becoming objective.
Belief revision theory concerns methods for reformulating an agent's epistemic state when the agent's beliefs are refuted by new information. The usual guiding principle in the design of such methods is to preserve as much of the agent's epistemic state as possible when the state is revised. Learning theoretic research focuses, instead, on a learning method's reliability or ability to converge to true, informative beliefs over a wide range of possible environments. This paper bridges the two perspectives by assessing the reliability of several proposed belief revision operators. Stringent conceptions of minimal change are shown to occasion a limitation called inductive amnesia: they can predict the future only if they cannot remember the past. Avoidance of inductive amnesia can therefore function as a plausible and hitherto unrecognized constraint on the design of belief revision operators.
The likelihood principle of Bayesian statistics implies that information about the stopping rule used to collect evidence does not enter into the statistical analysis. This consequence confers an apparent advantage on Bayesian statistics over frequentist statistics. In the present paper, I argue that information about the stopping rule is nevertheless of value for an assessment of the reliability of the experiment, which is a pre-experimental measure of how well a contemplated procedure is expected to discriminate between hypotheses. I show that, when reliability assessments enter into inquiries, some stopping rules prescribing optional stopping are unacceptable to both Bayesians and frequentists.
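A small simulation illustrates why stopping rules matter pre-experimentally even though they drop out of the likelihood: testing after every observation and stopping at the first "significant" result inflates the false-rejection rate well beyond the nominal level. All parameters below are illustrative assumptions.

```python
# Optional stopping on a fair coin: z-test at each interim look,
# nominal alpha = 0.05, at most 100 looks (assumed parameters).
import random, math

def rejects_under_optional_stopping(max_n=100, z_crit=1.96):
    heads = 0
    for n in range(1, max_n + 1):
        heads += random.random() < 0.5
        if n >= 10:  # start testing after a minimal sample
            z = (heads - n / 2) / math.sqrt(n / 4)
            if abs(z) > z_crit:
                return True  # "significant": stop and reject the true null
    return False

random.seed(0)
trials = 2000
rate = sum(rejects_under_optional_stopping() for _ in range(trials)) / trials
print(f"false-rejection rate with optional stopping: {rate:.2%}")  # well above 5%
```

The final data and their likelihoods are untouched by the stopping rule, yet the procedure's pre-experimental error characteristics clearly depend on it, which is the tension the paper exploits.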
We develop a probabilistic criterion for belief expansion that is sensitive to the degree of contextual fit of the new information to our belief set as well as to the reliability of our information source. We contrast our approach with the success postulate in AGM-style belief revision and show how the idealizations in our approach can be relaxed by invoking Bayesian-Network models.
The partner choice approach to understanding the evolution of cooperation builds on approaches that focus on partner control by considering processes that occur prior to pair or group formation. Proponents of the partner choice approach rightly note that competition to be chosen as a partner can help solve the puzzle of cooperation. I aim to build on the partner choice approach by considering the role of signalling in partner choice. Partnership formation often requires reliable information. Signalling is thus important in the context of partner choice. However, the issue of signal reliability has been understudied in the partner choice literature. The issue deserves attention because – despite what proponents of the partner choice approach sometimes claim – that approach does face a cheater problem, which we might call the problem of false advertising in biological markets. Both theoretical and empirical work is needed to address this problem. I will draw on signalling theory to provide a theoretical framework within which to organise the scattered discussions of the false advertising problem extant in the partner choice literature. I will end by discussing some empirical work on cooperation, partner choice, and punishment among humans.
Information generally comes from less than fully reliable sources. Rationality, it seems, requires that one take source reliability into account when reasoning on the basis of such information. Recently, Bovens and Hartmann (2003) proposed an account of the conjunction fallacy based on this idea. They show that, when statements in conjunction fallacy scenarios are perceived as coming from such sources, probability theory prescribes that the “fallacy” be committed in certain situations. Here, the empirical validity of their model was assessed. The model predicts that statements added to standard conjunction problems will change the incidence of the fallacy. It also predicts that statements from reliable sources should yield an increase in fallacy rates (relative to unreliable sources). Neither the former (Experiment 1) nor the latter prediction (Experiment 3) was confirmed, although Experiment 2 showed that people can derive source reliability estimates from the likelihood of statements in a manner consistent with the tested model. In line with the experimental results, model fits and sensitivity analyses also provided very little evidence in favor of the model. This suggests that Bovens and Hartmann’s present model fails to explain fully people’s judgements in standard conjunction fallacy tasks.
The concept of transliminality ("a hypothesized tendency for psychological material to cross thresholds into or out of consciousness") was anticipated by William James (1902/1982), but it was only recently given an empirical definition by Thalbourne in terms of a 29-item Transliminality Scale. This article presents the 17-item Revised Transliminality Scale (or RTS) that corrects age and gender biases, is unidimensional by a Rasch criterion, and has a reliability of .82. The scale defines a probabilistic hierarchy of items that address magical ideation, mystical experience, absorption, hyperaesthesia, manic experience, dream interpretation, and fantasy proneness. These findings validate the suggestions by James and Thalbourne that some mental phenomena share a common underlying dimension with selected sensory experiences (such as being overwhelmed by smells, bright lights, sights, and sounds). Low scores on transliminality remain correlated with "tough mindedness" on the Cattell 16PF test, as well as "self-control" and "rule consciousness," whereas high scores are associated with "abstractedness" and an "openness to change" on that test. An independent validation study confirmed the predictions implied by our definition of transliminality. Implications for test construction are discussed.
In this paper I argue three things: (1) that the interactionist view underlying Benacerraf's (1973) challenge to mathematical beliefs renders inexplicable the reliability of most of our beliefs in physics; (2) that examples from mathematical physics suggest that we should view reliability differently; and (3) that abstract mathematical considerations are indispensable to explanations of the reliability of our beliefs.
The documented low levels of reliability of the peer review process present a serious challenge to editors who must often base their publication decisions on conflicting referee recommendations. The purpose of this article is to discuss this process and examine ways to produce a more reliable and useful peer review system.
Recent epistemology divides theories of knowledge according to their diagnoses of cases of failed knowledge, Gettier cases. Two rival camps have emerged: naturalism and justificationism. Naturalism attributes the failure of knowledge in these cases to the cognizer's failure to stand in a strong natural position vis-à-vis the proposition believed. Justificationism traces the failure to the cognizer's failure to be strongly justified in his belief. My aim is to reconcile these camps by offering a version of naturalism, a reliability theory of knowledge, that conforms to the central justificationist tenets. I argue that proposed reliability theories of knowledge, reliable indication theories, offer no prospect of a reconciliation because they misdiagnose failed knowledge in such a way as to violate a basic justificationist tenet. Proposed versions of justificationism, it turns out, fare no better with this tenet. I offer an alternative reliability theory of knowledge, a reliable process theory, that conforms to the justificationist tenet.
Recently, certain philosophers of mathematics (Fallis; Womack and Farach [1997]) have argued that there are no epistemic considerations that should stop mathematicians from using probabilistic methods to establish that mathematical propositions are true. However, mathematicians clearly should not use methods that are unreliable. Unfortunately, because randomized algorithms are not really random in practice, there is reason to doubt their reliability. In this paper, I analyze the prospects for establishing that randomized algorithms are reliable. I end by arguing that it would be inconsistent for mathematicians to suspend judgement on the truth of mathematical propositions on the basis of worries about the reliability of randomized algorithms.
The coefficients of internal consistency and retest reliability have rarely been investigated within the methodology of dream content analysis. Analyzing a dream series of elderly, healthy persons obtained from weekly telephone interviews, the internal consistency of a series of 20 dreams and retests after 4 or 22 weeks, respectively, were computed. The findings indicate that dream recall and dream length are quite stable, but dream characteristics such as bizarreness and emotional tone are subject to large intraindividual fluctuations. In order to obtain reliable measures for these variables which will be important for correlational studies, including waking-life trait measures, one has to obtain as many dreams as possible (about 20) in a very short time period. Further research is needed to extend the present findings to diary dreams and laboratory dreams.
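Retest reliability of the kind at issue here is typically quantified as the correlation between the same measure taken on two occasions; a minimal sketch with invented scores:

```python
# Test-retest reliability as a Pearson correlation between two occasions
# (made-up dream-recall counts, purely for illustration).
from statistics import correlation  # Python 3.10+

week_0 = [4, 7, 5, 9, 6, 3, 8, 5]  # scores for eight persons, occasion 1
week_4 = [5, 6, 5, 8, 7, 4, 8, 4]  # the same persons four weeks later
print(round(correlation(week_0, week_4), 2))  # high r = stable, trait-like measure
```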
Experimental data are often acclaimed on the grounds that they can be consistently generated. They are, it is said, reproducible. In this paper I describe how this feature of experimental data (their pragmatic reliability) leads to their epistemic worth (their epistemic reliability). An important part of my description is the supposition that experimental procedures are to a certain extent fixed and stable. Various illustrations from the actual practice of science are introduced, the most important coming at the end of the paper with a discussion of Ray Davis' 1967 solar-neutrino detection experiment (as it is portrayed in Pinch, 1980).
The paper provisionally accepts the goal of Goldman's primary epistemics, which is to seek reliability values for basic cognitive processes, and questions whether such values may plausibly be expected. The reliability of such processes as perception and memory is dependent on other aspects of cognitive structure, and especially on one's "conceptual scheme," the evaluation of which goes beyond primary epistemics (and its dependence on cognitive science) to social epistemics, or indeed to traditional epistemology and philosophy of science. Two general arguments against the plausibility of determining reliability values for the basic cognitive architecture of humans are proposed, one applying Fodor's distinction between input and central systems, and the other invoking a point by Geertz about culture and evolution. Social epistemics is only briefly evaluated, as it is nascent.
Despite growing interest in emotion regulation, the degree to which psychophysiological measures of emotion regulation are stable over time remains unknown. We examined four-week test-retest reliability of corrugator electromyographic and eyeblink startle measures of negative emotion and its regulation. Both measures demonstrated similar sensitivity to the emotion manipulation, but only individual differences in corrugator modulation and regulation showed adequate reliability. Startle demonstrated diminished sensitivity to the regulation instructions across assessments and poor reliability. This suggests that corrugator represents a trait-like measure of voluntary emotion regulation, whereas startle should be used with caution for assessing individual differences. The data also suggest that corrugator and startle might index partially dissociable constructs and underscore the need to collect multiple measures of emotion.
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) of a set of testimonies implies a higher probability that the witnesses are reliable. Recently, it has been proved that the Shogenji measure of coherence is reliability conducive in restricted scenarios (e.g., Olsson and Schubert, Synthese, 157:297–308, 2007). In this article, I investigate whether the Shogenji measure, or any other coherence measure, is reliability conducive in general. An impossibility theorem is proved to the effect that this is not the case. I conclude that coherence is not reliability conducive.
Information about the environment is captured in human biological systems on a variety of interacting levels – in distributions of genes, linguistic particulars, concepts, methods, theories, preferences, and overt behaviors. I investigate some of the basic principles which govern such a hierarchy by constructing a comparatively simple three-level selection model of bee foraging preferences and behaviors. The information-theoretic notion of "mutual information" is employed as a measure of efficiency in tracking a changing environment, and its appropriateness in epistemological applications is discussed at some length. In particular, information accumulated in mid-level preference distributions exhibits suggestive properties for the purposes of naturalistic epistemology. It also appears that the novelty of scientific objects and representations and the rapidity of scientific change relative to genetic change need present no obstacle to the use of such models in explanations of scientific progress and the reliability of scientific judgement.
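The "mutual information" measure mentioned here has a standard information-theoretic definition. A minimal computation over a made-up environment-behavior joint distribution (the paper's three-level bee model itself is not reproduced here):

```python
# Mutual information between environment state and forager behavior,
# computed from an invented joint distribution for illustration.
import math

def mutual_information(joint):
    """joint[(e, b)] = P(env=e, behavior=b); returns I(E;B) in bits."""
    p_e, p_b = {}, {}
    for (e, b), p in joint.items():
        p_e[e] = p_e.get(e, 0) + p
        p_b[b] = p_b.get(b, 0) + p
    return sum(p * math.log2(p / (p_e[e] * p_b[b]))
               for (e, b), p in joint.items() if p > 0)

# Behavior tracks the environment imperfectly:
joint = {("bloom", "forage"): 0.4, ("bloom", "stay"): 0.1,
         ("bare", "forage"): 0.1, ("bare", "stay"): 0.4}
print(round(mutual_information(joint), 3))  # > 0: behavior carries information
```

Perfect tracking would drive the measure to 1 bit for this binary environment; statistically independent behavior would drive it to 0, which is what makes it a natural efficiency measure for tracking.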
This article investigates whether investors consider the reliability of companies’ sustainability information when determining the companies’ market value. Specifically, we examine market reactions (in terms of abnormal returns) to events that increase the reliability of companies’ sustainability information but do not provide markets with additional sustainability information. Controlling for competing effects, we regard companies’ additions to an internationally important sustainability index as such events and consider possible determinants for market reactions. Our results suggest that first, investors take into account the reliability of sustainability information when determining the market value of a company and second, the benefits of increased reliability of sustainability information vary cross-sectionally. More specifically, companies that carry higher risks for investors (e.g., higher systematic investment risk, higher financial leverage, and higher levels of opportunistic management behavior) react more strongly to an increase in the reliability of sustainability information. Finally, we show that the benefits of an increase in the reliability of sustainability information are higher in times of economic uncertainty (e.g., during economic downturns and generally high stock price volatilities).
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) results in a higher likelihood that the witnesses are reliable. Recently, it has been proved that several coherence measures proposed in the literature are reliability conducive in a restricted scenario (Olsson and Schubert 2007, Synthese 157:297–308). My aim is to investigate which coherence measures turn out to be reliability conducive in the more general scenario where it is any finite number of witnesses that give equivalent reports. It is shown that only the so-called Shogenji measure is reliability conducive in this scenario. I take that to be an argument for the Shogenji measure being a fruitful explication of coherence.
This paper argues that the concept of reliability provides a useful framework for analyzing defects in organizational design and for prescribing changes that will facilitate ethical decision making. Reliability becomes an ethical concern when the individual or organizational interest diverges from the collective interest. Redundancy and requisite variety provide two design tools which can enable organizations to act reliably in the collective interest. The paper then discusses potential disadvantages to the use of a reliability framework as well as possible problems of implementation. It concludes by examining avenues for future research.
An analysis is presented of published methods that have been used by experimenters to justify the reliability of the theory of invasion of microorganisms into cultured cells. The results show that, to demonstrate this invasion, many experimenters used two or more methods that were based on independent technical and theoretical principles, and by doing so improved the reliability of the theory. Subsequently I compare this strategy of 'multiple derivability' with other strategies discussed in the literature in relation to the mesosome, a bacterial organelle that had been detected with the electron microscope, but which appeared later to be an artifact. I propose that different strategies have been applied to this problem, and multiple derivability may have been the decisive one. Finally I discuss the idea that multiple derivability may help to anchor theories in a larger network of theories.
Instead of arguing about whether moral judgments are based on emotion or reason, moral psychologists should investigate the reliability of moral judgments by checking rates of framing effects in different kinds of moral judgments under different conditions by different people.
Are we entitled or justified in taking the word of others at face value? An affirmative answer to this question is associated with the views of Thomas Reid. Recently, C. A. J. Coady has defended a Reidian view in his impressive and influential book, Testimony: A Philosophical Study. His central and most original argument for his positions involves reflection upon the practice of giving and accepting reports, of making assertions and relying on the word of others. His argument purports to show that testimony is, by its very nature, a "reliable form of evidence about the way the world is." The argument moves from what we do to why we are justified in doing it. Although I am sympathetic with both the Reidian view and Coady's attempt to connect why we rely on others with why we are entitled to rely on others, I find Coady's argument ineffective.
Recent debate in metaethics over evolutionary debunking arguments against morality has shown a tendency to abstract away from relevant empirical detail. Here, I engage the debate about Darwinian debunking of morality with relevant empirical issues. I present four conditions that must be met in order for it to be reasonable to expect an evolved cognitive faculty to be reliable: the environment, information, error, and tracking conditions. I then argue that these conditions are not met in the case of our evolved faculty for moral judgement.
In section III of Pryor 2006a, I argued against the view that the mere fact that a thought-type is hyper-reliable directly gives one justification to believe a thought of that type. A close alternative says that our merely appreciating that the thought-type is hyper-reliable directly gives us that justification.
The variety of evidence thesis in confirmation theory states that more varied supporting evidence confirms a hypothesis to a greater degree than less varied evidence. Under a very plausible interpretation of this thesis, positive test results from multiple independent instruments confirm a hypothesis to a greater degree than positive test results from a single instrument. We invoke Bayesian Networks to model confirmation on grounds of evidence that is obtained from less than fully reliable instruments and show that the variety of evidence thesis is not sacrosanct when testing is conducted with less than fully reliable instruments: under certain conditions, a hypothesis receives more confirmation from evidence that is obtained from one rather than from more independent instruments. In the appendix, we prove certain convergence results for large numbers of positive test results from single versus multiple less than fully reliable instruments.
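The reversal described above can be reproduced in a toy model, assumed here loosely in the spirit of Bovens and Hartmann's setup rather than taken from the paper: a reliable instrument reports the truth, and an unreliable one reads "positive" with probability a regardless of the truth.

```python
# One-vs-many-instruments comparison under an assumed instrument model:
# reliable (prob. rho) -> reports the truth; unreliable -> 'positive'
# with probability a regardless. h is the prior of the hypothesis.
def posterior_same_instrument(h, rho, a):
    """P(H | two positive readings from ONE instrument of unknown reliability)."""
    like_h = rho + (1 - rho) * a ** 2      # reliable, or unreliable twice 'lucky'
    like_not = (1 - rho) * a ** 2
    return h * like_h / (h * like_h + (1 - h) * like_not)

def posterior_two_instruments(h, rho, a):
    """P(H | one positive reading from EACH of two independent instruments)."""
    like_h = (rho + (1 - rho) * a) ** 2
    like_not = ((1 - rho) * a) ** 2
    return h * like_h / (h * like_h + (1 - h) * like_not)

# With low reliability and a low false-positive rate, ONE instrument wins:
print(round(posterior_same_instrument(0.1, 0.3, 0.2), 3))   # ~0.566
print(round(posterior_two_instruments(0.1, 0.3, 0.2), 3))   # ~0.523
# ...while for other parameters the variety-of-evidence thesis holds:
print(round(posterior_same_instrument(0.1, 0.5, 0.5), 3))   # ~0.357
print(round(posterior_two_instruments(0.1, 0.5, 0.5), 3))   # 0.5
```

Intuitively, when false positives are rare, two positives from the same instrument strongly suggest that this instrument is reliable, and a reliable instrument vindicates the hypothesis; spreading the readings over two instruments dilutes that reliability inference.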
In the Philosophical Investigations, Wittgenstein construes psychological facts as patterns exhibited by 'weaves' which include a person's behaviour as well as her temporal and social surroundings. Avowals, in being linguistic elements of such patterns, come to be taken as expressing psychological facts in a way that, given the general liberty in pattern description, is normal for all conspicuous elements of behavioural patterns. Speakers come to be taken to express psychological facts because avowals are semantically self-predicating (which is understandable in the light of the normal ways they are learnt). That avowals come to be reliable expressions of their psychological facts is anything but surprising, given normal human capacities for learning to behave in patterns; furthermore, avowals can supplement incomplete patterns and thus define them, because articulated sentences add high amounts of complexity. Though not intro-evidentially descriptive, avowals can be descriptions in the way that stating one's impressions of x can be a description of x.
Internalists have criticised reliabilism for overlooking the importance of the subject's point of view in the generation of knowledge. This paper argues that there is a troubling ambiguity in the intuitive examples that internalists have used to make their case, and on either way of resolving this ambiguity, reliabilism is untouched. However, the argument used to defend reliabilism against the internalist cases could also be used to defend a more radical form of externalism in epistemology.
Necessity holds that, if a proposition A supports another proposition B, then A necessarily supports B. John Greco contends that one can resolve Hume's Problem of Induction only if she rejects Necessity in favor of reliabilism. If Greco's contention is correct, we would have good reason to reject Necessity and endorse reliabilism about inferential justification. Unfortunately, Greco's contention is mistaken. I argue that there is a plausible reply to Hume's Problem that both endorses Necessity and is at least as good as Greco's alternative. Hence, Greco provides a good reason for neither rejecting Necessity nor endorsing inferential reliabilism.
There is an ancient, yet still lively, debate in moral epistemology about the epistemic significance of disagreement. One of the important questions in that debate is whether, and to what extent, the prevalence and persistence of disagreement between our moral intuitions causes problems for those who seek to rely on intuitions in order to make moral decisions, issue moral judgments, and craft moral theories. Meanwhile, in general epistemology, there is a relatively young, and very lively, debate about the epistemic significance of disagreement. A central question in that debate concerns peer disagreement: When I am confronted with an epistemic peer with whom I disagree, how should my confidence in my beliefs change (if at all)? The disagreement debate in moral epistemology has not been brought into much contact with the disagreement debate in general epistemology (though McGrath is an important exception). A purpose of this paper is to increase the area of contact between these two debates. In Section 1, I try to clarify the question I want to ask in this paper – this is the question whether we have any reasons to believe what I shall call “anti-intuitivism.” In Section 2, I argue that anti-intuitivism cannot be supported solely by investigating the mechanisms that produce our intuitions. In Section 3, I discuss an anti-intuitivist argument from disagreement which relies on the so-called “Equal Weight View.” In Section 4, I pause to clarify the notion of epistemic parity and to explain how it ought to be understood in the epistemology of moral intuition. In Section 5, I return to the anti-intuitivist argument from disagreement and explain how an apparently-vulnerable premise of that argument may be quite resilient. In Section 6, I introduce a novel objection against the Equal Weight View in order to show how I think we can successfully resist the anti-intuitivist argument from disagreement.
Goldman, though still a reliabilist, has made some recent concessions to evidentialist epistemologies. I agree that reliabilism is most plausible when it incorporates certain evidentialist elements, but I try to minimize the evidentialist component. I argue that fewer beliefs require evidence than Goldman thinks, that Goldman should construe evidential fit in process reliabilist terms, rather than the way he does, and that this process reliabilist understanding of evidence illuminates such important epistemological concepts as propositional justification, ex ante justification, and defeat.
This paper argues that there is no sustainable theoretical alternative for building knowledge without principles such as cooperation among individuals, aimed at the formation and distribution of beliefs. This principle helps to conceive both the relation between internalist and externalist theories and a cognitive explanation based on the concept of epistemic warrant. The concluding remark is that concepts such as evidence or reliability can only be conceived as skills of subjects belonging to a community.
This paper explores how data serve as evidence for phenomena. In contrast to standard philosophical models which invite us to think of evidential relationships as logical relationships, I argue that evidential relationships in the context of data-to-phenomena reasoning are empirical relationships that depend on holding the right sort of pattern of counterfactual dependence between the data and the conclusions investigators reach about the phenomena themselves.
Hans Reichenbach is well known for his limiting frequency view of probability, with his most thorough account given in The Theory of Probability in 1935/1949. Perhaps less known are Reichenbach’s early views on probability and its epistemology. In his doctoral thesis from 1915, Reichenbach espouses a Kantian view of probability, where the convergence limit of an empirical frequency distribution is guaranteed to exist thanks to the synthetic a priori principle of lawful distribution. Reichenbach claims to have given a purely objective account of probability, while integrating the concept into a more general philosophical and epistemological framework. A brief synopsis of Reichenbach’s thesis and a critical analysis of the problematic steps of his argument will show that the roots of many of his most influential insights on probability and causality can be found in this early work.
This paper assesses the comparative reliability of two belief-revision rules relevant to the epistemology of disagreement, the Equal Weight and Stay the Course rules. I use two measures of reliability for probabilistic belief-revision rules, calibration and Brier scoring, to give a precise account of epistemic peerhood and epistemic reliability. On the calibration measure of reliability, epistemic peerhood is easy to come by, and employing the Equal Weight rule generally renders you less reliable than Staying the Course. On the Brier-score measure of reliability, epistemic peerhood is much more difficult to come by, but employing the Equal Weight rule always renders you more reliable than Staying the Course. I conclude with some normative lessons we can draw from these formal results.
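Brier scoring and the Equal Weight rule are easy to state in code; the forecast record below is invented purely for illustration:

```python
# Brier score (mean squared error of probabilistic forecasts; lower is
# better) and the Equal Weight rule applied to a made-up forecast record.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def equal_weight(mine, peers):
    """Split the difference with an epistemic peer on each question."""
    return [(m + p) / 2 for m, p in zip(mine, peers)]

outcomes = [1, 0, 1, 1, 0]
mine     = [0.9, 0.4, 0.6, 0.7, 0.2]
peer     = [0.6, 0.1, 0.9, 0.5, 0.4]
print(round(brier(mine, outcomes), 3))                      # Stay the Course score
print(round(brier(equal_weight(mine, peer), outcomes), 3))  # Equal Weight (lower here)
```

Averaging with a peer tends to pull extreme errors toward the middle, which is why Brier scoring, a strictly proper squared-error measure, can reward conciliation in the way the paper describes.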
Epistemic trust is crucial for science. This article aims to identify the kinds of assumptions that are involved in epistemic trust as it is required for the successful operation of science as a collective epistemic enterprise. The relevant kind of reliance should involve working from the assumption that the epistemic endeavors of others are appropriately geared towards the truth, but the exact content of this assumption is more difficult to analyze than it might appear. The root of the problem is that methodological decisions in science typically involve a complex trade-off between the reliability of positive results, the reliability of negative results, and the investigation's power (the rate at which it delivers definitive results). Which balance between these is the ‘correct’ one can only be determined in light of an evaluation of the consequences of all the different possible outcomes of the inquiry. What it means for the investigation to be ‘appropriately geared towards the truth’ thus depends on certain value judgments. I conclude that in the optimal case, trusting someone in her capacity as an information provider also involves a reliance on her having the right attitude towards the possible consequences of her epistemic work.
In “Process Reliabilism and the Value Problem” I argue that Erik Olsson and Alvin Goldman's conditional probability solution to the value problem in epistemology is unsuccessful and that it makes significant internalist concessions. In “Kinds of Learning and the Likelihood of Future True Beliefs” Olsson and Martin Jönsson try to show that my argument does “not in the end reduce the plausibility” of Olsson and Goldman's account. Here I argue that, while Olsson and Jönsson clarify and amend the conditional probability approach in a number of helpful ways, my case against it remains intact. I conclude with a constructive proposal as to how their account may be steered in a more promising direction.
I argue that beliefs that are true whenever held -- like I exist, I am thinking about myself, and (in an object-dependent framework) Jack = Jack -- needn’t on that account be a priori. It does however seem possible to remove the existential commitment from the last example, to get a belief that is knowable a priori. I discuss some difficulties concerning how to do that.
Deborah Mayo's view of science is that learning occurs by severely testing specific hypotheses. Mayo expounded this thesis in her (1996) Error and the Growth of Experimental Knowledge (EGEK). This volume consists of a series of exchanges between Mayo and distinguished philosophers representing competing views of the philosophy of science. The tone of the exchanges is lively, edifying and enjoyable. Mayo's error-statistical philosophy of science is critiqued in the light of positions which place more emphasis on large-scale theories. The result clarifies Mayo's account and highlights her contribution to the philosophy of science -- in particular, her contribution to the philosophy of those sciences that rely heavily on statistical analysis. The second half of the volume considers the application (or extension) of an error-statistical philosophy of science to theory testing in economics, causal modelling and legal epistemology. The volume also includes a contribution to the frequentist philosophy of statistics written by Mayo in collaboration with Sir David Cox.