Conceivability is an important source of our beliefs about what is possible; inconceivability is an important source of our beliefs about what is impossible. What are the connections between the reliability of these sources? If one is reliable, does it follow that the other is also reliable? The central contention of this paper is that, suitably qualified, the reliability of inconceivability implies the reliability of conceivability, but the reliability of conceivability fails to imply the reliability of inconceivability.
This article investigates whether investors consider the reliability of companies’ sustainability information when determining the companies’ market value. Specifically, we examine market reactions (in terms of abnormal returns) to events that increase the reliability of companies’ sustainability information but do not provide markets with additional sustainability information. Controlling for competing effects, we regard companies’ additions to an internationally important sustainability index as such events and consider possible determinants for market reactions. Our results suggest, first, that investors take into account the reliability of sustainability information when determining the market value of a company and, second, that the benefits of increased reliability of sustainability information vary cross-sectionally. More specifically, market reactions are stronger for companies that carry higher risks for investors (e.g., higher systematic investment risk, higher financial leverage, and higher levels of opportunistic management behavior). Finally, we show that the benefits of an increase in the reliability of sustainability information are higher in times of economic uncertainty (e.g., during economic downturns and periods of generally high stock price volatility).
Rumors, for better or worse, are an important element of public discourse. The present paper focuses on rumors as an epistemic phenomenon rather than as a social or political problem. In particular, it investigates the relation between the mode of transmission and the reliability, if any, of rumors as a source of knowledge. It does so by comparing rumor with two forms of epistemic dependence that have recently received attention in the philosophical literature: our dependence on the testimony of others, and our dependence on what has been called the ‘coverage-reliability’ of our social environment (Goldberg 2010). According to the latter, an environment is ‘coverage-reliable’ if, across a wide range of beliefs and given certain conditions, it supports the following conditional: If ~p were true I would have heard about it by now. However, in information-deprived social environments with little coverage-reliability, rumors may transmit information that could not otherwise be had. This suggests that a trade-off exists between levels of trust in the coverage-reliability of official sources and (warranted) trust in rumor as a source of information.
With respect to the confirmation of mathematical propositions, proof possesses an epistemological authority unmatched by other means of confirmation. This paper is an investigation into why this is the case. I make use of an analysis drawn from an early reliability perspective on knowledge to help make sense of mathematical proof's singular epistemological status.
Aims: The Modified Reasons for Smoking Scale (MRSS) is a widely accepted scale that measures psychological functions of smoking. The scale has been translated into Dutch and validated, in order to be used in clinical smoking cessation practice in the Dutch-speaking part of Belgium. This study examined the factorial structure, reliability and validity of the scale in a sample of smokers characterized by a high level of dependence and an explicit motivation to stop smoking. Method: The participants were 383 smokers who volunteered at the stop-smoking clinic of a Belgian university hospital. They were administered the translated MRSS and the Fagerström Test for Nicotine Dependence (FTND). Through a clinical interview, smoking behaviour and smoking history were assessed (daily smoking consumption, years smoking, number of quit attempts, weeks stopped, alcohol and coffee consumption, CO level). Exploratory factor analysis was performed. Internal consistency was studied in order to examine the reliability. The concurrent validity was assessed by means of MANOVA, ANOVA and correlation analysis. Results: Factor analysis identified four factors, named stimulation, pleasure of smoking, social smoking and automatism of smoking. Cronbach's alpha ranged from 0.65 (automatism) to 0.72 (stimulation). MANOVA indicated the influence of the variables age, sex, daily consumption and the FTND (the latter two variables showed a dose-dependent association with each subscale). Regression analysis revealed a relationship with dependence indicators, namely: the daily consumption, the number and duration of previous quit attempts, FTND, CO level and daily coffee intake. Conclusions: The Dutch translation of the MRSS identified four factors and revealed acceptable validity and reliability.
The adapted version of the translated scale should be implemented as a component of the psychological assessment procedure in smoking cessation treatment in Dutch-speaking areas.
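The internal-consistency statistic reported in the abstract above, Cronbach's alpha, can be sketched in a few lines. The score matrix below is entirely hypothetical and is not the study's data; it merely illustrates the computation for one four-item subscale.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses of five smokers to a 4-item subscale (1-5 Likert)
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # → 0.95
```

Values in the 0.65-0.72 range, as the study reports, are conventionally read as acceptable for short subscales.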
Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a motor response that has already been initiated (reflected in the Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. In order to examine the reliability of this measure, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using intra-class correlation). Our results suggest that an approach which uses the average of all available sessions and all trials of each session, and excludes outliers based on predetermined lenient criteria, yields reliable SSRT estimates while not excluding too many participants. Our findings support the reliability of SSRT as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype.
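The reliability measure named in the abstract above, intra-class correlation, can be illustrated with a simple one-way random-effects ICC over repeated sessions. The SSRT values below are invented for illustration only, and the one-way formulation is just one of several ICC variants the study might have used.

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an (n_subjects x k_sessions) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    # Between-subjects and within-subject mean squares
    ms_between = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical SSRT estimates (ms) for 4 participants across 3 sessions
ssrt = np.array([
    [210.0, 215.0, 208.0],
    [260.0, 255.0, 262.0],
    [190.0, 198.0, 192.0],
    [240.0, 235.0, 238.0],
])
print(round(icc_oneway(ssrt), 2))
```

Averaging across all available sessions, as the study recommends, raises reliability precisely because between-subject differences then dominate the session-to-session noise in the denominator.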
In this article, I shall examine some of the issues and questions involved in the technology of autonomous robots, a technology that has developed greatly and is advancing rapidly. I shall do so with reference to a particularly critical field: autonomous military robotic systems. In recent times, various issues concerning the ethical implications of these systems have been the object of increasing attention from roboticists, philosophers and legal experts. The purpose of this paper is not to deal with these issues, but to show how the autonomy of those robotic systems, by which I mean the full automation of their decision processes, raises difficulties and also paradoxes that are not easy to solve. This is especially so when considering the autonomy of those robotic systems in their decision processes alongside their reliability. Finally, I would like to show how difficult it is to respond to these difficulties and paradoxes by calling into play a strong formulation of the precautionary principle.
We consider the procedure for small-sample estimation of reliability parameters. The main shortcomings of the classical methods and the Bayesian approach are analyzed. Models that find robust Bayesian estimates are proposed. The sensitivity of the Bayesian estimates to the choice of the prior distribution functions is investigated using models that find upper and lower bounds. The proposed models reduce to optimization problems in the space of distribution functions.
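The robust-Bayes idea sketched in the abstract above, bounding estimates over a family of prior distributions rather than committing to one, can be illustrated for the simplest reliability model: a binomial success count with conjugate Beta priors. All numbers below are hypothetical, and the small prior family stands in for the optimization over distribution functions the paper describes.

```python
# Robust Bayesian bounds for component reliability p: posterior mean of a
# Beta(a, b) prior after s successes in n demands, swept over a prior family.
def posterior_mean(a: float, b: float, s: int, n: int) -> float:
    """Posterior mean of p under a Beta(a, b) prior and Binomial(n, p) data."""
    return (a + s) / (a + b + n)

s, n = 9, 10  # small sample: 9 successes in 10 demands (hypothetical)
priors = [(0.5, 0.5), (1, 1), (2, 2), (1, 3), (3, 1)]  # candidate Beta priors
estimates = [posterior_mean(a, b, s, n) for a, b in priors]
print(round(min(estimates), 3), round(max(estimates), 3))  # → 0.714 0.864
```

The spread between the lower and upper bounds is exactly the prior sensitivity that, per the abstract, small samples cannot wash out.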
The formal representation of the strength of witness testimony has been historically tied to a formula — proposed by Condorcet — that uses a factor representing the reliability of an individual witness. This approach encourages a false dilemma between hyper-scepticism about testimony, especially to extraordinary events such as miracles, and an overly sanguine estimate of reliability based on insufficiently detailed evidence. Because Condorcet’s formula does not have the resources for representing numerous epistemically relevant details in the unique situation in which testimony is given, many late 19th century thinkers like Venn turned away from the probabilistic analysis of testimony altogether. But a more nuanced approach using Bayes factors provides a better, more flexible, formalism for representing the evidential force of testimony.
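The Bayes-factor formalism favored by the abstract above can be sketched in odds form: the testimony multiplies the prior odds of the reported event by the ratio of the testimony's likelihood under the event to its likelihood under no event. The witness probabilities and prior below are hypothetical, chosen to show why a highly reliable witness can still fail to make an extraordinary event probable.

```python
def posterior_odds(prior_odds: float, bayes_factor: float) -> float:
    """Posterior odds of the reported event: prior odds times the Bayes factor."""
    return prior_odds * bayes_factor

def odds_to_prob(odds: float) -> float:
    """Convert odds to a probability."""
    return odds / (1 + odds)

# Hypothetical witness: reports the event 99% of the time when it occurs,
# and falsely reports it 0.1% of the time when it does not.
bf = 0.99 / 0.001           # Bayes factor = 990
prior = 1 / 100_000         # prior odds of a very unlikely event
post = posterior_odds(prior, bf)
print(round(odds_to_prob(post), 4))  # → 0.0098
```

A single reliability number would collapse all of this into one factor; the Bayes-factor framing keeps the hit rate, false-report rate, and prior separately adjustable to the details of the situation.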
Standard characterizations of virtue epistemology divide the field into two camps: virtue reliabilism and virtue responsibilism. Virtue reliabilists think of intellectual virtues as reliable cognitive faculties or abilities, while virtue responsibilists conceive of them as good intellectual character traits. I argue that responsibilist character virtues sometimes satisfy the conditions of a reliabilist conception of intellectual virtue, and that consequently virtue reliabilists, and reliabilists in general, must pay closer attention to matters of intellectual character. This leads to several new questions and challenges for any reliabilist epistemology.
A tempting argument for human rationality goes like this: it is more conducive to survival to have true beliefs than false beliefs, so it is more conducive to survival to use reliable belief-forming strategies than unreliable ones. But reliable strategies are rational strategies, so there is a selective advantage to using rational strategies. Since we have evolved, we must use rational strategies. In this paper I argue that some criticisms of this argument offered by Stephen Stich fail because they rely on unsubstantiated interpretations of some results from experimental psychology. I raise two objections to the argument: (i) even if it is advantageous to use rational strategies, it does not follow that we actually use them; and (ii) natural selection need not favor only or even primarily reliable belief-forming strategies.
In this paper I consider the reliability condition in Alvin Plantinga's proper functionalist account of epistemic warrant. I begin by reviewing in some detail the features of the reliability condition as Plantinga has articulated it. From there, I consider what is needed to ground or secure the sort of reliability which Plantinga has in mind, and argue that what is needed is a significant causal condition which has generally been overlooked. Then, after identifying eight versions of the relevant sort of reliability, I examine each alternative as to whether its requirement, along with Plantinga's other proposed conditions, would give us a satisfactory account of epistemic warrant. I conclude that there is little to no hope of formulating a reliability condition that would yield a satisfactory analysis of the sort Plantinga desires.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when, and only when, it increases the reliability of perception.
A study of moral intuitions performed by Joshua Greene and a group of researchers at Princeton University has recently received a lot of attention. Greene and his collaborators designed a set of experiments in which subjects underwent brain scanning as they were asked to respond to various practical dilemmas. They found that contemplation of some of these cases (cases where the subjects had to imagine that they must use some direct form of violence) elicited greater activity in certain areas of the brain associated with emotions compared with the other cases. It has been argued (e.g., by Peter Singer) that these results undermine the reliability of our moral intuitions, and therefore provide an objection to methods of moral reasoning that presuppose that they carry evidential weight (such as the idea of reflective equilibrium). I distinguish between two ways in which Greene's findings lend support to a sceptical attitude towards intuitions. I argue that, given the first version of the challenge, the method of reflective equilibrium can easily accommodate the findings. As for the second version of the challenge, I argue that it does not so much pose a threat specifically to the method of reflective equilibrium as to the idea that moral claims can be justified through rational argumentation in general.
We think of logic as objective. We also think that we are reliable about logic. These views jointly generate a puzzle: How is it that we are reliable about logic? How is it that our logical beliefs match an objective domain of logical fact? This is an instance of a more general challenge to explain our reliability about a priori domains. In this paper, I argue that the nature of this challenge has not been properly understood. I explicate the challenge both in general and for the particular case of logic. I also argue that two seemingly attractive responses – appealing to a faculty of rational insight or to the nature of concept possession – are incapable of answering the challenge.
This paper explores what constitutes reliability in persons, particularly intellectual reliability. It considers global reliability, the overall reliability of persons, encompassing both the theoretical and practical realms; sectorial reliability, that of a person in a subject-matter (or behavioral) domain; and focal reliability, that of a particular element, such as a belief. The paper compares reliability with predictability of the kind most akin to it and distinguishes reliability as an intellectual virtue from reliability as an intellectual power. The paper also connects reliability with insight, reasoning, knowledge, and trust. It is argued that insofar as reliability is an intellectual virtue, it must meet both external standards of correctitude and internal standards of justification.
‘Responsibilist’ approaches to epistemology link knowledge and justification with epistemically responsible belief management, where responsible management is understood to involve an essential element of guidance by recognized epistemic norms. By contrast, reliabilist approaches stress the de facto reliability of cognitive processes, rendering epistemic self-consciousness inessential. I argue that, although an adequate understanding of human knowledge must make room for both responsibility and reliability, philosophers have had a hard time putting them together, largely owing to a tendency, on the part of responsibilists, to adopt an overly demanding, hyper-intellectualized conception of what epistemic responsibility demands. I trace this tendency towards hyper-intellectualism to a wish to meet scepticism head on, a wish that enforces adherence to a particular model of the structure of epistemic justification. I argue that a more humanly reasonable conception of epistemic justification suggests an alternative model. With this model in hand, we can both deflect sceptical problems and combine responsibility with reliability in a satisfying way. Philosophical Papers Vol. 37 (1) 2008: pp. 1-26.
There is surprising evidence that introspection of our phenomenal states varies greatly between individuals and within the same individual over time. This puts pressure on the notion that introspection gives reliable access to our own phenomenology: introspective unreliability would explain the variability, while assuming that the underlying phenomenology is stable. I appeal to a body of neurocomputational, Bayesian theory and neuroimaging findings to provide an alternative explanation of the evidence: though some limited testing conditions can cause introspection to be unreliable, mostly it is our phenomenology itself that is variable. With this account of phenomenal variability, the occurrence of the surprising evidence can be explained while generally retaining introspective reliability.
Some twenty years ago, Bogen and Woodward challenged one of the fundamental assumptions of the received view, namely the theory-observation dichotomy, and argued for the introduction of the further category of scientific phenomena. The latter, Bogen and Woodward stressed, are usually unobservable and inferred from what is indeed observable, namely scientific data. Crucially, Bogen and Woodward claimed that theories predict and explain phenomena, but not data. But then, of course, the thesis of theory-ladenness, which has it that our observations are influenced by the theories we hold, cannot apply. On the basis of two case studies, I want to show that this consequence of Bogen and Woodward’s account is rather unrealistic. More importantly, I also object to Bogen and Woodward’s view that the reliability of data, which constitutes the precondition for data-to-phenomena inferences, can be secured without the theory one seeks to test. The case studies I revisit have figured heavily in the publications of Bogen and Woodward and others: the discovery of weak neutral currents and the discovery of the zebra pattern of magnetic anomalies. I show that, in the latter case, data can be ignored if they appear to be irrelevant from a particular theoretical perspective (TLI) and that, in the former case, the tested theory can be critical for the assessment of the reliability of the data (TLA). I argue that both TLI and TLA are much stronger senses of theory-ladenness than the classical thesis and that neither TLI nor TLA can be accommodated within Bogen and Woodward’s account.
We are reliable about logic in the sense that we by-and-large believe logical truths and disbelieve logical falsehoods. Given that logic is an objective subject matter, it is difficult to provide a satisfying explanation of our reliability. This generates a significant epistemological challenge, analogous to the well-known Benacerraf-Field problem for mathematical Platonism. One initially plausible way to answer the challenge is to appeal to evolution by natural selection (or to a related mechanism). The central idea is that the capacity for correct deductive reasoning conferred a heritable survival advantage upon our ancestors. However, there are several arguments that purport to show that evolutionary accounts cannot even in principle explain how it is that we are reliable about logic. In this paper, I address these arguments. I show that there is no general reason to think that evolutionary accounts are incapable of explaining our reliability about logic.
This paper concerns competing views on the nature of perceptual justification, and discusses the thought experiments that motivate them. Once reliabilism is rejected and some form of internalism is instead embraced, the following issue arises: must an internalist nevertheless require that perceptual justification involve the possession of evidence for the reliability of our perceptual processes? Matthias Steup answers in the affirmative, espousing what he calls internalist reliabilism. Some problems are raised for this form of internalism.
Reliabilism has come under recent attack for its alleged inability to account for the value we typically ascribe to knowledge. It is charged that a reliably produced true belief has no more value than does the true belief alone. I reply to these charges on behalf of reliabilism; not because I think reliabilism is the correct theory of knowledge, but rather because being reliably produced does add value of a sort to true beliefs. The added value stems from the fact that a reliably held belief is non-accidental in a particular way. While it is widely acknowledged that accidentally true beliefs cannot count as knowledge, it is rarely questioned why this should be so. An answer to this question emerges from the discussion of the value of reliability; an answer that holds interesting implications for the value and nature of knowledge.
Some time ago, F. P. Ramsey (1960) suggested that knowledge is true belief obtained by a reliable process. This suggestion has only recently begun to attract serious attention. In 'Discrimination and Perceptual Knowledge', Alvin Goldman (1976) argues that a person has knowledge only if that person's belief has been formed as a result of a reliable cognitive mechanism. In Belief, Truth, and Knowledge, David Armstrong (1973) argues that one has knowledge only if one's belief is a completely reliable sign of the truth of the proposition believed. On both of these theories, the reliability of one's belief is a necessary condition of that belief's being an instance of knowledge. These reliability theories have another interesting feature in common, namely, that neither of them explicitly requires or includes the traditional justification requirement for knowledge. Reliability has taken over the role of justification. This naturally leads to the question whether reliability and justification are related in some philosophically interesting fashion. In this paper I shall investigate this question. The result will be a positive proposal to the effect that justified belief is reliable belief. This result, in turn, explains why reliability can take over the role of justification in an account of knowledge. Moreover, the identification of justification with reliability constitutes a step toward the naturalization of normative epistemological concepts.
Critics of reliability theories of epistemic justification often claim that the 'generality problem' is an insurmountable difficulty for such theories. The generality problem is the problem of specifying the level of generality at which a belief-forming process is to be described for the purpose of assessing its reliability. This problem is not as intractable as it seems. There are illuminating solutions to analogous problems in the ethics literature. Reliabilists ought to attend to utilitarian approaches to choices between infinite utility streams; they also ought to attend to welfarist approaches to social choice situations that do not demand full aggregation of individual welfares. These analogies suggest that the traditional 'single number' approach to reliability is misguided. I argue that a new approach – the 'vector reliability' approach – is preferable. Vector reliability theories associate target beliefs with reliability vectors – that is, structured collections of reliability numbers – and construct criteria of epistemic justification that appeal to these vectors. The bulk of the theoretical labor involved in a reliability account of epistemic justification is thus transferred from picking a unique reliability number to constructing a plausible criterion of epistemic justification.
The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information gathering processes (IGPs) in terms of conditional independences, construct a minimal sufficient condition for a coherence ranking of information sets and assess whether the confidence boost that results from receiving information through independent IGPs is indeed a positive function of the coherence of the information set. There are multiple interpretations of what constitute IGPs of dubious quality. Do we know our IGPs to be no better than randomization processes? Or, do we know them to be better than randomization processes but not quite fully reliable, and if so, what is the nature of this lack of full reliability? Or, do we not know whether they are fully reliable or not? Within the latter interpretation, does learning something about the quality of some IGPs teach us anything about the quality of the other IGPs? The Bayesian-network models demonstrate that the success of the coherentist canon is contingent on what interpretation one endorses of the claim that our IGPs are of dubious quality.
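The core mechanism in the abstract above, a confidence boost from coherent reports delivered by conditionally independent information-gathering processes, can be sketched in its simplest two-witness form. The hit and false-alarm rates below are hypothetical stand-ins for "better than randomization but not fully reliable" IGPs.

```python
# Two partially reliable IGPs independently report that hypothesis H is true.
# Conditional independence given H lets the likelihoods simply multiply.
def posterior_two_reports(prior: float, hit: float, false_alarm: float) -> float:
    """P(H | both IGPs report H), with reports conditionally independent given H."""
    num = prior * hit ** 2                       # both report H, H true
    den = num + (1 - prior) * false_alarm ** 2   # ... plus both report H, H false
    return num / den

# Hypothetical: weak prior, IGPs report truly 80% and falsely 30% of the time
print(round(posterior_two_reports(prior=0.1, hit=0.8, false_alarm=0.3), 3))  # → 0.441
```

The posterior (about 0.44) far exceeds the prior (0.1), which is the coherentist boost; the paper's point is that whether such boosts track coherence depends on exactly how the IGPs' "dubious quality" is modeled.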
In computer simulations of physical systems, the construction of models is guided, but not determined, by theory. At the same time simulation models are often constructed precisely because data are sparse. They are meant to replace experiments and observations as sources of data about the world; hence they cannot be evaluated simply by being compared to the world. So what can be the source of credibility for simulation models? I argue that the credibility of a simulation model comes not only from the credentials supplied to it by the governing theory, but also from the antecedently established credentials of the model building techniques employed by the simulationists. In other words, there are certain sorts of model building techniques which are taken, in and of themselves, to be reliable. Some of these model building techniques, moreover, incorporate what are sometimes called “falsifications.” These are contrary-to-fact principles that are included in a simulation model and whose inclusion is taken to increase the reliability of the results. The example of a falsification that I consider, called artificial viscosity, is in widespread use in computational fluid dynamics. Artificial viscosity, I argue, is a principle that is successfully and reliably used across a wide domain of fluid dynamical applications, but it does not offer even an approximately “realistic” or true account of fluids. Artificial viscosity, therefore, is a counter-example to the principle that success implies truth – a principle at the foundation of scientific realism. It is an example of reliability without truth.
Many solutions of the Goodman paradox have been proposed but so far no agreement has been reached about which is the correct solution. However, I will not contribute here to the discussion with a new solution. Rather, I will argue that a solution has been in front of us for more than two hundred years because a careful reading of Hume’s account of inductive inferences shows that, contrary to Goodman’s opinion, it embodies a correct solution of the paradox. Moreover, the account even includes a correct answer to Mill’s question of why in some cases a single instance is sufficient for a complete induction, since Hume gives a well-supported explanation of this reliability phenomenon. The discussion also suggests that Bayesian theory by itself cannot explain this phenomenon. Finally, we will see that Hume’s explanation of the reliability phenomenon is surprisingly similar to the explanation given lately by a number of naturalistic philosophers in their discussion of the Goodman paradox.