For more than a half-century, evidence scholars have been exploring whether the criminal standard of proof can be grounded in decision theory. Such grounding would require the emergence of a social consensus about the utilities to be assigned to the four outcomes at trial. Significant disagreement remains, even among legal scholars, about the relative desirability of those outcomes and even about the formalisms for manipulating their respective utilities. We attempt to diagnose the principal reasons for this dissensus and to suggest ways in which a broadly shared evaluation might be forged, both with respect to the appropriate equations for defining the standard of proof and with respect to the appropriate utilities to associate with the various trial outcomes. Where consensus cannot be forged, we hold that the remaining differences can probably be finessed. We also suggest ways of eliciting individuals' utilities on these matters that avoid the usual flaws of such surveys. Along the way, we note (a) the disproportionate role that the Blackstone ratio of errors continues to play in appraisals of the utilities of trial outcomes (despite its unintelligibility in the context of utilities) and (b) the persisting belief, for which there is no theoretical basis, that every plausible assignment of utilities will inevitably result in a very high standard of proof. Finally, we examine some of the technical features associated with a proposed rank ordering of the utilities of trial outcomes.
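For orientation, here is a minimal sketch of the expected-utility calculation at issue; the notation and derivation are a standard reconstruction of the framework, not the authors' own formulation. Write U(C,G), U(C,I), U(A,G), and U(A,I) for the utilities of convicting the guilty, convicting the innocent, acquitting the guilty, and acquitting the innocent. Conviction maximizes expected utility just when the probability of guilt p satisfies

\[
p\,U(C,G) + (1-p)\,U(C,I) \;>\; p\,U(A,G) + (1-p)\,U(A,I),
\]

which rearranges to the threshold

\[
p \;>\; p^{*} \;=\; \frac{U(A,I) - U(C,I)}{\bigl[U(A,I) - U(C,I)\bigr] + \bigl[U(C,G) - U(A,G)\bigr]}.
\]

On this reconstruction the standard of proof p* is fixed entirely by the four utilities, which is why persistent disagreement about those utilities blocks consensus on the standard itself.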
This paper propounds the following theses: (1) that the traditional focus on the Blackstone ratio of errors as a device for setting the criminal standard of proof is ill-conceived; (2) that the preoccupation with the rate of false convictions in criminal trials is myopic; and (3) that the key ratio of interest, in judging the political morality of a system of criminal justice, is that between the risk an innocent person runs of being falsely convicted of a serious crime and the risk of being criminally victimized by someone who was falsely acquitted.
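To make the third thesis concrete (the notation is ours, and the numbers below are purely hypothetical illustrations, not the paper's data): let P(FC) be the annual risk that an innocent citizen is falsely convicted of a serious crime, and P(V) the annual risk of being criminally victimized by someone who was falsely acquitted. The ratio the paper treats as central is

\[
R \;=\; \frac{P(FC)}{P(V)},
\]

so that with, say, hypothetical risks of 1 in 100,000 for false conviction and 5 in 100,000 for victimization by the falsely acquitted, R = 0.2, and an innocent person is five times more likely to be harmed by a false acquittal than by a false conviction. The Blackstone ratio, by contrast, compares raw counts of erroneous verdicts and, on the paper's view, says nothing about either risk.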
This paper seeks to show that Achinstein's recent attempt to establish that both parties to the wave-particle debate in 19th-century optics were Bayesian conditionalizers forces us to ignore several of the key conceptual issues in that controversy, not least the role of the vera causa principle and, more important still, the role of positive evidence in securing acceptance for the wave theory of light.
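A brief gloss on the technical term, which the abstract presupposes rather than defines: a Bayesian conditionalizer updates degrees of belief by conditionalization, so that on learning evidence E with certainty, the new credence in a hypothesis H is

\[
P_{\text{new}}(H) \;=\; P_{\text{old}}(H \mid E) \;=\; \frac{P_{\text{old}}(E \mid H)\,P_{\text{old}}(H)}{P_{\text{old}}(E)}.
\]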
Normative naturalism is a view about the status of epistemology and philosophy of science; it is a meta-epistemology. It maintains that epistemology can both discharge its traditional normative role and nonetheless claim a sensitivity to empirical evidence. The first sections of this essay set out the central tenets of normative naturalism, both in its epistemic and its axiological dimensions; later sections respond to criticisms of that species of naturalism from Gerald Doppelt, Jarrett Leplin and Alex Rosenberg.
It is widely supposed that the scientists in any field use identical standards for evaluating theories. Without such unity of standards, consensus about scientific theories is supposedly unintelligible. However, the hypothesis of uniform standards can explain neither scientific disagreement nor scientific innovation. This paper seeks to show how the presumption of divergent standards (when linked to a hypothesis of dominance) can explain agreement, disagreement and innovation. By way of illustrating how a rational community with divergent standards can encourage innovation and eventually reach consensus, recent developments in geophysics are discussed at some length.
For positivists and post-positivists alike, methodology has had a decidedly suspect status. Positivists saw methodological rules as stipulative conventions, void of any empirical content. Post-positivists (especially naturalistic ones) see such rules as mere descriptions of how research is conducted, carrying no normative force. It is argued here that methodological rules are fundamentally empirical claims, but ones which have significant normative bite. Methodology is thus divorced from both foundationalism and conventionalism.
Intuitionistic meta-methodologies, which abound in recent philosophy of science, take the criterion of success for theories of scientific rationality to be whether those theories adequately explicate our intuitive judgments of rationality in exemplary cases. Garber's (1985) critique of Laudan's (1977) intuitionistic meta-methodology, correct as far as it goes, does not go far enough. Indeed, Garber himself advocates a form of intuitionistic meta-methodology; he merely denies any special role for historical (as opposed to contemporary or imaginary) test cases. What all such positions lack is a base from which to inform, criticize, or restructure our core methodological intuitions. To acquiesce in this is to deny that exemplary cases can serve the sort of warranting role required for intuitionism. This point is reinforced by a series of reasons for denying the warranting role of pre-analytic judgments of rationality. These reasons point the way toward an improved approach to meta-methodology.
This essay contains a partial exploration of some key concepts associated with the epistemology of realist philosophies of science. It shows that neither reference nor approximate truth will do the explanatory jobs that realists expect of them. Equally, several widely-held realist theses about the nature of inter-theoretic relations and scientific progress are scrutinized and found wanting. Finally, it is argued that the history of science, far from confirming scientific realism, decisively confutes several extant versions of avowedly 'naturalistic' forms of scientific realism.
In this study of Auguste Comte's philosophy of science, an attempt is made to explicate his views on such methodological issues as explanation, prediction, induction and hypothesis. Comte's efforts to resolve the dual problems of demarcation and meaning led to the enunciation of principles of verifiability and predictability. Comte's hypothetico-deductive method is seen to permit conjectures dealing with unobservable entities.