The article begins by describing two longstanding problems associated with direct inference. One problem concerns the role of uninformative frequency statements in inferring probabilities by direct inference. A second problem concerns the role of frequency statements with gerrymandered reference classes. I show that past approaches to the problem associated with uninformative frequency statements yield the wrong conclusions in some cases. I propose a modification of Kyburg’s approach to the problem that yields the right conclusions. Past theories of direct inference have postponed treatment of the problem associated with gerrymandered reference classes by appealing to an unexplicated notion of projectability. I address the lacuna in past theories by introducing criteria for being a relevant statistic. The prescription that only relevant statistics play a role in direct inference corresponds to the sort of projectability constraints envisioned by past theories.
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z.
The article proceeds upon the assumption that the beliefs and degrees of belief of rational agents satisfy a number of constraints, including: consistency and deductive closure for belief sets, conformity to the axioms of probability for degrees of belief, and the Lockean Thesis concerning the relationship between belief and degree of belief. Assuming that the beliefs and degrees of belief of both individuals and collectives satisfy the preceding three constraints, I discuss what further constraints may be imposed on the aggregation of beliefs and degrees of belief. Some possibility and impossibility results are presented. The possibility results suggest that the three proposed rationality constraints are compatible with reasonable aggregation procedures for belief and degree of belief.
The present article illustrates a conflict between the claim that rational belief sets are closed under deductive consequences, and a very inclusive claim about the factors that are sufficient to determine whether it is rational to believe respective propositions. Inasmuch as it is implausible to hold that the factors listed here are insufficient to determine whether it is rational to believe respective propositions, we have good reason to deny that rational belief sets are closed under deductive consequences.
In a recent article, Joel Pust argued that direct inferences based on reference properties of differing arity are incommensurable, and so direct inference cannot be used to resolve the Sleeping Beauty problem. After discussing the defects of Pust's argument, I offer reasons for thinking that direct inferences based on reference properties of differing arity are commensurable, and that we should prefer direct inferences based on logically stronger reference properties, regardless of arity.
In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
Meta-induction, in its various forms, is an imitative prediction method, where the prediction methods and the predictions of other agents are imitated to the extent that those methods or agents have proven successful in the past. In past work, Schurz demonstrated the optimality of meta-induction as a method for predicting unknown events and quantities. However, much recent discussion, along with formal and empirical work, on the Wisdom of Crowds has extolled the virtue of diverse and independent judgment as essential to the maintenance of 'wise crowds'. This suggests that meta-inductive prediction methods could undermine the wisdom of the crowd inasmuch as these methods recommend that agents imitate the predictions of other agents. In this article, we evaluate meta-inductive methods with a focus on the impact on a group's performance that may result from including meta-inductivists among its members. In addition to considering cases of global accessibility (i.e., cases where the judgments of all members of the group are available to all of the group's members), we consider cases where agents only have access to the judgments of other agents within their own local neighborhoods.
The applicability of Bayesian conditionalization in setting one’s posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, P_PRI(α|∧E), is available, where ∧E represents one’s complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one’s personal probability in a proposition by conditioning on a proposition that represents a proper subset of one’s complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, P_PRI(α|∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities.
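The standard Bayesian update rule that the abstract takes as its starting point, P_new(α) = P_PRI(α|∧E), can be sketched for a finite space of possible worlds. This is a generic illustration, not the article's own formalism; the function names and the world-set representation of propositions are choices made here for clarity.

```python
# Minimal sketch of Bayesian conditionalization over a finite space of worlds.
# A probability function is a dict mapping worlds to probabilities; a
# proposition (including the total evidence, "∧E") is the set of worlds
# at which it is true.

def conditionalize(prior, evidence):
    """Return the posterior obtained by conditioning `prior` on `evidence`."""
    p_e = sum(pr for w, pr in prior.items() if w in evidence)
    if p_e == 0:
        # Conditionalization is inapplicable when the evidence has prior
        # probability zero -- one limit on the rule's applicability.
        raise ValueError("evidence has prior probability zero")
    return {w: (pr / p_e if w in evidence else 0.0) for w, pr in prior.items()}

def probability(dist, proposition):
    """P(proposition) under the distribution `dist`."""
    return sum(pr for w, pr in dist.items() if w in proposition)
```

For example, with a prior of 0.2, 0.3, and 0.5 over three worlds, conditioning on the first two worlds renormalizes them to 0.4 and 0.6 and sets the excluded world to zero.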
The claims that Dr. F. H. George makes on behalf of his machines are obscurely stated. Does he claim that a machine has been made and has actually produced a kind of response which is incalculable, given the specification to which it has been built and also the prescribed conditions, what is put in for the particular performance in question? “Incalculable” does not mean that nobody has bothered to calculate, but that somebody has bothered, that the calculations show that the response in question is not among the calculated ones, that it is not chance or mechanical breakdown, but a specific response to something specific among the given conditions. If he could substantiate this claim, then I would reluctantly agree that some machines are cleverer and more thoughtful than I thought possible, and some men sillier and less thoughtful, for they deliberately design machines to do they know not what.
The objective of this study is to evaluate auditors’ perceived responsibility for fraud detection. Auditors play a critical role in managing fraud risk within organizations. Although professional standards and guidance prescribe responsibility in the area, little is known about auditors’ sense of responsibility for fraud detection, the factors affecting perceived responsibility, and how responsibility affects auditor performance. We use the triangle model of responsibility as a theoretical basis for examining responsibility and the effects of accountability, fraud type, and auditor type on auditors’ perceived fraud detection responsibility. We also test how perceived responsibility affects auditor brainstorming performance given the importance of brainstorming in audits. A sample of 878 auditors participated in an experiment with accountability pressure and fraud type manipulated randomly between subjects. As predicted, accountable auditors report higher detection responsibility than anonymous auditors. We also find a significant fraud type × auditor type interaction with external auditors perceiving the most detection responsibility for financial statement fraud, while internal auditors report similar detection responsibility for all fraud types. Analysis of the triangle model’s formative links reveals that professional obligation and personal control are significantly related to responsibility, while task clarity is not. Finally, the results indicate that perceived responsibility positively affects the number of detection procedures brainstormed and partially mediates the significant accountability–brainstorming relation.
The prevailing view among historians of science holds that Charles Darwin became a convinced transmutationist only in the early spring of 1837, after his Beagle collections had been examined by expert British naturalists. With respect to the fossil vertebrate evidence, some historians believe that Darwin was incapable of seeing or understanding the transmutationist implications of his specimens without the help of Richard Owen. There is ample evidence, however, that he clearly recognized the similarities between several of the fossil vertebrates he collected and some of the extant fauna of South America before he returned to Britain. These comparisons, recorded in his correspondence, his diary and his notebooks during the voyage, were instances of a phenomenon that he later called the “law of the succession of types.” Moreover, on the Beagle, he was following a geological research agenda outlined in the second volume of Charles Lyell’s Principles of Geology, which implies that paleontological data alone could provide an insight into the laws which govern the appearance of new species. Since Darwin claims in On the Origin of Species that fossil vertebrate succession was one of the key lines of evidence that led him to question the fixity of species, it seems certain that he was seriously contemplating transmutation during the Beagle voyage. If so, historians of science need to reconsider both the role of Britain’s expert naturalists and the importance of the fossil vertebrate evidence in the development of Darwin’s ideas on transmutation.
The point, for the 946,326th time, is that people get elected to office by currying the favor of powerful interest groups. They don’t get elected for their excellence as political philosophers. Congress has consistently failed to solve some serious problems with the cost, effectiveness, and safety of pharmaceuticals. In part, this failure results from the pharmaceutical industry convincing legislators to define policy problems in ways that protect industry profits. By targeting campaign contributions to influential legislators and by providing them with selective information, the industry manages to displace the public’s voice in developing pharmaceutical policy.
Why, when confronted with policy alternatives that could improve patient care, public health, and the economy, does Congress neglect those goals and tailor legislation to suit the interests of pharmaceutical corporations? In brief, for generations, the pharmaceutical industry has convinced legislators to define policy problems in ways that protect its profit margin. It reinforces this framework by selectively providing information and by targeting campaign contributions to influential legislators and allies. In this way, the industry displaces the public's voice in developing pharmaceutical policy. Unless citizens mobilize to confront the political power of pharmaceutical firms, objectionable industry practices and public policy will not change. Yet we need to refine this analysis. I propose a research agenda to uncover pharmaceutical influence. It develops the theory of dependence corruption to explain how the pharmaceutical industry is able to deflect the broader interests of the general public. It includes empirical studies of lobbying and campaign finance to uncover the means drug firms use to: (1) shape the policy framework adopted and information used to analyze policy; (2) subsidize the work of political allies; and (3) influence congressional voting.
Of the six basic categories which a normative ethical theory may recognize and exemplify, the first five are fairly clearly employed by Kant in the "Tugendlehre", but the sixth is not given adequate recognition by him. In order to establish those conclusions, one has to investigate the leading notion of the "Tugendlehre", that of obligatory ends. Closely connected with that notion is Kant's division of duties into perfect and imperfect ones. Consideration of a number of ways of elucidating that division leads one to conclude that it is not really so sharp as Kant suggests. The sphere of duty comprises a continuum of duties of wider and of narrower obligation; beyond that sphere belong many things which one morally "ought" to prize and to pursue, but which do not constitute obligatory ends for men.
This 2004 book reconfigures the basic problem of Christian thinking - 'How can human discourse refer meaningfully to a transcendent God?' - as a twofold demand for integrity: integrity of reason and integrity of transcendence. Centring around a provocative yet penetratingly faithful re-reading of Kant's empirical realism, and drawing on an impelling confluence of contemporary thinkers, Paul D. Janz argues that theology's 'referent' must be located within present empirical reality. Rigorously reasoned yet refreshingly accessible throughout, this book provides an important, attentively informed alternative to the growing trends toward obscurantism, radicalization and anti-reason in many recent assessments of theological cognition, while remaining equally alert to the hazards of traditional metaphysics. In the book's culmination, epistemology and Christology converge around problems of noetic authority and orthodoxy with a kind of innovation, depth and straightforwardness that readers of theology at all levels of philosophical acquaintance will find illuminating.
According to the paradigm of adaptive rationality, successful inference and prediction methods tend to be local and frugal. As a complement to work within this paradigm, we investigate the problem of selecting an optimal combination of prediction methods from a given toolbox of such local methods, in the context of changing environments. These selection methods are called meta-inductive (MI) strategies, if they are based on the success-records of the toolbox-methods. No absolutely optimal MI strategy exists—a fact that we call the “revenge of ecological rationality”. Nevertheless, one can show that a certain MI strategy exists, called “AW”, which is universally long-run optimal, with provably small short-run losses, in comparison to any set of prediction methods that it can use as input. We call this property universal access-optimality. Local and short-run improvements over AW are possible, but only at the cost of forfeiting universal access-optimality. The last part of the paper includes an empirical study of MI strategies in application to an 8-year-long data set from the Monash University Footy Tipping Competition.
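The general idea of a success-based MI strategy can be sketched as follows. This is a simplified illustration of attractivity weighting, not the paper's exact definition of AW: here a method's "attractivity" is how far its cumulative success exceeds the meta-strategy's own, and the meta-prediction is the attractivity-weighted average of the toolbox-methods' predictions. All names and the success measure (closeness of prediction to outcome) are choices made for this sketch.

```python
# Simplified sketch of an attractivity-weighted meta-inductive strategy.
# Predictions and outcomes are values in [0, 1].

def meta_inductive_predictions(method_predictions, outcomes):
    """method_predictions: dict mapping method name -> list of predictions;
    outcomes: list of realized outcomes. Returns the meta-strategy's
    round-by-round predictions."""
    scores = {m: 0.0 for m in method_predictions}  # cumulative success per method
    meta_score = 0.0                               # meta-strategy's own success
    meta_preds = []
    for t, outcome in enumerate(outcomes):
        # Attractivity: how much each method outperforms the meta-strategy so far.
        attract = {m: max(0.0, scores[m] - meta_score) for m in scores}
        total = sum(attract.values())
        if total > 0:
            pred = sum(attract[m] * method_predictions[m][t] for m in scores) / total
        else:
            # No method is attractive yet: fall back to the plain average.
            pred = sum(p[t] for p in method_predictions.values()) / len(scores)
        meta_preds.append(pred)
        # Score everyone by closeness of prediction to the realized outcome.
        for m in scores:
            scores[m] += 1.0 - abs(method_predictions[m][t] - outcome)
        meta_score += 1.0 - abs(pred - outcome)
    return meta_preds
```

With one method that is always right and one that is always wrong, the sketch starts at the uninformed average and then locks onto the successful method, illustrating how imitation tracks past success.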
Why is it that most among the relatively few moral philosophers since Kant who, like J. S. Mill, have discussed the question whether there can be moral duties to oneself, have answered it negatively? One reason is that those philosophers have supposed that all moral action must be, inter alia, social; and they may have thought so because of their commitment to what is here called a 'corporationist' moral view. But such a conception of morality as social is objectionable because it does not square with ordinary opinion and because it introduces an artificial division between types of action which go together in real life.
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions that express high conditional probabilities. In this paper we investigate four prominent LP systems, the systems _O_, _P_, _Z_, and _QC_. These systems differ in the number of inferences they license. LP systems that license more inferences enjoy the possible reward of deriving more true and informative conclusions, but with this possible reward comes the risk of drawing more false or uninformative conclusions. In the first part of the paper, we present the four systems and extend each of them by theorems that allow one to compute almost-tight lower-probability-bounds for the conclusion of an inference, given lower-probability-bounds for its premises. In the second part of the paper, we investigate by means of computer simulations which of the four systems provides the best balance of reward versus risk. Our results suggest that system _Z_ offers the best balance.
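As a generic illustration of the kind of lower-probability-bound theorem at issue (not a result taken from the paper itself), consider the Cut rule of system P: from P(B|A) ≥ a and P(C|A∧B) ≥ b one may infer P(C|A) ≥ a·b, and this bound is tight. The inequality follows because P(C|A) ≥ P(B∧C|A) = P(B|A)·P(C|A∧B). The sketch below checks it on randomly sampled distributions over the eight truth assignments to A, B, C; the function name and sampling scheme are choices made here.

```python
# Monte Carlo check of the Cut-rule lower bound: P(C|A) >= P(B|A) * P(C|A∧B).
import random

def check_cut_bound(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        # Random distribution over the 8 worlds; world i has
        # A true iff i & 4, B true iff i & 2, C true iff i & 1.
        weights = [rng.random() for _ in range(8)]
        total = sum(weights)
        p = [w / total for w in weights]
        pA   = sum(p[i] for i in range(8) if i & 4)
        pAB  = sum(p[i] for i in range(8) if (i & 4) and (i & 2))
        pABC = sum(p[i] for i in range(8) if (i & 4) and (i & 2) and (i & 1))
        pAC  = sum(p[i] for i in range(8) if (i & 4) and (i & 1))
        if pA == 0 or pAB == 0:
            continue  # the conditional probabilities are undefined
        lhs = pAC / pA                    # P(C|A)
        rhs = (pAB / pA) * (pABC / pAB)   # P(B|A) * P(C|A∧B)
        if lhs < rhs - 1e-12:
            return False
    return True
```

Because P(B∧C|A) ≤ P(C|A) holds in every distribution, the check succeeds for all samples; simulations of the sort the paper describes probe how often such bounds are informative, not whether they hold.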
Many commentators on Kant’s views on idealism, such as Kemp Smith [1918], Strawson [1966] and, more recently, Guyer [1983 and 1987], begin by offering two choices. Either objects in space are nothing in themselves, or they exist independently of all knowers and all thought. After a fleeting, adolescent romance with idealism in the first edition of the Critique of Pure Reason, Kant is often said to emerge a mature realist in the second edition. It is said that for the later Kant there is a noumenal realm of things-in-themselves which metaphysically grounds the phenomena we experience. We cannot have knowledge of things-in-themselves for reality must be taken as it comes, transformed by human cognitive capacities. Kant’s empirical realism is thus a genuine realism since it involves an ontologically separate mind-independent reality; yet it is an empirical realism because reality is accessible only by means of its causal impact on the senses. Moreover, Kant is still an idealist since he holds that the nature of reality as experienced by humans is dependent on their cognitive structure.
By applying markedness to Semitic morphology in a rigorous manner, this book brings to bear a venerable linguistic construct on a persistent philological crux, in order to achieve deeper clarity in the structures and workings of Canaanite and Hebrew verbs.
This paper shows that, if the performance of the economy is independent of the identities of individuals, then many welfare criteria yield sets of optimal social states that are equal to the Pareto optimal set. This result is proved for income distributions and extended to more general social choice problems. If the independence condition holds, then the set of optimal states is invariant to the adoption of an anonymity axiom, and to the utility information available.