Aesthetics in a Multicultural Age examines a variety of significant multidisciplinary and multicultural topics within the subject of aesthetics. Addressing the vexed relation of the arts and criticism to current political and cultural concerns, the contributors to this volume attempt to bridge the two decades-old gap between scholars and critics who hold conflicting views of the purposes of art and criticism. By exploring some of the ways in which global migration and expanding ethnic diversity are affecting cultural productions and prompting reassessment of the nature and role of aesthetic discourse, this volume provides a new evaluation of aesthetic ideas and practices within contemporary arts and letters.
Richard Jeffrey is beyond dispute one of the most distinguished and influential philosophers working in the field of decision theory and the theory of knowledge. His work is distinctive in showing the interplay of epistemological concerns with probability and utility theory. Not only has he made use of standard probabilistic and decision theoretic tools to clarify concepts of evidential support and informed choice, he has also proposed significant modifications of the standard Bayesian position in order that it provide a better fit with actual human experience. Probability logic is viewed not as a source of judgment but as a framework for explaining the implications of probabilistic judgments and their mutual compatibility. This collection of essays spans a period of some 35 years and includes what have become some of the classic works in the literature. There is also one completely new piece, while in many instances Jeffrey includes afterthoughts on the older essays.
This brief paperback is designed for symbolic/formal logic courses. It features the tree method proof system developed by Jeffrey. The new edition contains many more examples and exercises and is reorganized for greater accessibility.
Logicism Lite counts number‐theoretical laws as logical for the same sort of reason for which physical laws are counted as empirical: because of the character of the data they are responsible to. In the case of number theory these are the data verifying or falsifying the simplest equations, which Logicism Lite counts as true or false depending on the logical validity or invalidity of first‐order argument forms in which no number‐theoretical notation appears.
Isaac Levi and I have different views of probability and decision making. Here, without addressing the merits, I will try to answer some questions recently asked by Levi (1985) about what my view is, and how it relates to his.
From Richard Bradley's book, Decision Theory with a Human Face (2017), we have selected two themes for discussion. The first is the Bolker-Jeffrey (BJ) theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility (EU) theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley’s foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley’s conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
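The two rules contrasted above admit a compact computational sketch. The following is only a minimal illustration, not the paper's formal framework: the toy outcome space, state names, and all numbers are invented for the example.

```python
# A minimal sketch of Bayes's and Jeffrey's rules on a finite outcome
# space represented as a dict mapping outcomes to probabilities.

def conditional(p, event):
    """Distribution conditioned on `event` (a set of outcomes)."""
    z = sum(p[w] for w in event)
    return {w: (p[w] / z if w in event else 0.0) for w in p}

def jeffrey_update(p, partition_probs):
    """Jeffrey's rule: `partition_probs` maps each cell (a frozenset of
    outcomes) to its newly assigned probability; cells partition the space."""
    new = {w: 0.0 for w in p}
    for cell, q in partition_probs.items():
        if q == 0.0:
            continue
        cond = conditional(p, cell)
        for w in p:
            new[w] += q * cond[w]
    return new

# Invented prior over {rain&cold, rain&warm, dry&cold, dry&warm}.
prior = {"rc": 0.2, "rw": 0.1, "dc": 0.3, "dw": 0.4}
rain, dry = frozenset({"rc", "rw"}), frozenset({"dc", "dw"})

# Jeffrey input: experience shifts P(rain) to 0.7, short of certainty.
post = jeffrey_update(prior, {rain: 0.7, dry: 0.3})

# Bayes's rule is the special case in which one cell gets probability 1.
bayes_post = jeffrey_update(prior, {rain: 1.0, dry: 0.0})
```

In this sketch "responsiveness" shows up as the posterior assigning exactly 0.7 to rain, and "conservativeness" as the within-rain (and within-dry) probability ratios being unchanged.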
Certain hypotheses cannot be directly confirmed for theoretical, practical, or moral reasons. For some of these hypotheses, however, there might be a workaround: confirmation based on analogical reasoning. In this paper we take up Dardashti, Hartmann, Thébault, and Winsberg’s (in press) idea of analyzing confirmation based on analogical inference Bayesian style. We identify three types of confirmation by analogy and show that Dardashti et al.’s approach can cover two of them. We then highlight possible problems with their model as a general approach to analogical inference and argue that these problems can be avoided by supplementing Bayesian update with Jeffrey conditionalization.
I show that David Lewis’s principal principle is not preserved under Jeffrey conditionalization. Using this observation, I argue that Lewis’s reason for rejecting the desire as belief thesis and Adams’s thesis applies also to his own principal principle. 1 Introduction 2 Adams’s Thesis, the Desire as Belief Thesis, and the Principal Principle 3 Jeffrey Conditionalization 4 The Principal Principle Is Not Preserved under Jeffrey Conditionalization 5 Inadmissible Experiences.
This essay explains Jeffrey Friedman's two fundamental and persistent philosophical errors concerning the libertarian conception of liberty and the lack of a “justification” of libertarianism. It is ironic that Friedman himself is thereby revealed to be guilty of both an “a priori” anti-libertarianism and an anti-libertarian “straddle.” Critical-rationalist, proactive-imposition-minimising libertarianism remains completely unchallenged by him.
Some arguments are good; others are not. How can we tell the difference? This article advances three proposals as a partial answer to this question. The proposals are keyed to arguments conditioned by different degrees of uncertainty: mild, where the argument’s premises are hedged with point-valued probabilities; moderate, where the premises are hedged with interval probabilities; and severe, where the premises are hedged with non-numeric plausibilities such as ‘very likely’ or ‘unconfirmed’. For mild uncertainty, the article proposes to apply a principle referred to as ‘Jeffrey’s rule’, for the principle is a generalization of Jeffrey conditionalization. For moderate uncertainty, the proposal is to extend Jeffrey’s rule for use with probability intervals. For severe uncertainty, the article proposes that even when lack of probabilistic information prevents the application of Jeffrey’s rule, the rule can be adapted to these conditions with the aid of a suitable plausibility measure. Together, the three proposals introduce an approach to argument evaluation that complements established frameworks for evaluating arguments: deductive soundness, informal logic, argumentation schemes, pragma-dialectics, and Bayesian inference. Nevertheless, this approach can be looked at as a generalization of the truth and validity conditions of the classical criterion for sound argumentation.
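The first two proposals (mild and moderate uncertainty) lend themselves to a short numerical sketch. This is one natural way to implement them, with invented numbers, not the article's own formalism: for a binary partition, Jeffrey's rule is linear in the uncertain input q, so the interval extension is attained at the interval's endpoints.

```python
def jeffrey_point(p_a_given_e, p_a_given_not_e, q):
    """Jeffrey's rule on a binary partition {E, not-E} with point-valued
    input q = new probability of E:  P'(A) = q P(A|E) + (1-q) P(A|not-E)."""
    return q * p_a_given_e + (1.0 - q) * p_a_given_not_e

def jeffrey_interval(p_a_given_e, p_a_given_not_e, q_lo, q_hi):
    """Interval extension for moderate uncertainty: the update is linear
    in q, so its bounds are reached at the endpoints of [q_lo, q_hi]."""
    ends = (jeffrey_point(p_a_given_e, p_a_given_not_e, q_lo),
            jeffrey_point(p_a_given_e, p_a_given_not_e, q_hi))
    return (min(ends), max(ends))

# Invented premise strengths: premise E is believed to degree 0.6-0.8.
band = jeffrey_interval(0.9, 0.2, 0.6, 0.8)   # (0.62, 0.76) up to rounding
```

The severe-uncertainty case, which replaces the interval with a qualitative plausibility measure, does not reduce to arithmetic this simple and is not sketched here.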
This paper discusses simultaneous belief updates. I argue here that modeling such belief updates using the Principle of Minimum Information can be regarded as applying Jeffrey conditionalization successively, and thus that, contrary to what many probabilists have thought, simultaneous belief updates can be successfully modeled by means of Jeffrey conditionalization.
Jonathan Weisberg has argued that Jeffrey Conditioning is inherently “anti-holistic.” By this he means, inter alia, that JC does not allow us to take proper account of after-the-fact defeaters for our beliefs. His central example concerns the discovery that the lighting in a room is red-tinted and the relationship of that discovery to the belief that a jelly bean in the room is red. Weisberg’s argument that the rigidity required for JC blocks the defeating role of the red-tinted light rests on the strong assumption that all posteriors within the distribution in this example are rigid on a partition over the proposition that the jelly bean is actually red. But individual JC updates of propositions do not require such a broad rigidity assumption. Jeffrey conditionalizers should consider the advantages of a modest project of targeted updating focused on particular propositions rather than seeking to update the entire distribution using one obvious partition. Although Weisberg’s example fails to show JC to be irrelevant or useless, other problems he raises for JC (the commutativity and inputs problems) remain and actually become more pressing when we recognize the important role of background information.
Sandra Field, Jeffrey Flynn, Stephen Macedo, Longxi Zhang, and Martin Powers discussed Powers’ book China and England: The Preindustrial Struggle for Social Justice in Word and Image at the American Philosophical Association’s 2020 Eastern Division meeting in Philadelphia. The panel was sponsored by the APA’s “Committee on Asian and Asian-American Philosophers and Philosophies” and organized by Brian Bruya.
In his introduction, Jeffrey Metzger states that “at some point in the past 20 or 30 years … Nietzsche’s name [became] no longer associated primarily with nihilism” (1). Metzger is pointing to the increasing contemporary scholarly interest in Nietzsche’s epistemology, naturalism, and metaethics. The worthy aim of this volume is to ask us to examine once again the underlying philosophical problem to which these views are a response, namely, nihilism. This volume helpfully reminds us that Nietzsche’s philosophical motivation still requires clarification, and that we can only fully understand Nietzsche’s particular views by grasping Nietzsche’s fundamental philosophical aims. As with so many edited volumes on...
Bayesian decision theory can be viewed as the core of psychological theory for idealized agents. To get a complete psychological theory for such agents, you have to supplement it with input and output laws. On a Bayesian theory that employs strict conditionalization, the input laws are easy to give. On a Bayesian theory that employs Jeffrey conditionalization, there appears to be a considerable problem with giving the input laws. However, Jeffrey conditionalization can be reformulated so that the problem disappears, and in fact the reformulated version is more natural and easier to work with on independent grounds.
There are cases of ineffable learning — i.e., cases where an agent learns something, but becomes certain of nothing that she can express — where it is rational to update by Jeffrey conditionalization. But there are likewise cases of ineffable learning where updating by Jeffrey conditionalization is irrational. In this paper, we first characterize a novel class of cases where it is irrational to update by Jeffrey conditionalization. Then we use the d-separation criterion to develop a causal understanding of when and when not to Jeffrey conditionalize that bars updating by Jeffrey conditionalization in these cases. Finally, we reflect on how the possibility of so-called “unfaithful” causal systems bears on the normative force of the causal updating norm that we advocate.
We continue the investigations initiated in the recent papers where Bayes logics have been introduced to study the general laws of Bayesian belief revision. In Bayesian belief revision a Bayesian agent revises his prior belief by conditionalizing the prior on some evidence using the Bayes rule. In this paper we take the more general Jeffrey formula as a conditioning device and study the corresponding modal logics that we call Jeffrey logics, focusing mainly on the countable case. The containment relations among these modal logics are determined and it is shown that the logics of Bayes and Jeffrey updating are very close. It is shown that the modal logic of belief revision determined by probabilities on a finite or countably infinite set of elementary propositions is not finitely axiomatizable. The significance of this result is that it clearly indicates that axiomatic approaches to belief revision might be severely limited.
Jeffrey conditionalization is a rule for updating degrees of belief in light of uncertain evidence. It is usually assumed that the partitions involved in Jeffrey conditionalization are finite and only contain positive-credence elements. But there are interesting examples, involving continuous quantities, in which this is not the case. (Q1) Can Jeffrey conditionalization be generalized to accommodate continuous cases? Meanwhile, several authors, such as Kenny Easwaran and Michael Rescorla, have been interested in Kolmogorov’s theory of regular conditional distributions (rcds) as a possible framework for conditional probability which handles probability-zero events. However, the theory faces a major shortcoming: it seems messy and ad hoc. (Q2) Is there some axiomatic theory which would justify and constrain the use of rcds, thus serving as a possible foundation for conditional probability? These two questions appear unrelated, but they are not, and this paper answers both. We show that when one appropriately generalizes Jeffrey conditionalization as in (Q1), one obtains a framework which necessitates the use of rcds. It is then a short step to develop a general theory which addresses (Q2), which we call the theory of extensions. The theory is a formal model of conditioning which recovers Bayesian conditionalization, Jeffrey conditionalization, and conditionalization via rcds as special cases.
Subjective Probability: The Real Thing is the last book written by the late Richard Jeffrey, a key proponent of the Bayesian interpretation of probability. Bayesians hold that probability is a mental notion: saying that the probability of rain is 0.7 is just saying that you believe it will rain to degree 0.7. Degrees of belief are themselves cashed out in terms of bets—in this case you consider 7:3 to be fair odds for a bet on rain. There are two extreme Bayesian positions. Strict subjectivists think that an agent can adopt whatever degrees of belief she likes, as long as they satisfy the axioms of probability. Thus your degree of belief in rain and degree of belief in no rain must sum to one but are otherwise unconstrained. At the other extreme, objectivists claim that an agent's background knowledge considerably narrows down the choice of appropriate degrees of belief. In particular, if you know only that the frequency of rain is 0.7 then you should believe it will rain to degree 0.7; if you know absolutely nothing about the weather then you should set your degree of belief in rain to be 0.5; in neither of these cases is there room for subjective choice of degree of belief. In this book, Jeffrey advocates what is sometimes called empirically-based subjectivism, a position that lies between the two extremes of strict subjectivism and objectivism. According to this position, knowledge of frequencies constrains degree of belief, but lack of knowledge does not impose any constraints, so that if you know nothing about the weather you may adopt any degree of belief in rain you like. The aim of the book is not so much to justify this point of view as to provide a comprehensive exposition of probability theory from the...
Studies of categorical induction typically examine how belief in a premise (e.g., Falcons have an ulnar artery) projects on to a conclusion (e.g., Robins have an ulnar artery). We study induction in cases in which the premise is uncertain (e.g., There is an 80% chance that falcons have an ulnar artery). Jeffrey's rule is a normative model for updating beliefs in the face of uncertain evidence. In three studies we tested the descriptive validity of Jeffrey's rule and a related probability theorem, the rule of total probability. Although these rules provided good approximations to mean judgments in some cases, the results from regression and correlation analyses suggest that participants focus on the parts of these rules that are associated with the highest overall probability. We relate our findings to rational models of judgment.
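The normative benchmark tested in these studies can be written out directly. Only the 80% premise probability comes from the example above; the two conditional projection strengths below are invented for illustration.

```python
def uncertain_induction(p_c_given_e, p_c_given_not_e, p_e):
    """Jeffrey's rule / rule of total probability for a single uncertain
    premise E and conclusion C: P(C) = P(C|E) P(E) + P(C|~E) (1 - P(E))."""
    return p_c_given_e * p_e + p_c_given_not_e * (1.0 - p_e)

# An 80% chance that falcons have an ulnar artery; the conditional
# probabilities 0.9 and 0.3 are hypothetical projection judgments.
p_robins = uncertain_induction(p_c_given_e=0.9, p_c_given_not_e=0.3, p_e=0.8)
```

The finding reported above amounts to participants over-weighting the first summand (the high-probability branch) relative to this benchmark.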
It has been argued that if the rigidity condition is satisfied, a rational agent operating with uncertain evidence should update her subjective probabilities by Jeffrey conditionalization (JC) or else a series of bets resulting in a sure loss could be made against her. We show, however, that even if the rigidity condition is satisfied, it is not always safe to update probability distributions by JC, because there exist sequences of non-misleading uncertain observations such that it may be foreseen that an agent who updates her subjective probabilities by JC will end up nearly certain that a false hypothesis is true. We analyze the features of JC that lead to this problem, specify the conditions in which it arises and respond to potential objections.
Suppose that several individuals who have separately assessed prior probability distributions over a set of possible states of the world wish to pool their individual distributions into a single group distribution, while taking into account jointly perceived new evidence. They have the option of first updating their individual priors and then pooling the resulting posteriors or first pooling their priors and then updating the resulting group prior. If the pooling method that they employ is such that they arrive at the same final distribution in both cases, the method is said to be externally Bayesian, a property first studied by Madansky. We show that a pooling method for discrete distributions is externally Bayesian if and only if it commutes with Jeffrey conditioning, parameterized in terms of certain ratios of new to old odds, as in Wagner, rather than in terms of the posterior probabilities of members of the disjoint family of events on which such conditioning originates.
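The characterization can be illustrated, though of course not proved, with a small computation. Geometric (log-linear) pooling is a standard example of an externally Bayesian method, and the Jeffrey revision is parameterized by new-to-old odds ratios as the theorem requires; every distribution, weight, and factor below is invented.

```python
def normalize(p):
    z = sum(p.values())
    return {w: v / z for w, v in p.items()}

def geometric_pool(dists, weights):
    """Log-linear pooling: pooled(w) is proportional to the weighted
    geometric mean prod_i p_i(w) ** weights[i]."""
    pooled = {}
    for w in dists[0]:
        prod = 1.0
        for p, lam in zip(dists, weights):
            prod *= p[w] ** lam
        pooled[w] = prod
    return normalize(pooled)

def jeffrey_by_odds(p, factors):
    """Jeffrey conditioning parameterized by a new:old odds ratio per
    partition cell: scale each cell by its ratio, then renormalize."""
    return normalize({w: p[w] * f for cell, f in factors.items() for w in cell})

p_alice = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}
p_bob = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
weights = [0.5, 0.5]
evidence = {frozenset({"a", "b"}): 2.0, frozenset({"c", "d"}): 1.0}

update_then_pool = geometric_pool(
    [jeffrey_by_odds(p_alice, evidence), jeffrey_by_odds(p_bob, evidence)],
    weights)
pool_then_update = jeffrey_by_odds(geometric_pool([p_alice, p_bob], weights),
                                   evidence)
# The two orders yield the same group posterior.
```

Because the odds ratios multiply each outcome pointwise and geometric pooling is multiplicative, the two operations commute up to normalization, which is exactly the external-Bayesianity property.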
A very important event took place on January 15, 2017. On that day, the Jeffrey Beall blog was silently, and suddenly, shut down by Beall himself. A profoundly divisive and controversial site, the Beall blog represented an existential threat to those journals and publishers that were listed there. On the other hand, the Beall blog was a ray of hope to critics of bad publishing practices that a culture of public shaming was perhaps the only way to root out those journals—and their editors—and publishers who did not respect basic publishing ethical principles and intrinsic academic values. While members of the former group vilified Beall and his blog, members of the latter camp tried to elevate it to the level of policy. Split by extreme polar forces, for reasons still unknown to the public, Beall deliberately shut down his blog, causing some academic chaos among global scholars, including within the open access movement.
Oaksford & Chater (O&C) begin in the halfway Bayesian house of assuming that minor premises in conditional inferences are certain. We demonstrate that this assumption is a serious limitation. They additionally suggest that appealing to Jeffrey's rule could make their approach more general. We present evidence that this rule is not limited enough to account for actual probability judgements.
Jeffrey conditioning tells an agent how to update her priors so as to grant a given probability to a particular event. Weighted averaging tells an agent how to update her priors on the basis of testimonial evidence, by changing to a weighted arithmetic mean of her priors and another agent’s priors. We show that, in their respective settings, these two seemingly so different updating rules are axiomatized by essentially the same invariance condition. As a by-product, this sheds new light on the question of how weighted averaging should be extended to deal with cases when the other agent reveals only parts of her probability distribution. The combination of weighted averaging and Jeffrey conditioning is a comprehensive updating rule to deal with such cases, which is again axiomatized by invariance under embedding. We conclude that, even though one may dislike Jeffrey conditioning or weighted averaging, the two make a natural pair when a policy for partial testimonial evidence is needed.
Can we respond to the charge that human rights are a Western product without relinquishing human rights altogether? Can we be sensitive not only to the dominant voices in the non-Western world but also to the "margins of the margins"? Can the academic discussion on human rights be more attuned not only to scholarly arguments but also to "human rights activism and struggles for human rights"? Can it also be attuned to the fact of the new "globalizing modernity"? In Reframing the Intercultural Dialogue on Human Rights, Jeffrey Flynn thoughtfully addresses these hugely important and challenging...
Jeffrey Church's book Nietzsche's Culture of Humanity is a flawed but nonetheless significant contribution to the still fairly scant Anglophone literature on Nietzsche's early works. The book argues for two major intertwined theses and a third, less central one. The first thesis is that Nietzsche distinguishes between two types or layers of culture: national culture, which Nietzsche characterizes in §1 of the first essay of UM as "unity of artistic style in all the expressions of the life of a people," and cosmopolitan culture, which consists in the "republic of genius" that stretches across nations and eras. Church's second thesis, advertised in the book's subtitle, is that the early Nietzsche is not as much of an...
A glance at the sky raises my probability of rain to .7. As it happens, the conditional probabilities of each state given rain remain the same, and similarly for their conditional probabilities given no rain. As Jeffrey (1983, Ch. 11) points out, my new distribution P2 is therefore fixed by the law of total probability. For example, P2(RC) = P2(RC | R)P2(R) + P2(RC | ¬R)P2(¬R).
Richard Jeffrey's generalization of Bayes' rule of conditioning follows, within the theory of belief functions, from Dempster's rule of combination and the rule of minimal extension. Both Jeffrey's rule and the theory of belief functions can and should be construed constructively, rather than normatively or descriptively. The theory of belief functions gives a more thorough analysis of how beliefs might be constructed than Jeffrey's rule does. The inadequacy of Bayesian conditioning is much more general than Jeffrey's examples of uncertain perception might suggest. The "parameter α" that Hartry Field has introduced into Jeffrey's rule corresponds to the "weight of evidence" of the theory of belief functions.
A simple rule of probability revision ensures that the final result of a sequence of probability revisions is undisturbed by an alteration in the temporal order of the learning prompting those revisions. This Uniformity Rule dictates that identical learning be reflected in identical ratios of certain new-to-old odds, and is grounded in the old Bayesian idea that such ratios represent what is learned from new experience alone, with prior probabilities factored out. The main theorem of this paper includes as special cases Field's theorem on commuting probability-kinematical revisions and the equivalence of two strategies for generalizing Jeffrey's solution to the old evidence problem to the case of uncertain old evidence and probabilistic new explanation.
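The commutation behind the Uniformity Rule can be checked numerically. A sketch with invented numbers: when each revision is specified by a fixed ratio of new to old odds per partition cell, the two revisions act by pointwise multiplication, so their temporal order cannot matter.

```python
def jeffrey_revision(p, factors):
    """Probability-kinematical revision specified by new-to-old odds ratios
    on a partition: multiply each cell by its ratio and renormalize.
    Within-cell ratios are untouched, so rigidity holds automatically."""
    new = {w: p[w] * f for cell, f in factors.items() for w in cell}
    z = sum(new.values())
    return {w: v / z for w, v in new.items()}

# Invented prior over a four-outcome space, with two binary partitions.
prior = {"ab": 0.1, "aB": 0.2, "Ab": 0.3, "AB": 0.4}
A, notA = frozenset({"ab", "aB"}), frozenset({"Ab", "AB"})
B, notB = frozenset({"ab", "Ab"}), frozenset({"aB", "AB"})

rev_A = {A: 3.0, notA: 1.0}   # one experience triples the odds on A
rev_B = {B: 1.0, notB: 2.0}   # another doubles the odds against B

after_AB = jeffrey_revision(jeffrey_revision(prior, rev_A), rev_B)
after_BA = jeffrey_revision(jeffrey_revision(prior, rev_B), rev_A)
# Same final distribution in either temporal order.
```

Had the same two experiences been specified by their posterior partition probabilities instead of odds ratios, the two orders would in general disagree, which is the point of parameterizing learning by new-to-old odds.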
This paper is partly a tribute to Richard Jeffrey, partly a reflection on some of his writings, The Logic of Decision in particular. I begin with a brief biography and some fond reminiscences of Dick. I turn to some of the key tenets of his version of Bayesianism. All of these tenets are deployed in my discussion of his response to the St. Petersburg paradox, a notorious problem for decision theory that involves a game of infinite expectation. Prompted by that paradox, I conclude with some suggestions of avenues for future research.
Jeffrey Gray’s Consciousness: Creeping up on the Hard Problem will be enjoyed by everyone interested in consciousness. Gray, a neuropsychologist, eloquently summarizes significant experimental results on consciousness and, more importantly, explains both how these results interrelate and how they constrain potential theories of consciousness. He also uses these results to build a novel, fascinating theory of what consciousness does and does not do. Throughout the work Gray’s accessible presentation remains deeply respectful of psychologists’, neuroscientists’, and philosophers’ approaches to consciousness. In this respect, Gray’s book is an ideal work for an interdisciplinary audience. Sadly, Gray died three months before the publication of this excellent work.
This paper provides a conceptual exploration of the implications of Jeffrey’s rule of belief revision for accounting for rule-following behavior in a game-theoretic framework. Jeffrey’s rule reflects the fact that in many cases learning something new does not imply that one has full assurance about the true content of the information. In other words, the same information may be both perceived and interpreted in several different ways. I develop an account of rule-following behavior according to which, in the context of strategic interactions, following a rule is defined by two conditions. First, agents must frame the interaction in a sufficiently similar way and be aware of the same salient properties, i.e. they must have the same partition of the event. Second, they must ascribe to others the same revised probabilities over what they take to be the common partition. In a game-theoretic framework, this also indicates that rule-following behavior cannot be identified merely with the existence of a common prior.
Jeffrey Stout addresses two of the main criticisms of liberal democracy by its contemporary neotraditionalist Christian critics: that liberal democracy is destructive of social tradition, and thereby of virtue in the citizenry, and that liberal democracy is inherently secular, committed to expunging religious voices from the public arena. I judge that Stout effectively answers these charges: liberal democracy has its own tradition, it cultivates the virtues relevant to that, and it is not inherently hostile to piety. What Stout does not do, I suggest, is take the next step of showing, positively, that Christianity can and should affirm the substance of liberal democratic society. This is due, in good measure, to the fact that Stout never tells us, except in off-hand comments, what he takes the substance of liberal democracy to be. And this, in turn, is due to his way of employing pragmatism: he uses pragmatism to give an account of human society generally, not of liberal democratic society. I raise some questions about the general account that pragmatism gives of human society, and thus about the account that it would give of liberal democracy.
Jeffrey Howard has recently argued that entrapment and similar phenomena are wrongful, and wrong the induced agent, because they violate a regulative obligation of respect for the first moral power (FMP). According to Howard, this obligation grounds a duty (DUTY) not to foreseeably increase the likelihood that another agent acts wrongly. While I accept the existence of the more fundamental obligation, I try to show that it doesn't support DUTY. Therefore, it doesn't support the wrongfulness of entrapment and similar phenomena. I do this by offering a more nuanced account of FMP's value, and one more attuned to certain liberal thoughts about agency. I then suggest a fairly minimalist picture of what respect for FMP involves, but close in a constructive spirit by sketching an alternative argument for DUTY based on the telos of FMP.
In Zadig, published in 1748, Voltaire wrote of “the great principle that it is better to run the risk of sparing the guilty than to condemn the innocent.” At about the same time, Blackstone noted approvingly that “the law holds that it is better that ten guilty persons escape, than that one innocent suffer.” In 1824, Thomas Fielding cited the principle as an Italian proverb and a maxim of English law. John Stuart Mill endorsed it in an address to Parliament in 1868. General acceptance of this maxim continues into our own period, yet it is difficult to find systematic attempts to defend the maxim. It is treated as a truism in no need of defense. But the principle within it is not at all obvious; and since it undergirds many of our criminal justice policies, we should be sure that it is justifiable. First, however, we must clarify what the principle means. (shrink)