By illustrating the presence and scope of the bodhisattva ideal in Theravāda Buddhist theory and practice, this article shows that some of the distinctions used to separate Mahāyāna Buddhism from Hīnayāna Buddhism are problematic. In particular, it calls into question the commonly held theoretical model that postulates that the goal of Mahāyāna practitioners is to become buddhas by following the path of the bodhisattva (bodhisattva-yāna), whereas the goal of Hīnayāna practitioners is to become arahants by following the path of the hearers, the Buddha's disciples (śrāvaka-yāna).
This brief paperback is designed for symbolic/formal logic courses. It features the tree method proof system developed by Jeffrey. The new edition contains many more examples and exercises and is reorganized for greater accessibility.
Providing another key contribution to the immensely popular field of law and economics, this book, written by the doyen of the history of economic thought in the US, explores the dynamic relationship between economics, law and polity. Combining a selection of old and new essays by Warren J. Samuels that chart a number of key themes, it provides an important commentary on the development of an academic field and demonstrates how policy is structured and manipulated by human social construction. The areas covered include: the role of manufactured belief; power; the nature and sources of rights; the construction of markets by firms and governments; the problem of continuity and change, in the form of the question of the selectively defined status quo and its status; the absolutist character of government, rights, markets and legal principles; and the accepted ideational structure of law. The Legal-Economic Nexus is an essential read for both economists and legal professionals, as well as those researching the history of economic thought and the social construction of law.
Richard Jeffrey is beyond dispute one of the most distinguished and influential philosophers working in the field of decision theory and the theory of knowledge. His work is distinctive in showing the interplay of epistemological concerns with probability and utility theory. Not only has he made use of standard probabilistic and decision-theoretic tools to clarify concepts of evidential support and informed choice, he has also proposed significant modifications of the standard Bayesian position in order that it provide a better fit with actual human experience. Probability logic is viewed not as a source of judgment but as a framework for explaining the implications of probabilistic judgments and their mutual compatibility. This collection of essays spans a period of some 35 years and includes what have become some of the classic works in the literature. There is also one completely new piece, while in many instances Jeffrey includes afterthoughts on the older essays.
What are the elements from which the human mind is composed? What structures make up our _cognitive architecture?_ One of the most recent and intriguing answers to this question comes from the newly emerging interdisciplinary field of evolutionary psychology. Evolutionary psychologists defend a _massively modular_ conception of mental architecture which views the mind (including those parts responsible for such ‘central processes’ as belief revision and reasoning) as composed largely or perhaps even entirely of innate, special-purpose computational mechanisms or ‘modules’ that have been shaped by natural selection to handle the sorts of recurrent information-processing problems that confronted our hunter-gatherer forebears (Cosmides and Tooby, 1992; Sperber, 1994; Samuels, 1998a).
This book argues that we have moved into a new cultural period, automodernity, which represents a social, psychological, and technological reaction to postmodernity. In fact, by showing how individual autonomy is now being generated through technological and cultural automation, Samuels posits that we must rethink modernity and postmodernity. Part of this rethinking entails stressing how the progressive political aspects of postmodernism need to be separated from the aesthetic consumption of differences in automodernity. Choosing culturally relevant studies of The Matrix, Grand Theft Auto, Eminem and Jurassic Park, he interprets these media through the lens of eminent theorists like Slavoj Žižek, Fredric Jameson, and Henry Jenkins. Ultimately, he argues that what defines postmodernity is the stress on social construction, secular humanism, and progressive social movements that challenge the universality and neutrality of modern reason.
A radical and original study, The Political Psyche joins depth psychology with politics in a way that fully reflects the discoveries made in analysis and therapy. In an attempt to show that an inner journey and a desire to fashion something practical out of passionate political convictions are linked projects, author Andrew Samuels brings an acute psychological perspective to political issues such as the distribution of wealth, the market economy, Third World development, environmentalism, and nationalism, expanding and enhancing our conception of "the political". However, keeping true to his aim of creating a two-way dialogue between depth psychology and politics, Samuels also lays bare the hidden politics of the father, the male body, and men's issues in general. The Political Psyche does not collapse politics and psychology together, nor is Samuels unaware of the troubled relationship of depth psychology to the political events of the century. In the book he presents his acclaimed and cathartic work on Jung, anti-Semitism and the Nazis to the wider public. The text employs a political analysis to shed a fascinating light on clinical work. Samuels conducted a large-scale international survey of analysts and psychotherapists concerning what they do when their patients/clients bring overtly political material into the clinical setting. The results, including what the respondents reveal about their own political attitudes, destabilize any preconceived notions about the political sensitivity of analysis and psychotherapy.
Among the most pervasive and fundamental assumptions in cognitive science is that the human mind (or mind-brain) is a mechanism of some sort: a physical device composed of functionally specifiable subsystems. On this view, functional decomposition – the analysis of the overall system into functionally specifiable parts – becomes a central project for a science of the mind, and the resulting theories of cognitive architecture essential to our understanding of human psychology.
In recent years evolutionary psychologists have developed and defended the Massive Modularity Hypothesis, which maintains that our cognitive architecture—including the part that subserves ‘central processing’—is largely or perhaps even entirely composed of innate, domain-specific computational mechanisms or ‘modules’. In this paper I argue for two claims. First, I show that the two main arguments that evolutionary psychologists have offered for this general architectural thesis fail to provide us with any reason to prefer it to a competing picture of the mind which I call the Library Model of Cognition. Second, I argue that this alternative model is compatible with the central theoretical and methodological commitments of evolutionary psychology. Thus I argue that, at present, the endorsement of the Massive Modularity Hypothesis by evolutionary psychologists is both unwarranted and unmotivated.
Though nativist hypotheses have played a pivotal role in the development of cognitive science, it remains exceedingly obscure how they—and the debates in which they figure—ought to be understood. The central aim of this paper is to provide an account which addresses this concern and in so doing: a) makes sense of the roles that nativist theorizing plays in cognitive science and, moreover, b) explains why it really matters to the contemporary study of cognition. I conclude by outlining a range of further implications of this account for current debate in cognitive science.
Of course, the conclusion to draw is not that innateness claims are trivially false or that they cannot be characterized; rather, [the notion] has a more specific role to play in the development of innate cognitive structure. In particular, a common claim...
Isaac Levi and I have different views of probability and decision making. Here, without addressing the merits, I will try to answer some questions recently asked by Levi (1985) about what my view is, and how it relates to his.
Logicism Lite counts number-theoretical laws as logical for the same sort of reason for which physical laws are counted as empirical: because of the character of the data they are responsible to. In the case of number theory these are the data verifying or falsifying the simplest equations, which Logicism Lite counts as true or false depending on the logical validity or invalidity of first-order argument forms in which no number-theoretical notation appears.
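To illustrate the idea (a reconstruction of the standard sort of example, not a quotation from the paper): the equation 2 + 3 = 5 can be cashed out as the validity of a first-order argument form in which no arithmetical notation appears, only quantifiers and identity.

```latex
% Numerically definite quantifiers abbreviate pure first-order formulas
% with identity; e.g. "there are exactly two Fs":
\exists_{2}x\, Fx \;\equiv\; \exists x \exists y\, \bigl( x \neq y \wedge Fx \wedge Fy
  \wedge \forall z\, (Fz \rightarrow z = x \vee z = y) \bigr)

% "2 + 3 = 5" then corresponds to the first-order validity:
% exactly 2 Fs, exactly 3 Gs, and no F is a G, entail exactly 5 (F-or-G)s.
\exists_{2}x\, Fx \;\wedge\; \exists_{3}x\, Gx \;\wedge\; \neg\exists x\, (Fx \wedge Gx)
  \;\models\; \exists_{5}x\, (Fx \vee Gx)
```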
Over the past few decades, reasoning and rationality have been the focus of enormous interdisciplinary attention, attracting interest from philosophers, psychologists, economists, statisticians and anthropologists, among others. The widespread interest in the topic reflects the central status of reasoning in human affairs. But it also suggests that there are many different though related projects and tasks which need to be addressed if we are to attain a comprehensive understanding of reasoning.
During the last 25 years, researchers studying human reasoning and judgment in what has become known as the “heuristics and biases” tradition have produced an impressive body of experimental work which many have seen as having “bleak implications” for the rationality of ordinary people (Nisbett and Borgida 1975). According to one proponent of this view, when we reason about probability we fall victim to “inevitable illusions” (Piattelli-Palmarini 1994). Other proponents maintain that the human mind is prone to “systematic deviations from rationality” (Bazerman & Neale 1986) and is “not built to work by the rules of probability” (Gould 1992). It has even been suggested that human beings are “a species that is uniformly probability-blind” (Piattelli-Palmarini 1994). This provocative and pessimistic interpretation of the experimental findings has been challenged from many different directions over the years. One of the most recent and energetic of these challenges has come from the newly emerging field of evolutionary psychology, where it has been argued that it’s singularly implausible to claim that our species would have evolved with no “instinct for probability” and, hence, be “blind to chance” (Pinker 1997, 351). Though evolutionary psychologists concede that it is possible to design experiments that “trick our probability...
"To some people, life is very simple . . . no shadings and grays, all blacks and whites. . . . Now, others of us find that good, bad, right, wrong, are many-sided, complex things. We try to see every side; but the more we see, the less sure we are.".
From a point of view like de Finetti's, what is the judgmental reality underlying the objectivistic claim that a physical magnitude X determines the objective probability that a hypothesis H is true? When you have definite conditional judgmental probabilities for H given the various unknown values of X, a plausible answer is sufficiency, i.e., invariance of those conditional probabilities as your probability distribution over the values of X varies. A different answer, in terms of conditional exchangeability, is offered for use when such definite conditional probabilities are absent.
The approach to decision theory floated in my 1965 book is reviewed (I), challenged in various related ways (II–V) and defended, first ad hoc (II–IV) and then by a general argument of Ellery Eells's (VI). Finally, causal decision theory (in a version sketched in VII) is exhibited as a special case of my 1965 theory, according to the Eellsian argument.
This paper discusses simultaneous belief updates. I argue that modeling such updates using the Principle of Minimum Information can be regarded as applying Jeffrey conditionalization successively, and thus that, contrary to what many probabilists have thought, simultaneous belief updates can be successfully modeled by means of Jeffrey conditionalization.
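For readers unfamiliar with the rule: a Jeffrey update redistributes probability over a partition {E_i} while holding the conditional probabilities within each cell fixed, so P_new(A) = Σ_i P_old(A | E_i) · P_new(E_i). A minimal sketch in Python (illustrative only; the state names and numbers are invented):

```python
# Jeffrey conditionalization over a finite state space.
# A prior is a dict mapping states to probabilities; cell_of assigns each
# state to a partition cell; new_cell_probs gives the revised cell probabilities.

def jeffrey_update(prior, cell_of, new_cell_probs):
    # Current probability of each partition cell.
    old_cell_probs = {}
    for state, p in prior.items():
        cell = cell_of[state]
        old_cell_probs[cell] = old_cell_probs.get(cell, 0.0) + p
    # Rescale within each cell: conditional probabilities stay rigid.
    return {
        state: p * new_cell_probs[cell_of[state]] / old_cell_probs[cell_of[state]]
        for state, p in prior.items()
    }

# Example: states record rain (R / ~R) and a picnic being cancelled (C / ~C).
prior = {("R", "C"): 0.30, ("R", "~C"): 0.10,
         ("~R", "C"): 0.15, ("~R", "~C"): 0.45}
cell_of = {s: s[0] for s in prior}  # partition by the rain coordinate
posterior = jeffrey_update(prior, cell_of, {"R": 0.7, "~R": 0.3})
print(posterior)  # within-cell ratios (e.g. C vs ~C given R) are unchanged
```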
Jonathan Weisberg has argued that Jeffrey Conditioning is inherently “anti-holistic.” By this he means, inter alia, that JC does not allow us to take proper account of after-the-fact defeaters for our beliefs. His central example concerns the discovery that the lighting in a room is red-tinted and the relationship of that discovery to the belief that a jelly bean in the room is red. Weisberg’s argument that the rigidity required for JC blocks the defeating role of the red-tinted light rests on the strong assumption that all posteriors within the distribution in this example are rigid on a partition over the proposition that the jelly bean is actually red. But individual JC updates of propositions do not require such a broad rigidity assumption. Jeffrey conditionalizers should consider the advantages of a modest project of targeted updating focused on particular propositions rather than seeking to update the entire distribution using one obvious partition. Although Weisberg’s example fails to show JC to be irrelevant or useless, other problems he raises for JC (the commutativity and inputs problems) remain and actually become more pressing when we recognize the important role of background information.
Bayesian decision theory can be viewed as the core of psychological theory for idealized agents. To get a complete psychological theory for such agents, you have to supplement it with input and output laws. On a Bayesian theory that employs strict conditionalization, the input laws are easy to give. On a Bayesian theory that employs Jeffrey conditionalization, there appears to be a considerable problem with giving the input laws. However, Jeffrey conditionalization can be reformulated so that the problem disappears, and in fact the reformulated version is more natural and easier to work with on independent grounds.
Richard Jeffrey's generalization of Bayes' rule of conditioning follows, within the theory of belief functions, from Dempster's rule of combination and the rule of minimal extension. Both Jeffrey's rule and the theory of belief functions can and should be construed constructively, rather than normatively or descriptively. The theory of belief functions gives a more thorough analysis of how beliefs might be constructed than Jeffrey's rule does. The inadequacy of Bayesian conditioning is much more general than Jeffrey's examples of uncertain perception might suggest. The "parameter α" that Hartry Field has introduced into Jeffrey's rule corresponds to the "weight of evidence" of the theory of belief functions.
Since the beginning of the eighties of the present century, a circle of relatively young American sociologists who are followers of Jeffrey Alexander has been making energetic and spectacular efforts to supply sociology with a uniform and comprehensive theoretical framework by continuing Talcott Parsons' lifework. The present article is an appreciation of Alexander's achievements in the justification of a general sociological theory (especially a theory of action and social order) while pointing to objections that can be raised against the character of his theory. A scrutiny of Alexander's metatheoretical deliberations and of his interpretations of sociological classics such as Marx, Durkheim, Weber, and Parsons reveals that Alexander's metatheoretical frame is not flexible enough to actually reconstruct the problem situation of the classics. Pointers are given toward a theory of action that is not subject to the antinomy of utilitarianism and normativism, and that is therefore more adequate and appropriate to the heritage of the sociological classics, both from a theoretical and an interpretative angle.
In Tsuji 1997 the concept of Jeffrey-Keynes algebras was introduced in order to construct a paraconsistent theory of decision under uncertainty. In the present paper we show that these algebras can be used to develop a theory of decision under uncertainty that measures the degree of belief in the quasi (or partial) truth of the propositions. As applications of this new theory of decision, we use it to analyze Popper's paradox of ideal evidence and to indicate a possible way of formalizing Keynes' theory of economic action.
This paper is partly a tribute to Richard Jeffrey, partly a reflection on some of his writings, The Logic of Decision in particular. I begin with a brief biography and some fond reminiscences of Dick. I turn to some of the key tenets of his version of Bayesianism. All of these tenets are deployed in my discussion of his response to the St. Petersburg paradox, a notorious problem for decision theory that involves a game of infinite expectation. Prompted by that paradox, I conclude with some suggestions of avenues for future research.
Suppose that several individuals who have separately assessed prior probability distributions over a set of possible states of the world wish to pool their individual distributions into a single group distribution, while taking into account jointly perceived new evidence. They have the option of (i) first updating their individual priors and then pooling the resulting posteriors or (ii) first pooling their priors and then updating the resulting group prior. If the pooling method that they employ is such that they arrive at the same final distribution in both cases, the method is said to be externally Bayesian, a property first studied by Madansky (1964). We show that a pooling method for discrete distributions is externally Bayesian if and only if it commutes with Jeffrey conditioning, parameterized in terms of certain ratios of new to old odds, as in Wagner (2002), rather than in terms of the posterior probabilities of members of the disjoint family of events on which such conditioning originates.
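The flavor of the result can be seen in the best-known externally Bayesian method, the geometric (log-linear) pool. The sketch below numerically checks that pooling and updating commute when the update multiplies every opinion by a shared likelihood and renormalizes, a special case of the Jeffrey-conditioning parameterization discussed in the paper; the names and numbers are invented for illustration:

```python
# External Bayesianity of the geometric pool, checked numerically.

def normalize(p):
    total = sum(p.values())
    return {s: v / total for s, v in p.items()}

def geometric_pool(dists, weights):
    # Weighted geometric mean of the individual probabilities, renormalized.
    states = dists[0].keys()
    pooled = {s: 1.0 for s in states}
    for dist, w in zip(dists, weights):
        for s in states:
            pooled[s] *= dist[s] ** w
    return normalize(pooled)

def bayes_update(dist, likelihood):
    return normalize({s: dist[s] * likelihood[s] for s in dist})

priors = [{"a": 0.5, "b": 0.3, "c": 0.2},
          {"a": 0.2, "b": 0.5, "c": 0.3}]
weights = [0.6, 0.4]                         # must sum to 1
likelihood = {"a": 2.0, "b": 1.0, "c": 0.5}  # jointly perceived evidence

update_then_pool = geometric_pool([bayes_update(p, likelihood) for p in priors], weights)
pool_then_update = bayes_update(geometric_pool(priors, weights), likelihood)
assert all(abs(update_then_pool[s] - pool_then_update[s]) < 1e-12 for s in "abc")
```

The check works because the shared likelihood factors out of the weighted geometric mean when the weights sum to one, so it makes no difference whether it is applied before or after pooling.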
Jeffrey Stout addresses two of the main criticisms of liberal democracy by its contemporary neotraditionalist Christian critics: that liberal democracy is destructive of social tradition, and thereby of virtue in the citizenry, and that liberal democracy is inherently secular, committed to expunging religious voices from the public arena. I judge that Stout effectively answers these charges: liberal democracy has its own tradition, it cultivates the virtues relevant to that tradition, and it is not inherently hostile to piety. What Stout does not do, I suggest, is take the next step of showing, positively, that Christianity can and should affirm the substance of liberal democratic society. This is due, in good measure, to the fact that Stout never tells us, except in off-hand comments, what he takes the substance of liberal democracy to be. And this, in turn, is due to his way of employing pragmatism: he uses pragmatism to give an account of human society generally, not of liberal democratic society. I raise some questions about the general account that pragmatism gives of human society, and thus about the account that it would give of liberal democracy.
A glance at the sky raises my probability of rain to .7. As it happens, the conditional probabilities of each state given rain remain the same, and similarly for their conditional probabilities given no rain. As Jeffrey (1983, Ch. 11) points out, my new distribution P2 is therefore fixed by the law of total probability. For example, P2(RC) = P2(RC | R)P2(R) + P2(RC | R̄)P2(R̄).
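In general, for any proposition A whose conditional probabilities on the rain partition are rigid, the update is fixed as follows (the conditional-probability values here are invented for concreteness):

```latex
P_2(A) = P_1(A \mid R)\,P_2(R) + P_1(A \mid \bar{R})\,P_2(\bar{R})
% e.g. with rigid conditionals P_1(A | R) = 0.6 and P_1(A | \bar{R}) = 0.2:
P_2(A) = 0.6 \times 0.7 + 0.2 \times 0.3 = 0.48
```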
Jeffrey Tillman is perceptive in noticing that certain Protestant theologians have used evolutionary theory to become more sympathetic to Roman Catholic views of Christian love. But he is incorrect in saying that these formulations deemphasize a place for self-sacrifice in Christian love. Christian love defined as a strenuous equal-regard for both other and self also requires sacrificial efforts to restore love as equal-regard when finitude and sin undermine genuine mutuality and community.
To the Editor: It was with great interest that our Canadian Palliative Sedation Therapy Guideline working group read Jeffrey Berger's recent article ("Rethinking Guidelines for the Use of Palliative Sedation," May-June 2010). Given our own group's efforts to develop national guidelines, we have rethought the issue of palliative sedation therapy several times over the past year. The use of clear and concise definitions is fundamental to the development of any consensus guidelines on this topic. In the article, the term "palliative sedation to unconsciousness," or PSU, implies the concerning assumption that sedation will knowingly be to unconsciousness in the palliative case under consideration. This conflicts with...
Jeffrey (1983) proposed a generalization of conditioning as a means of updating probability distributions when new evidence drives no event to certainty. His rule requires the stability of certain conditional probabilities through time. We tested this assumption (“invariance”) from the psychological point of view. In Experiment 1 participants offered probability estimates for events in Jeffrey’s candlelight example. Two further scenarios were investigated in Experiment 2, one in which invariance seems justified, the other in which it does not. Results were in rough conformity with Jeffrey’s (1983) principle.
A simple rule of probability revision ensures that the final result of a sequence of probability revisions is undisturbed by an alteration in the temporal order of the learning prompting those revisions. This Uniformity Rule dictates that identical learning be reflected in identical ratios of certain new-to-old odds, and is grounded in the old Bayesian idea that such ratios represent what is learned from new experience alone, with prior probabilities factored out. The main theorem of this paper includes as special cases (i) Field's theorem on commuting probability-kinematical revisions and (ii) the equivalence of two strategies for generalizing Jeffrey's solution to the old evidence problem to the case of uncertain old evidence and probabilistic new explanation.
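The mechanism behind this can be sketched concretely: when a revision is specified by ratios of new-to-old odds across a partition, the Jeffrey update amounts to multiplying the prior by a factor that is constant on each cell and then renormalizing, and such multiplications commute. A self-contained toy check (partition names and factors invented for illustration):

```python
# Jeffrey revisions parameterized by new-to-old odds ratios commute;
# revisions parameterized by fixed posterior cell probabilities generally do not.

def normalize(p):
    total = sum(p.values())
    return {s: v / total for s, v in p.items()}

def revise_by_factors(prior, cell_of, factors):
    # Multiply each state's probability by its cell's factor, then renormalize.
    return normalize({s: p * factors[cell_of[s]] for s, p in prior.items()})

states = [(r, w) for r in ("R", "~R") for w in ("W", "~W")]
prior = {("R", "W"): 0.1, ("R", "~W"): 0.3, ("~R", "W"): 0.2, ("~R", "~W"): 0.4}
rain = {s: s[0] for s in states}  # partition by first coordinate
wind = {s: s[1] for s in states}  # partition by second coordinate

# Two pieces of learning, each encoded as ratios of new to old odds.
ev_rain = {"R": 3.0, "~R": 1.0}
ev_wind = {"W": 0.5, "~W": 1.0}

one_order = revise_by_factors(revise_by_factors(prior, rain, ev_rain), wind, ev_wind)
other_order = revise_by_factors(revise_by_factors(prior, wind, ev_wind), rain, ev_rain)
assert all(abs(one_order[s] - other_order[s]) < 1e-12 for s in states)
```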
I show that David Lewis’s principal principle is not preserved under Jeffrey conditionalization. Using this observation, I argue that Lewis’s reason for rejecting the desire as belief thesis and Adams’s thesis applies also to his own principal principle. 1 Introduction; 2 Adams’s Thesis, the Desire as Belief Thesis, and the Principal Principle; 3 Jeffrey Conditionalization; 4 The Principal Principle Is Not Preserved under Jeffrey Conditionalization; 5 Inadmissible Experiences.
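A toy numerical illustration of the phenomenon (my own construction, not an example from the paper): start with a credence function that satisfies the principal principle over two chance hypotheses, Jeffrey-conditionalize on the partition {A, ~A}, and observe that the chance-credence link breaks.

```python
# Prior over four states: chance hypothesis (H1: ch(A)=0.2, H2: ch(A)=0.8)
# crossed with whether A obtains. The prior satisfies the principal
# principle: C(A | Hi) equals the chance that Hi assigns to A.
prior = {("H1", "A"): 0.5 * 0.2, ("H1", "~A"): 0.5 * 0.8,
         ("H2", "A"): 0.5 * 0.8, ("H2", "~A"): 0.5 * 0.2}

def jeffrey(prior, cell_of, new_cell_probs):
    old = {}
    for s, p in prior.items():
        old[cell_of[s]] = old.get(cell_of[s], 0.0) + p
    return {s: p * new_cell_probs[cell_of[s]] / old[cell_of[s]]
            for s, p in prior.items()}

# Jeffrey-conditionalize on {A, ~A}, raising C(A) from 0.5 to 0.7.
post = jeffrey(prior, {s: s[1] for s in prior}, {"A": 0.7, "~A": 0.3})

cred_A_given_H1 = post[("H1", "A")] / (post[("H1", "A")] + post[("H1", "~A")])
print(round(cred_A_given_H1, 3))  # ~0.368, not the 0.2 the principle demands
```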
Richard Jeffrey and Michael Goldstein have both introduced systematic approaches to the structure of opinion changes. For both approaches there are theorems which indicate great generality and width of scope. The main questions addressed here will be to what extent the basic forms of representation are intertranslatable, and how we can conceive of such programs in general.
Warren Samuels maintains that every society has a constant amount of coercion and order, which vary only in terms of who gains and who loses, because every society has a government that establishes property rights. In making these arguments, Samuels exaggerates the extent to which governmental decisions predetermine the workings of a market society, and he fails to recognize that, with regard to the attainment of specific socioeconomic outcomes, governmental stipulation of private property rights differs fundamentally from governmental command and control.
Many strands are woven into the ideas and work of Jeffrey Gray. From a background of classical languages and a spell in military intelligence spent honing skills in languages and typing, he took two BA degrees (in modern languages and psychology) at Oxford University. He then trained as a clinical psychologist at the Institute of Psychiatry (IOP), London, capping this with a PhD on the sources of emotional behaviour.
Suppose n Bayesian agents need to make a decision as a group. The group as a whole is also supposed to be a Bayesian agent whose probabilities and utilities are derived or aggregated in reasonable ways from the probabilities and utilities of the group members. The aggregation could be ex ante, i.e., in terms of expected utilities, or it could be ex post, i.e., in terms of utilities only, or in terms of utilities and probabilities separately. This study explores the ex post approach. Using the Bolker/Jeffrey framework, we show that ex post aggregation is subject to an instability phenomenon. That is, it may happen that the group preference between actions "flips back and forth" depending on the level of detail in which the decision problem is described. Structurally very similar phenomena also occur elsewhere in social choice theory, in statistics (Simpson's Paradox), and in voting theory (Ostrogorski's Paradox).
In this commentary, after first summarizing the three major theses of Jeffrey's paper "Probability and Falsification: Critique of the Popper Program" and sketching out what I take to be his central argument, I criticize Jeffrey on two grounds. The first is that he has failed to explain why his version of Bayesianism provides us with better theories upon which to make decisions; the second is that he has offered a theory about decision-making that bypasses the important question: How can we make more rational decisions?
(2013). Review of Jeffrey P. Spike, Thomas R. Cole, Richard Buday, Freeman Williams, and Mary Ann Pendino, The Brewsters. The American Journal of Bioethics: Vol. 13, No. 3, pp. 52-54. doi: 10.1080/15265161.2013.760988.
In this paper, I argue for a view largely favorable to the Thirder view: when Sleeping Beauty wakes up on Monday, her credence in the coin’s landing heads is less than 1/2. Let’s call this “the Lesser view.” For my argument, I (i) criticize Strict Conditionalization as the rule for changing de se credences; (ii) develop a new rule; and (iii) defend it by appeal to Gaifman’s Expert Principle. Finally, I defend the Lesser view by making use of this new rule.
Since Francis Crick popularized the term `Neural Correlate of Consciousness' (NCC), it has been the focus of what is perhaps the most exciting research area in the cognitive sciences. Different researchers and laboratories have offered different brain structures as candidates for the NCC prize. Different chunks of gray matter have been identified as the potential seat of consciousness. Some researchers attempt to identify the NCC via a characterization of the cognitive aspects of consciousness, such as its functional significance or intentional directedness, while others attempt a direct identification of the NCC, without any cognitive intermediary. Needless to say, no consensus is in sight on any of this.
Jonathan Weisberg claims that certain probability assessments constructed by Jeffrey conditioning resist subsequent revision by a certain type of after-the-fact defeater of the reasons supporting those assessments, and that such conditioning is thus “inherently anti-holistic.” His analysis founders, however, in applying Jeffrey conditioning to a partition for which an essential rigidity condition clearly fails. Applied to an appropriate partition, Jeffrey conditioning is amenable to revision by the sort of after-the-fact defeaters considered by Weisberg in precisely the way that he demands.