This paper works within a particular framework for reasoning about actions—sometimes known as the framework of “stit semantics”—originally due to Belnap and Perloff, based ultimately on the theory of indeterminism set out in Prior’s indeterministic tense logic, and developed in full detail by Belnap, Perloff, and Xu. The issues I want to consider arise when certain normative, or decision theoretic, notions are introduced into this framework: here I will focus on the notion of a right action, and so on the formulation of act utilitarianism within this indeterministic setting. The problem is simply that there are two different, and conflicting, ways of defining this notion, both well-motivated, and both carrying intuitive weight.
This book provides a unified account of Hansson’s work on values (or preferences), norms, and their interrelations. Although much of the detailed material contained here appears among the numerous articles published by the author over the past decade or so, the book presents this work as a coherent whole. The overall style is formal: definitions are set out, results are established. Readers who do not enjoy formal work in value theory are likely to find little of interest here. But readers who do appreciate this kind of work would do well to become familiar with Hansson’s results, and with his general perspective.
The purpose of this paper is to establish some connections between precedent-based reasoning as it is studied in the field of Artificial Intelligence and Law, particularly in the work of Ashley, and two other fields: deontic logic and nonmonotonic logic. First, a deontic logic is described that allows for sensible reasoning in the presence of conflicting norms. Second, a simplified version of Ashley's account of precedent-based reasoning is reformulated within the framework of this deontic logic. Finally, some ideas from the theory of nonmonotonic inheritance are employed to show how Ashley's account might be elaborated to allow for a richer representation of the process of argumentation.
The purpose of this paper is to explore a new deontic operator for representing what an agent ought to do; the operator is cast against the background of a modal treatment of action developed by Nuel Belnap and Michael Perloff, which itself relies on Arthur Prior's indeterministic tense logic. The analysis developed here of what an agent ought to do is based on a dominance ordering adapted from the decision theoretic study of choice under uncertainty to the present account of action. It is shown that this analysis gives rise to a normal deontic operator, and that the result is superior to an analysis that identifies what an agent ought to do with what it ought to be that the agent does.
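The dominance ordering mentioned in this abstract can be illustrated with a minimal sketch. This is my own simplified rendering of weak dominance from decision theory, not the paper's formal definitions, which are developed against a branching-time account of action; the state and action names are invented for the example.

```python
# A minimal sketch of a dominance ordering over actions, in the style of
# choice under uncertainty. Each action is represented as a mapping from
# background states to outcome utilities (the states "s1", "s2" and the
# utilities are hypothetical, chosen only for illustration).
def weakly_dominates(a, b):
    """a is at least as good as b in every state."""
    return all(a[s] >= b[s] for s in a)

def dominates(a, b):
    """Weak dominance, plus strict improvement in at least one state."""
    return weakly_dominates(a, b) and any(a[s] > b[s] for s in a)

act1 = {"s1": 3, "s2": 2}
act2 = {"s1": 1, "s2": 2}
print(dominates(act1, act2))  # True: act1 is never worse, and better in s1
```

On this ordering, an action an agent ought to perform can then be characterized, roughly, as one that is not dominated by any alternative open to the agent.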
Early attempts at combining multiple inheritance with nonmonotonic reasoning were based on straightforward extensions of tree-structured inheritance systems, and were theoretically unsound. In The Mathematics of Inheritance Systems, or TMOIS, Touretzky described two problems these systems cannot handle: reasoning in the presence of true but redundant assertions, and coping with ambiguity. TMOIS provided a definition and analysis of a theoretically sound multiple inheritance system, accom-.
Working with logical techniques pushes the requirement of rigour so high that pressures of complexity enforce a very narrow focus. . . Nontechnical philosophers are naturalists who describe what they see with the naked eye. Logicians examine nature through their microscopes or X-ray cameras: what they see is also an aspect of nature, but a different one.
This paper describes one way in which a precise reason model of precedent could be developed, based on Grant Lamond’s general idea that a later court is constrained to reach a decision that is consistent with an earlier court’s assessment of the balance of reasons. The account provided here has the additional advantage of showing how this reason model can be reconciled with the traditional idea that precedential constraint involves rules, as long as these rules are taken to be defeasible.
The doctrine of precedent, as it has evolved within the common law, is constituted by a system of conventions through which the decisions of earlier courts in particular cases somehow generalize to constrain the decisions of later courts facing different cases, while still allowing these later courts a degree of freedom in responding to fresh circumstances. Although the techniques for reasoning with precedents are taught early on in law schools, mastered with relative ease, and applied on a daily basis by legal practitioners, it has proved to be considerably more difficult to arrive at a theoretical understanding of the doctrine itself, a clear articulation of the underlying system of conventions.
The purpose of this paper is to question some commonly accepted patterns of reasoning involving nonmonotonic logics that generate multiple extensions. In particular, I argue that the phenomenon of floating conclusions indicates a problem with the view that the skeptical consequences of such theories should be identified with the statements that are supported by each of their various extensions.
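The phenomenon of floating conclusions at issue in this abstract can be shown in a small sketch, in the style of the well-known Nixon example from the nonmonotonic reasoning literature. The extensions here are hard-coded for illustration; they are not drawn from the paper itself.

```python
# Two conflicting extensions of a default theory: one concludes that
# Nixon, as a Quaker, is a pacifist; the other, that as a Republican,
# he is not. Each extension nevertheless supports the further claim
# that he is politically extreme -- but by incompatible routes.
ext1 = {"quaker", "republican", "pacifist", "extreme"}
ext2 = {"quaker", "republican", "not_pacifist", "extreme"}

# The standard definition of skeptical consequence: the statements
# supported by every extension.
skeptical = ext1 & ext2
print(sorted(skeptical))  # "extreme" survives, though neither route to it does
```

The conclusion "extreme" floats: it belongs to every extension, and so counts as a skeptical consequence on the standard definition, even though each argument supporting it is defeated in some extension. It is exactly this identification that the paper questions.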
This paper describes one way in which a precise reason model of precedent could be developed, based on the general idea that courts are constrained to reach a decision that is consistent with the assessment of the balance of reasons made in relevant earlier decisions. The account provided here has the additional advantage of showing how this reason model can be reconciled with the traditional idea that precedential constraint involves rules, as long as these rules are taken to be defeasible. The account presented is firmly based on a body of work that has emerged in AI and Law. This work is discussed, and there is a particular discussion of approaches based on theory construction, and how that work relates to the model described in this paper.
John Pollock (1940–2009) was an influential American philosopher who made important contributions to various fields, including epistemology and cognitive science. In the last 25 years of his life, he also contributed to the computational study of defeasible reasoning and practical cognition in artificial intelligence. He developed one of the first formal systems for argumentation-based inference and he put many issues on the research agenda that are still relevant for the argumentation community today. This paper presents an appreciation of Pollock's work on defeasible reasoning and its relevance for the computational study of argument. In our opinion, Pollock deserves to be remembered as one of the founding fathers of the field of computational argument, while, moreover, his work contains important lessons for current research in this field, reminding us of the richness of its object of study.
The goal of this paper is to frame a theory of reasons--what they are, how they support actions or conclusions--using the tools of default logic. After sketching the basic account of reasons as provided by defaults, I show how it can be elaborated to deal with two more complicated issues: first, situations in which the priority relation among defaults, and so reasons as well, is itself established through default reasoning; second, the treatment of undercutting defeat and exclusionary reasons. Finally, and by way of application, I show how the resulting account can shed some light on Jonathan Dancy's argument from reason holism to a form of extreme particularism in moral theory.
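The basic picture of reasons as prioritized defaults can be sketched in a few lines. This is a deliberately crude simplification under my own assumptions, with a fixed priority ordering and no undercutting defeat; the paper's formal machinery is considerably richer.

```python
# A default: if its premise holds, it provides a reason for its
# conclusion; a higher-priority reason defeats conflicting lower-priority
# ones. Conclusions are literals, with "~p" the negation of "p".
from dataclasses import dataclass

@dataclass
class Default:
    premise: str
    conclusion: str
    priority: int

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def conclusions(facts, defaults):
    """Apply triggered defaults, letting stronger reasons defeat rivals."""
    triggered = [d for d in defaults if d.premise in facts]
    out = set(facts)
    for d in sorted(triggered, key=lambda d: -d.priority):
        if negate(d.conclusion) not in out:  # not defeated by a stronger reason
            out.add(d.conclusion)
    return out

# The standard example: birds fly, but the more specific penguin default
# carries higher priority and so defeats the bird default.
defaults = [Default("bird", "flies", 1), Default("penguin", "~flies", 2)]
print(sorted(conclusions({"bird", "penguin"}, defaults)))  # includes "~flies", not "flies"
```

The paper's first elaboration would then let the priority values themselves be the conclusions of further defaults, rather than fixed integers as in this sketch.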
The result model of precedent holds that a legal precedent controls a fortiori cases—those cases, that is, that are at least as strong for the winning side of the precedent as the precedent case itself. This paper defends the result model against some objections by Larry Alexander, drawing on ideas from the field of Artificial Intelligence and Law in order to define an appropriate strength ordering for cases.
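A strength ordering of the kind this abstract gestures at can be sketched in the factor-based style common in AI and Law. This is my own minimal rendering, not the paper's exact definitions; the factor names are hypothetical.

```python
# A case is represented as a pair of factor sets, one favoring each side.
# A new case is at least as strong for the plaintiff as a precedent if it
# retains all the precedent's pro-plaintiff factors while introducing no
# new pro-defendant factors -- the a fortiori relation.
def at_least_as_strong_for_plaintiff(new, precedent):
    new_p, new_d = new
    prec_p, prec_d = precedent
    return prec_p <= new_p and new_d <= prec_d

precedent = ({"f1", "f2"}, {"f3"})        # decided for the plaintiff
a_fortiori = ({"f1", "f2", "f4"}, set())  # stronger on both dimensions
print(at_least_as_strong_for_plaintiff(a_fortiori, precedent))  # True
```

On the result model, a precedent decided for the plaintiff would then control any later case standing in this relation to it.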
Let us say that a normative conflict is a situation in which an agent ought to perform an action A, and also ought to perform an action B, but in which it is impossible for the agent to perform both A and B. Not all normative conflicts are moral conflicts, of course. It may be that the agent ought to perform the action A for reasons of personal generosity, but ought to perform the action B for reasons of prudence: perhaps A involves buying a lavish gift for a friend, while B involves depositing a certain amount of money in the bank. In general, our practical deliberation is shaped by a concern with a variety of morally neutral goods—not just generosity and prudence, but any number of others, such as etiquette, aesthetics, fun—many of which are capable of providing conflicting reasons for action. I mention these ancillary values in the present setting, however, only to put them aside. We will be concerned here, not with normative conflicts more generally, but precisely with moral conflicts—situations in which, even when our attention is restricted entirely to moral reasons for action, it is nevertheless true that an agent ought to do A and ought to do B, where it is impossible to do both.
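In the notation of standard deontic logic, with $\bigcirc$ for obligation and $\Diamond$ for possibility, the situation just described amounts to the following (a routine formalization, not a formula quoted from the text):

```latex
\bigcirc A \;\wedge\; \bigcirc B \;\wedge\; \neg\Diamond(A \wedge B)
```

On the standard assumptions of agglomeration ($\bigcirc A \wedge \bigcirc B \rightarrow \bigcirc(A \wedge B)$) and "ought implies can," this conjunction is inconsistent, which is why moral conflicts put pressure on the orthodox framework.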
This paper points out some problems with two recent logical systems – one due to Prakken and Sartor, the other due to Kowalski and Toni – designed for the representation of defeasible arguments in general, but with a special emphasis on legal reasoning.
John Horty effectively develops deontic logic (the logic of ethical concepts like obligation and permission) against the background of a formal theory of agency. He incorporates certain elements of decision theory to set out a new deontic account of what agents ought to do under various conditions over extended periods of time. Offering a conceptual rather than technical emphasis, Horty's framework allows a number of recent issues from moral theory to be set out clearly and discussed from a uniform point of view.
From a philosophical standpoint, the work presented here is based on van Fraassen. The bulk of that paper is organized around a series of arguments against the assumption, built into standard deontic logic, that moral dilemmas are impossible; and van Fraassen only briefly sketches his alternative approach. His paper ends with the conclusion that “the problem of possibly irresolvable moral conflict reveals serious flaws in the philosophical and semantic foundations of ‘orthodox’ deontic logic, but also suggests a rich set of new problems and methods for such logic.” My goal has been to suggest that some of these methods might be found in current research on nonmonotonic reasoning, and that some of the problems may have been confronted there as well. I have shown that nonmonotonic logics provide a natural framework for reasoning about moral dilemmas, perhaps even more useful than the ordinary modal framework, and that the issues surrounding the treatment of exceptional information within these logics run parallel to some of the problems posed by conditional oughts. However, there is also another way in which deontic logic might benefit from a connection to nonmonotonic reasoning. A familiar criticism among ethicists of work in deontic logic is that it is too abstract, and too far removed from the kind of problems confronted by real agents in moral deliberation. It must be said that similar criticisms of abstraction and irrelevance are often lodged against work in nonmonotonic reasoning by more practically minded researchers in artificial intelligence; but here, at least, the criticisms are taken seriously. Nonmonotonic logic aims at a qualitative account of commonsense reasoning, which can be used to relate planning and action to defeasible goals and beliefs; and at least some of the theories developed in this area have been tested in realistic situations. By linking the subject of deontic logic to this research, it may be possible also to relate the idealized study of moral reasoning typical of the field to a more robust treatment of practical deliberation.