In this volume, John Horty brings to bear his work in logic to present a framework that allows for answers to key questions about reasons and reasoning, namely: What are reasons, and how do they support actions or conclusions?
John Horty effectively develops deontic logic (the logic of ethical concepts like obligation and permission) against the background of a formal theory of agency. He incorporates certain elements of decision theory to set out a new deontic account of what agents ought to do under various conditions over extended periods of time. Offering a conceptual rather than technical emphasis, Horty's framework allows a number of recent issues from moral theory to be set out clearly and discussed from a uniform point of view.
The goal of this paper is to frame a theory of reasons--what they are, how they support actions or conclusions--using the tools of default logic. After sketching the basic account of reasons as provided by defaults, I show how it can be elaborated to deal with two more complicated issues: first, situations in which the priority relation among defaults, and so reasons as well, is itself established through default reasoning; second, the treatment of undercutting defeat and exclusionary reasons. Finally, and by way of application, I show how the resulting account can shed some light on Jonathan Dancy's argument from reason holism to a form of extreme particularism in moral theory.
This paper shows how two models of precedential constraint can be broadened to include legal information represented through dimensions. I begin by describing a standard representation of legal cases based on boolean factors alone, and then reviewing two models of constraint developed within this standard setting. The first is the “result model”, supporting only a fortiori reasoning. The second is the “reason model”, supporting a richer notion of constraint, since it allows the reasons behind a court’s decisions to be taken into account. I then show how the initial representation can be modified to incorporate dimensional information and how the result and reason models can be adapted to this new dimensional setting. As it turns out, these two models of constraint, which are distinct in the standard setting, coincide once they are transposed to the new dimensional setting, yielding exactly the same patterns of constraint. I therefore explore two ways of refining the reason model of constraint so that, even in the dimensional setting, it can still be separated from the result model.
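The a fortiori constraint at the heart of the result model can be made concrete. The following is a minimal sketch, in Python, of the standard boolean-factor setting described above: a case is represented as a pair of factor sets (one favoring each side) plus an outcome. The representation and all names here are illustrative assumptions for exposition, not notation drawn from the paper.

```python
# A minimal sketch of the "result model" constraint in the standard
# boolean-factor setting.  Representation and names are illustrative.

def at_least_as_strong(new_pi, new_delta, precedent):
    """Test whether the new fact situation is at least as strong for the
    side that won the precedent as the precedent case itself."""
    prec_pi, prec_delta, outcome = precedent
    if outcome == "p":
        # At least as strong for the plaintiff: every plaintiff factor of
        # the precedent is present, and no extra defendant factors appear.
        return prec_pi <= new_pi and new_delta <= prec_delta
    else:
        return prec_delta <= new_delta and new_pi <= prec_pi

def result_model_forces(new_pi, new_delta, precedents):
    """Return the outcomes forced a fortiori by the given precedents."""
    return {prec[2] for prec in precedents
            if at_least_as_strong(new_pi, new_delta, prec)}

# A precedent decided for the plaintiff ("p"), with one factor per side.
precedent = (frozenset({"f1"}), frozenset({"f2"}), "p")

# A new case that is at least as strong for the plaintiff is forced.
forced = result_model_forces(frozenset({"f1", "f3"}), frozenset(), [precedent])
```

A new case that adds a defendant factor the precedent lacked would not be forced, which is exactly the sense in which the result model supports only a fortiori reasoning.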
Let us say that a normative conflict is a situation in which an agent ought to perform an action A, and also ought to perform an action B, but in which it is impossible for the agent to perform both A and B. Not all normative conflicts are moral conflicts, of course. It may be that the agent ought to perform the action A for reasons of personal generosity, but ought to perform the action B for reasons of prudence: perhaps A involves buying a lavish gift for a friend, while B involves depositing a certain amount of money in the bank. In general, our practical deliberation is shaped by a concern with a variety of morally neutral goods—not just generosity and prudence, but any number of others, such as etiquette, aesthetics, fun—many of which are capable of providing conflicting reasons for action. I mention these ancillary values in the present setting, however, only to put them aside. We will be concerned here, not with normative conflicts more generally, but precisely with moral conflicts—situations in which, even when our attention is restricted entirely to moral reasons for action, it is nevertheless true that an agent ought to do A and ought to do B, where it is impossible to do both.
From a philosophical standpoint, the work presented here is based on van Fraassen. The bulk of that paper is organized around a series of arguments against the assumption, built into standard deontic logic, that moral dilemmas are impossible; and van Fraassen only briefly sketches his alternative approach. His paper ends with the conclusion that “the problem of possibly irresolvable moral conflict reveals serious flaws in the philosophical and semantic foundations of ‘orthodox’ deontic logic, but also suggests a rich set of new problems and methods for such logic.” My goal has been to suggest that some of these methods might be found in current research on nonmonotonic reasoning, and that some of the problems may have been confronted there as well. I have shown that nonmonotonic logics provide a natural framework for reasoning about moral dilemmas, perhaps even more useful than the ordinary modal framework, and that the issues surrounding the treatment of exceptional information within these logics run parallel to some of the problems posed by conditional oughts. However, there is also another way in which deontic logic might benefit from a connection to nonmonotonic reasoning. A familiar criticism among ethicists of work in deontic logic is that it is too abstract, and too far removed from the kind of problems confronted by real agents in moral deliberation. It must be said that similar criticisms of abstraction and irrelevance are often lodged against work in nonmonotonic reasoning by more practically minded researchers in artificial intelligence; but here, at least, the criticisms are taken seriously. Nonmonotonic logic aims at a qualitative account of commonsense reasoning, which can be used to relate planning and action to defeasible goals and beliefs; and at least some of the theories developed in this area have been tested in realistic situations. By linking the subject of deontic logic to this research, it may be possible also to relate the idealized study of moral reasoning typical of the field to a more robust treatment of practical deliberation.
This paper describes one way in which a precise reason model of precedent could be developed, based on the general idea that courts are constrained to reach a decision that is consistent with the assessment of the balance of reasons made in relevant earlier decisions. The account provided here has the additional advantage of showing how this reason model can be reconciled with the traditional idea that precedential constraint involves rules, as long as these rules are taken to be defeasible. The account presented is firmly based on a body of work that has emerged in AI and Law. This work is discussed, and there is a particular discussion of approaches based on theory construction, and how that work relates to the model described in this paper.
The purpose of this paper is to question some commonly accepted patterns of reasoning involving nonmonotonic logics that generate multiple extensions. In particular, I argue that the phenomenon of floating conclusions indicates a problem with the view that the skeptical consequences of such theories should be identified with the statements that are supported by each of their various extensions.
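The phenomenon at issue can be illustrated with a small worked example: a Nixon-diamond variant in which a conclusion "floats", belonging to every extension while being supported by incompatible reasons in each. The atoms, defaults, and brute-force extension computation below are illustrative assumptions for exposition, not the paper's formalism or a full default-logic implementation.

```python
from itertools import permutations

# A toy multiple-extension theory (a Nixon-diamond variant) illustrating
# a "floating conclusion".  Atoms and defaults are illustrative.

facts = {"quaker", "republican"}

# Normal defaults: if the prerequisite holds, defeasibly add the conclusion.
defaults = [
    ("quaker", "dove"),
    ("republican", "hawk"),
    ("dove", "extremist"),
    ("hawk", "extremist"),
]

# Jointly inconsistent conclusions.
conflicts = [{"dove", "hawk"}]

def consistent(atoms):
    return not any(pair <= atoms for pair in conflicts)

def extensions():
    """Compute extensions by applying defaults to a fixed point in every
    possible order and collecting the distinct results."""
    results = set()
    for order in permutations(defaults):
        atoms = set(facts)
        changed = True
        while changed:
            changed = False
            for prereq, concl in order:
                if prereq in atoms and concl not in atoms \
                        and consistent(atoms | {concl}):
                    atoms.add(concl)
                    changed = True
        results.add(frozenset(atoms))
    return results

exts = extensions()
skeptical = frozenset.intersection(*exts)  # supported by every extension
```

Here "extremist" lies in the intersection of the two extensions, yet each extension supports it by a reason ("dove" in one, "hawk" in the other) that the other extension rejects; this is the pattern of a floating conclusion that the paper argues should trouble the intersection-based view of skeptical consequence.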
The purpose of this paper is to explore a new deontic operator for representing what an agent ought to do; the operator is cast against the background of a modal treatment of action developed by Nuel Belnap and Michael Perloff, which itself relies on Arthur Prior's indeterministic tense logic. The analysis developed here of what an agent ought to do is based on a dominance ordering adapted from the decision theoretic study of choice under uncertainty to the present account of action. It is shown that this analysis gives rise to a normal deontic operator, and that the result is superior to an analysis that identifies what an agent ought to do with what it ought to be that the agent does.
The result model of precedent holds that a legal precedent controls a fortiori cases—those cases, that is, that are at least as strong for the winning side of the precedent as the precedent case itself. This paper defends the result model against some objections by Larry Alexander, drawing on ideas from the field of Artificial Intelligence and Law in order to define an appropriate strength ordering for cases.
Early attempts at combining multiple inheritance with nonmonotonic reasoning were based on straightforward extensions of tree-structured inheritance systems, and were theoretically unsound. In The Mathematics of Inheritance Systems, or TMOIS, Touretzky described two problems these systems cannot handle: reasoning in the presence of true but redundant assertions, and coping with ambiguity. TMOIS provided a definition and analysis of a theoretically sound multiple inheritance system, accom-.
In previous work, I showed how the “reason model” of precedential constraint could naturally be generalized from the standard setting in which it was first developed to a richer setting in which dimensional information is represented as well. Surprisingly, it then turned out that, in this new dimensional setting, the reason model of constraint collapsed into the “result model,” which supports only a fortiori reasoning. The purpose of this note is to suggest a modification of the reason model of constraint that distinguishes it from the result model even in the dimensional setting.
This paper describes one way in which a precise reason model of precedent could be developed, based on Grant Lamond’s general idea that a later court is constrained to reach a decision that is consistent with an earlier court’s assessment of the balance of reasons. The account provided here has the additional advantage of showing how this reason model can be reconciled with the traditional idea that precedential constraint involves rules, as long as these rules are taken to be defeasible.
In this chapter, we begin by sketching in the broadest possible strokes the ideas behind two formal systems that have been introduced with the goal of explicating the ways in which reasons interact to support the actions and conclusions they do. The first of these is the theory of defeasible reasoning developed in the seminal work of Pollock; the second is a more recent theory due to Horty, which adapts and develops the default logic introduced by Reiter to provide an account of reasons. However, the implementations are complex enough, in both cases, to prevent anything more than this sketch. And we would not want to give the impression that we think that work on the logic of reasons must follow the path mapped out in either of these theories—indeed, we feel that the field is wide open. In the remainder of the chapter, therefore, we will concentrate on a number of issues bearing on the logic of reasons that are either not treated in the work of Pollock and Horty, or whose treatment there is, we feel, either inadequate or incomplete. These are: first, the question of whether it is necessary to understand logical interactions among reasons themselves, rather than simply between reasons and the actions or conclusions they support, and if so, what principles might govern these interactions; second, priority relations among reasons and the notion of reason accrual; and third, some problems posed by undercutting defeat.
This paper contributes to the foundations of a theory of rational choice for artificial agents in dynamic environments. Our work is developed within a theoretical framework, originally due to Bratman, that models resource-bounded agents as operating against the background of some current set of intentions, which helps to frame their subsequent reasoning. In contrast to the standard theory of rational choice, where options are evaluated in isolation, we therefore provide an analysis of situations in which the options presented to an agent are evaluated against a background context provided by the agent’s current plans—commitments to future activities, which may themselves be only partially specified. The interactions between the new options and the background context can complicate the task of evaluating the option, rendering it either more or less desirable in context than it would have been in isolation.
I begin by reviewing classical semantics and the problems presented by normative conflicts. After a brief detour through default logic, I establish some connections between the treatment of conflicts in each of these two approaches, classical and default, and then move on to consider some further issues: priorities among norms, or reasons; conditional oughts; and reasons about reasons.
This paper points out some problems with two recent logical systems – one due to Prakken and Sartor, the other due to Kowalski and Toni – designed for the representation of defeasible arguments in general, but with a special emphasis on legal reasoning.
The purpose of this paper is to establish some connections between precedent-based reasoning as it is studied in the field of Artificial Intelligence and Law, particularly in the work of Ashley, and two other fields: deontic logic and nonmonotonic logic. First, a deontic logic is described that allows for sensible reasoning in the presence of conflicting norms. Second, a simplified version of Ashley's account of precedent-based reasoning is reformulated within the framework of this deontic logic. Finally, some ideas from the theory of nonmonotonic inheritance are employed to show how Ashley's account might be elaborated to allow for a richer representation of the process of argumentation.
This paper contributes to our formal understanding of the common law — especially the nature of the reasoning involved, but also its point, or justification, in terms of social coordination. I present two apparently distinct models of constraint by precedent in the common law, establish their equivalence, and argue for a perspective according to which courts are best thought of, not as creating and modifying rules, but as generating a social priority ordering on reasons through a procedure that is piecemeal, distributed, and responsive to particular circumstances.
John Pollock (1940–2009) was an influential American philosopher who made important contributions to various fields, including epistemology and cognitive science. In the last 25 years of his life, he also contributed to the computational study of defeasible reasoning and practical cognition in artificial intelligence. He developed one of the first formal systems for argumentation-based inference and he put many issues on the research agenda that are still relevant for the argumentation community today. This paper presents an appreciation of Pollock's work on defeasible reasoning and its relevance for the computational study of argument. In our opinion, Pollock deserves to be remembered as one of the founding fathers of the field of computational argument, while, moreover, his work contains important lessons for current research in this field, reminding us of the richness of its object of study.
This paper works within a particular framework for reasoning about actions—sometimes known as the framework of “stit semantics”—originally due to Belnap and Perloff, based ultimately on the theory of indeterminism set out in Prior’s indeterministic tense logic, and developed in full detail by Belnap, Perloff, and Xu. The issues I want to consider arise when certain normative, or decision theoretic, notions are introduced into this framework: here I will focus on the notion of a right action, and so on the formulation of act utilitarianism within this indeterministic setting. The problem is simply that there are two different, and conflicting, ways of defining this notion, both well-motivated, and both carrying intuitive weight.
In this short monograph, John Horty explores the difficulties presented for Gottlob Frege's semantic theory, as well as its modern descendants, by the treatment of defined expressions. The book begins by focusing on the psychological constraints governing Frege's notion of sense, or meaning, and argues that, given these constraints, even the treatment of simple stipulative definitions led Frege to important difficulties. Horty is able to suggest ways out of these difficulties that are both philosophically and logically plausible and Fregean in spirit. This discussion is then connected to a number of more familiar topics, such as indexicality and the discussion of concepts in recent theories of mind and language. In the latter part of the book, after introducing a simple semantic model of senses as procedures, Horty considers the problems that definitions present for Frege's idea that the sense of an expression should mirror its grammatical structure. The requirement can be satisfied, he argues, only if defined expressions--and incomplete expressions as well--are assigned senses of their own, rather than treated contextually. He then explores one way in which these senses might be reified within the procedural model, drawing on ideas from work in the semantics of computer programming languages. With its combination of technical semantics and history of philosophy, Horty's book tackles some of the hardest questions in the philosophy of language. It should interest philosophers, logicians, and linguists.
This book provides a unified account of Hansson’s work on values (or preferences), norms, and their interrelations. Although much of the detailed material contained here appears among the numerous articles published by the author over the past decade or so, the book presents this work as a coherent whole. The overall style is formal: definitions are set out, results are established. Readers who do not enjoy formal work in value theory are likely to find little of interest here. But readers who do appreciate this kind of work would do well to become familiar with Hansson’s results, and with his general perspective.