In this paper we describe an approach to practical reasoning, reasoning about what it is best for a particular agent to do in a given situation, based on presumptive justifications of action through the instantiation of an argument scheme, which is then subject to examination through a series of critical questions. We identify three particular aspects of practical reasoning which distinguish it from theoretical reasoning. We next provide an argument scheme and an associated set of critical questions which is able to capture these features. In order that both the argument scheme and the critical questions can be given precise interpretations we use the semantic structure of an Action-Based Alternating Transition System as the basis for their definition. We then work through a detailed example to show how this approach to practical reasoning can be applied to a problem solving situation, and briefly describe some other previous applications of the general approach. In a second example we relate our account to the social laws paradigm for co-ordinating multi-agent systems. The contribution of the paper is to provide firm foundations for an approach to practical reasoning based on presumptive argument in terms of a well-known model for representing the effects of actions of a group of agents.
Reasoning with cases has been a primary focus of those working in AI and law who have attempted to model legal reasoning. In this paper we put forward a formal model of reasoning with cases which captures many of the insights from that previous work. We begin by stating our view of reasoning with cases as a process of constructing, evaluating and applying a theory. Central to our model is a view of the relationship between cases, rules based on cases, and the social values which justify those rules. Having given our view of these relationships, we present our formal model of them, and explain how theories can be constructed, compared and evaluated. We then show how previous work can be described in terms of our model, and discuss extensions to the basic model to accommodate particular features of previous work. We conclude by identifying some directions for future work.
In this paper we describe the impact that Walton’s conception of argumentation schemes had on AI and Law research. We will discuss developments in argumentation in AI and Law before Walton’s schemes became known in that community, and the issues that were current in that work. We will then show how Walton’s schemes provided a means of addressing all of those issues, and so supplied a unifying perspective from which to view argumentation in AI and Law.
This paper presents a methodology to design and implement programs intended to decide cases, described as sets of factors, according to a theory of a particular domain based on a set of precedent cases relating to that domain. We use Abstract Dialectical Frameworks (ADFs), a recent development in AI knowledge representation, as the central feature of our design method. ADFs will play a role akin to that played by Entity–Relationship models in the design of database systems. First, we explain how the factor hierarchy of the well-known legal reasoning system CATO can be used to instantiate an ADF for the domain of US Trade Secrets. This is intended to demonstrate the suitability of ADFs for expressing the design of legal case-based systems. The method is then applied to two other legal domains often used in the literature of AI and Law. In each domain, the design is provided by the domain analyst expressing the cases in terms of factors organised into an ADF from which an executable program can be implemented in a straightforward way by taking advantage of the closeness of the acceptance conditions of the ADF to components of an executable program. We evaluate the ease of implementation, the performance and efficacy of the resulting program, ease of refinement of the program and the transparency of the reasoning. This evaluation suggests ways in which factor-based systems, which are limited by taking as their starting point the representation of cases as sets of factors and so abstracting away the particular facts, can be extended to address open issues in AI and Law by incorporating the case facts to improve the decision, and by considering justification and reasoning using portions of precedents.
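The closeness of ADF acceptance conditions to executable code can be sketched as follows. This is a minimal illustration in Python, assuming an invented three-factor domain: the factor names, the acceptance condition, and the outcome labels below are hypothetical, not the actual CATO factor hierarchy for US Trade Secrets.

```python
# Sketch: an ADF node is a statement whose acceptance is a boolean function
# of its children. Factor names and conditions here are hypothetical
# illustrations, not drawn from CATO or US Trade Secrets law.

def decide(facts):
    """Evaluate a tiny ADF bottom-up, from base-level factors to a verdict."""
    # Base-level factors: either present or absent in the case description.
    info_valuable = "InfoValuable" in facts
    security_measures = "SecurityMeasures" in facts
    info_disclosed = "InfoDisclosed" in facts

    # Intermediate node: its acceptance condition over the child factors.
    trade_secret = info_valuable and security_measures and not info_disclosed

    # Root node: find for the plaintiff iff the intermediate node is accepted.
    return "plaintiff" if trade_secret else "defendant"

print(decide({"InfoValuable", "SecurityMeasures"}))  # plaintiff
print(decide({"InfoValuable", "InfoDisclosed"}))     # defendant
```

Each acceptance condition translates directly into one boolean expression, which is why the paper can describe implementation from the ADF as straightforward.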
In this paper we consider persuasion in the context of practical reasoning, and discuss the problems associated with construing reasoning about actions in a manner similar to reasoning about beliefs. We propose a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton. From this perspective, we articulate an interaction protocol, which we call PARMA, for dialogues over proposed actions based on this theory. We outline an axiomatic semantics for the PARMA Protocol, and discuss two implementations which use this protocol to mediate a discussion between humans. We then show how our proposal can be made computational within the framework of agents based on the Belief-Desire-Intention model, and illustrate this proposal with an example debate within a multi-agent system.
This paper describes one way in which a precise reason model of precedent could be developed, based on the general idea that courts are constrained to reach a decision that is consistent with the assessment of the balance of reasons made in relevant earlier decisions. The account provided here has the additional advantage of showing how this reason model can be reconciled with the traditional idea that precedential constraint involves rules, as long as these rules are taken to be defeasible. The account presented is firmly based on a body of work that has emerged in AI and Law. This work is discussed, and there is a particular discussion of approaches based on theory construction, and how that work relates to the model described in this paper.
This paper studies the use of hypothetical and value-based reasoning in US Supreme Court cases concerning the United States Fourth Amendment. Drawing upon formal AI & Law models of legal argument, a semi-formal reconstruction is given of parts of the Carney case, which has been studied previously in AI & Law research on case-based reasoning. As part of the reconstruction, a semi-formal proposal is made for extending the formal AI & Law models with forms of metalevel reasoning in several argument schemes. The result is compared with Rissland’s (1989) analysis in terms of dimensions and Ashley’s (2008) analysis in terms of his process model of legal argument with hypotheticals.
We provide a retrospective of 25 years of the International Conference on AI and Law, which was first held in 1987. Fifty papers have been selected from the thirteen conferences and each of them is described in a short subsection individually written by one of the 24 authors. These subsections attempt to place the paper discussed in the context of the development of AI and Law, while often offering some personal reactions and reflections. As a whole, the subsections build into a history of the last quarter century of the field, and provide some insights into where it has come from, where it is now, and where it might go.
Doug Walton, who died in January 2020, was a prolific author whose work in informal logic and argumentation had a profound influence on Artificial Intelligence, including Artificial Intelligence and Law. He was also very interested in interdisciplinary work, and a frequent and generous collaborator. In this paper seven leading researchers in AI and Law, all past programme chairs of the International Conference on AI and Law who have worked with him, describe his influence on their work.
There is an increasing need for norms to be embedded in technology as the widespread deployment of applications such as autonomous driving, warfare and big data analysis for crime fighting and counter-terrorism becomes ever closer. Current approaches to norms in multi-agent systems tend either to simply make prohibited actions unavailable, or to provide a set of rules which the agent is obliged to follow, either as part of its design or to avoid sanctions and punishments. In this paper we argue for the position that agents should be equipped with the ability to reason about a system’s norms, by reasoning about the social and moral values that norms are designed to serve; that is, perform the sort of moral reasoning we expect of humans. In particular we highlight the need for such reasoning when circumstances are such that the rules should arguably be broken, so that the reasoning can guide agents in deciding whether to comply with the norms and, if violation is desirable, how best to violate them. One approach to enabling this is to make use of an argumentation scheme based on values and designed for practical reasoning: arguments for and against actions are generated using this scheme and agents choose between actions based on their preferences over these values. Moral reasoning then requires that agents have an acceptable set of values and an acceptable ordering on their values. We first discuss how this approach can be used to think about and justify norms in general, and then discuss how this reasoning can be used to think about when norms should be violated, and the form this violation should take. We illustrate how value based reasoning can be used to decide when and how to violate a norm using a road traffic example. We also briefly consider what makes an ordering on values acceptable, and how such an ordering might be determined.
The first issue of _Artificial Intelligence and Law_ journal was published in 1992. This paper provides commentaries on landmark papers from the first decade of that journal. The topics discussed include reasoning with cases, argumentation, normative reasoning, dialogue, representing legal knowledge and neural networks.
The first issue of Artificial Intelligence and Law journal was published in 1992. This paper offers some commentaries on papers drawn from the Journal’s third decade. They indicate a major shift within Artificial Intelligence, both generally and in AI and Law: away from symbolic techniques to those based on Machine Learning approaches, especially those based on Natural Language texts rather than feature sets. Eight papers are discussed: two concern the management and use of documents available on the World Wide Web, and six apply machine learning techniques to a variety of legal applications.
The first issue of Artificial Intelligence and Law journal was published in 1992. This paper provides commentaries on nine significant papers drawn from the Journal’s second decade. Four of the papers relate to reasoning with legal cases, introducing contextual considerations, predicting outcomes on the basis of natural language descriptions of the cases, comparing different ways of representing cases, and formalising precedential reasoning. One introduces a method of analysing arguments that was to become very widely used in AI and Law, namely argumentation schemes. Two relate to ontologies for the representation of legal concepts and two take advantage of the increasing availability of legal corpora in this decade, to automate document summarisation and for the mining of arguments.
In this paper we apply a general account of practical reasoning to arguing about legal cases. In particular, we provide a reconstruction of the reasoning of the majority and dissenting opinions for a particular well-known case from property law. This is done through the use of Belief-Desire-Intention (BDI) agents to replicate the contrasting views involved in the actual decision. This reconstruction suggests that the reasoning involved can be separated into three distinct levels: factual and normative levels and a level connecting the two, with conclusions at one level forming premises at the next. We begin by summarising our general approach, which uses instantiations of an argumentation scheme to provide presumptive justifications for actions, and critical questions to identify arguments which attack these justifications. These arguments and attacks are organised into argumentation frameworks to identify the status of individual arguments. We then discuss the levels of reasoning that occur in this reconstruction and the properties and significance of each of these levels. We illustrate the different levels with short examples and also include a discussion of the role of precedents within these levels of reasoning.
The first issue of _Artificial Intelligence and Law_ journal was published in 1992. This paper discusses several topics that relate more naturally to groups of papers than a single paper published in the journal: ontologies, reasoning about evidence, the various contributions of Douglas Walton, and the practical application of the techniques of AI and Law.
In this paper, we present a particular role for abductive reasoning in law by applying it in the context of an argumentation scheme for practical reasoning. We present a particular scheme, based on an established scheme for practical reasoning, that can be used to reason abductively about how an agent might have acted to reach a particular scenario, and the motivations for doing so. Plausibility here depends on a satisfactory explanation of why this particular agent followed these motivations in the particular situation. The scheme is given a formal grounding in terms of action-based alternating transition systems and we illustrate the approach with a running legal example.
In recent years several proposals to view reasoning with legal cases as theory construction have been advanced. The most detailed of these is that of Bench-Capon and Sartor, which uses facts, rules, values and preferences to build a theory designed to explain the decisions in a set of cases. In this paper we describe CATE (CAse Theory Editor), a tool intended to support the construction of theories as described by Bench-Capon and Sartor, and which produces executable code corresponding to a theory. CATE has been used in a series of experiments intended to explore a number of issues relating to such theories, including how the theories should be constructed, how sets of values should be compared, and the representation of cases using structured values as opposed to factors.
There is a growing interest in how people conceptualise the legal domain for the purpose of legal knowledge systems. In this paper we discuss four such conceptualisations (referred to as ontologies): McCarty's language for legal discourse, Stamper's norma formalism, Valente's functional ontology of law, and the ontology of Van Kralingen and Visser. We present criteria for a comparison of the ontologies and discuss the strengths and weaknesses of the ontologies in relation to these criteria. Moreover, we critically review the criteria.
The third of Berman and Hafner’s early nineties papers on reasoning with legal cases concerned temporal context, in particular the evolution of case law doctrine over time in response to new cases and against a changing background of social values and purposes. In this paper we consider the ways in which changes in case law doctrine can be accommodated in a recently proposed methodology for encapsulating case law theories, and relate these changes to the sources of change identified by Berman and Hafner.
Governments and other groups interested in the views of citizens require the means to present justifications of proposed actions, and the means to solicit public opinion concerning these justifications. Although Internet technologies provide the means for such dialogues, system designers usually face a choice between allowing unstructured dialogues, through, for example, bulletin boards, or requiring citizens to acquire a knowledge of some argumentation schema or theory, as in, for example, ZENO. Both of these options present usability problems. In this paper, we describe an implemented system called PARMENIDES which allows structured argument over a proposed course of action, without requiring knowledge of the underlying argumentation theory.
We describe PADUA, a protocol designed to support two agents debating a classification by offering arguments based on association rules mined from individual datasets. We motivate the style of argumentation supported by PADUA, and describe the protocol. We discuss the strategies and tactics that can be employed by agents participating in a PADUA dialogue. PADUA is applied to a typical problem in the classification of routine claims for a hypothetical welfare benefit. We particularly address the problems that arise from the extensive number of misclassified examples typically found in such domains, where the high error rate is a widely recognised problem. We give examples of the use of PADUA in this domain, and explore in particular the effect of intermediate predicates. We have also done a large scale evaluation designed to test the effectiveness of using PADUA to detect misclassified examples, and to provide a comparison with other classification systems.
In this paper we describe AGATHA, a program designed to automate the process of theory construction in case based domains. Given a seed case and a number of precedent cases, the program uses a set of argument moves to generate a search space for a dialogue between the parties to the dispute. Each move is associated with a set of theory constructors, and thus each point in the space can be associated with a theory intended to explain the seed case and the other cases in the domain. The space is large and so a heuristic search method is needed. This paper describes two methods based on A* and alpha/beta pruning and also a series of experiments designed to explore the appropriateness of different evaluation functions, the most useful precedents to use as seed cases and the quality of the resulting theories.
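The best-first search that AGATHA applies to its space of argument moves follows the standard A* pattern. Below is a generic sketch in Python; the graph, the edge costs and the zero heuristic in the usage example are hypothetical stand-ins, not AGATHA's actual move space or theory evaluation function.

```python
import heapq

# Generic A* best-first search: expand the node with the lowest
# estimated total cost f(n) = g(n) + h(n), where g is cost so far
# and h is an admissible heuristic estimate of remaining cost.

def a_star(start, goal, neighbours, h):
    """neighbours(n) yields (next_node, edge_cost) pairs; returns a path or None."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step in neighbours(node):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + h(nxt), new_cost, nxt, path + [nxt]))
    return None

# Hypothetical toy graph: s -> a -> g costs 4; s -> b -> g costs 5.
edges = {"s": [("a", 1), ("b", 4)], "a": [("g", 3)], "b": [("g", 1)]}
print(a_star("s", "g", lambda n: edges.get(n, []), lambda n: 0))  # ['s', 'a', 'g']
```

With the heuristic fixed at zero this reduces to Dijkstra's algorithm; the choice of evaluation function is exactly what the paper's experiments explore.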
In this paper I shall discuss the notion of argument, and the importance of argument in AI and Law. I shall distinguish four areas where argument has been applied: in modelling legal reasoning based on cases; in the presentation and explanation of results from a rule based legal information system; in the resolution of normative conflict and problems of non-monotonicity; and as a basis for dialogue games to support the modelling of the process of argument. The study of argument is held to offer prospects of real progress in the field of AI and Law, and the purpose of this paper is to provide an overview of this work, and the connections between the various strands.
In this paper we discuss the application of a new machine learning approach – Argument Based Machine Learning – to the legal domain. An experiment using a dataset which has also been used in previous experiments with other learning techniques is described, and a comparison with previous experiments is made. We also tested this method for its robustness to noise in learning data. Argumentation based machine learning is particularly suited to the legal domain as it makes use of the justifications of decisions which are available. Importantly, where a large number of decided cases are available, it provides a way of identifying which need to be considered. Using this technique, only decisions which will have an influence on the rules being learned are examined.
Stories can be powerful argumentative vehicles, and they are often used to present arguments from analogy, most notably as parables, fables or allegories where the story invites the hearer to infer an important claim of the argument. Case Based Reasoning in Law has many similar features: the current case is compared to previously decided cases, and if the similarity between the previous and current cases is deemed sufficient, a similar conclusion can be drawn for the current case. In this article, we want to take a further step towards computationally modelling the connection between stories and argumentation in analogical reasoning. We show how story schemes can be used to investigate and determine story similarity, and how the point of a story – that is, the conclusion that the storyteller intends the hearer to draw – can be likened to the ratio decidendi in a legal case. Finally, we present some formal tools for modelling stories based on computational models of practical reasoning.
Norms provide a valuable mechanism for establishing coherent cooperative behaviour in decentralised systems in which there is no central authority. One of the most influential formulations of norm emergence was proposed by Axelrod (Am Polit Sci Rev 80:1095–1111, 1986). This paper provides an empirical analysis of aspects of Axelrod’s approach, by exploring some of the key assumptions made in previous evaluations of the model. We explore the dynamics of norm emergence and the occurrence of norm collapse when applying the model over extended durations. It is this phenomenon of norm collapse that can motivate the emergence of a central authority to enforce laws and so preserve the norms, rather than relying on individuals to punish defection. Our findings identify characteristics that significantly influence norm establishment using Axelrod’s formulation, but are likely to be of importance for norm establishment more generally. Moreover, Axelrod’s model suffers from significant limitations in assuming that private strategies of individuals are available to others, and that agents are omniscient in being aware of all norm violations and punishments. Because this is an unreasonable expectation, the approach does not lend itself to modelling real-world systems such as online networks or electronic markets. In response, the paper proposes alternatives to Axelrod’s model, by replacing the evolutionary approach, enabling agents to learn, and by restricting the metapunishment of agents to cases where the original defection is observed, in order to be able to apply the model to real-world domains. This work can also help explain the formation of a “social contract” to legitimate enforcement by a central authority. (shrink)
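One round of Axelrod's norms game can be sketched as follows. The payoff values follow Axelrod's 1986 parameters, but the simplified loop (one defection opportunity per agent, no metanorms or selection step) is our illustration, not the full model evaluated in the paper.

```python
import random

# Sketch of a single round of Axelrod's norms game. Payoffs T (temptation),
# H (hurt to others), P (punishment) and E (enforcement cost) follow
# Axelrod (1986); the simplified round structure is illustrative only.
T, H, P, E = 3, -1, -9, -2

def play_round(agents, rng):
    """agents: list of dicts with 'boldness' and 'vengefulness' in [0, 1]."""
    scores = [0.0] * len(agents)
    for i, agent in enumerate(agents):
        seen = rng.random()               # chance this defection is observed
        if agent["boldness"] > seen:      # defect when unlikely to be seen
            scores[i] += T
            for j, other in enumerate(agents):
                if j == i:
                    continue
                scores[j] += H            # every other agent is hurt
                # an observer who sees the defection punishes with
                # probability equal to its vengefulness
                if rng.random() < seen and rng.random() < other["vengefulness"]:
                    scores[i] += P
                    scores[j] += E
    return scores
```

In the full model a selection step copies the strategies of high scorers (with mutation) into the next generation, and the metanorm additionally sanctions observers who fail to punish; it is that metapunishment mechanism whose observability assumptions the paper revises.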
The design and analysis of norms is a somewhat neglected topic in AI and Law, but this is not so in other areas of Computer Science. In recent years powerful techniques to model and analyse norms have been developed in the Multi-Agent Systems community, driven both by the practical need to regulate electronic institutions and open agent systems, and by a theoretical interest in mechanism design and normative systems. Agent based techniques often rely heavily on enforcing norms using the software to prevent violation, but I will also discuss the use of sanctions and rewards, and the conditions under which compliance by autonomous agents (including humans) can be expected or encouraged without sanctions or rewards. In the course of the paper a suggested framework for the exploration of these issues is developed.
In this paper I argue that to explain and resolve some kinds of disagreement we need to go beyond what logic alone can provide. In particular, following Perelman, I argue that we need to consider how arguments are ascribed different strengths by different audiences, according to how accepting these arguments promotes values favoured by the audience to which they are addressed. I show how we can extend the standard framework for modelling argumentation systems to allow different audiences to be represented. I also show how this formalism can explain how some disputes can be resolved while in others the parties can only agree to differ. I illustrate this by consideration of a legal example. Finally, I make some suggestions as to where these values come from, and how they can be used to explain differences across jurisdictions, and changes in views over time.
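The audience-relative evaluation described above can be sketched in a few lines: in a value-based argumentation framework, an attack succeeds as a defeat for a given audience unless that audience prefers the value promoted by the attacked argument to the value of the attacker. The arguments, values and audience orderings below are hypothetical illustrations.

```python
# Sketch of audience-relative defeat in a value-based argumentation
# framework (VAF): an attack (a, b) is filtered out for an audience that
# ranks the value of b strictly above the value of a.

def defeats(attacks, value, prefers):
    """Keep the attacks that succeed for an audience.
    prefers(v1, v2) is True when the audience ranks v1 strictly above v2."""
    return {(a, b) for (a, b) in attacks
            if not prefers(value[b], value[a])}

# Hypothetical example: two mutually attacking arguments promoting
# different values.
attacks = {("A", "B"), ("B", "A")}
value = {"A": "life", "B": "property"}

aud1 = lambda v1, v2: (v1, v2) == ("life", "property")      # ranks life first
aud2 = lambda v1, v2: (v1, v2) == ("property", "life")      # ranks property first

print(defeats(attacks, value, aud1))   # only A defeats B for audience 1
print(defeats(attacks, value, aud2))   # only B defeats A for audience 2
```

Since the two audiences end up with different defeat relations, each can consistently accept a different argument: the dispute is explained rather than resolved, which is the "agree to differ" outcome discussed in the abstract.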
Argumentation Frameworks provide a fruitful basis for exploring issues of defeasible reasoning. Their power largely derives from the abstract nature of the arguments within the framework, where arguments are atomic nodes in an undifferentiated relation of attack. This abstraction conceals different senses of argument, namely a single-step reason to a claim, a series of reasoning steps to a single claim, and reasoning steps for and against a claim. Concrete instantiations encounter difficulties and complexities as a result of conflating these senses. To distinguish them, we provide an approach to instantiating AFs in which the nodes are restricted to literals and rules, encoding the underlying theory directly. Arguments in these senses emerge from this framework as distinctive structures of nodes and paths. As a consequence of the approach, we reduce the effort of computing argumentation extensions, in contrast to other approaches. Our framework retains the theoretical and computational benefits of abstract argumentation frameworks.
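The extensions mentioned above are the standard semantics of abstract argumentation. As a minimal illustration, the grounded extension can be computed as the least fixpoint of the characteristic function, accepting arguments whose attackers are all defeated. The node labels below are opaque placeholders; in the paper's approach they would be the literals and rules of the underlying theory.

```python
# Sketch: grounded extension of an abstract argumentation framework,
# computed iteratively. An argument is accepted once all its attackers
# are rejected; an argument is rejected once some accepted argument
# attacks it.

def grounded(args, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in rejected:
                continue
            if attackers[a] <= rejected:      # every attacker already defeated
                accepted.add(a)
                changed = True
        newly = {a for a in args
                 if a not in rejected and attackers[a] & accepted}
        if newly - rejected:
            rejected |= newly
            changed = True
    return accepted

# a attacks b, b attacks c: a is unattacked, so a is in; b is out;
# c is reinstated because its only attacker is defeated.
print(grounded({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

On a mutual attack with no unattacked argument the grounded extension is empty, reflecting the sceptical character of this semantics.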
There has been much talk of the need to build intermediate models of the expertise required preparatory to constructing a knowledge-based system in the legal domain. Such models offer advantages for verification, validation, maintenance and reuse. As yet, however, few such models have been reported at a useful level of detail. In this paper we describe a method for conceptualising legal domains as well as its application to a substantial fragment of the Dutch Unemployment Benefits Act (DUBA). We first discuss the intermediate models (called expertise models), then present a three-stage method for their construction, drawing on the CommonKADS work in knowledge acquisition, conceptual models of statute law, and the KANT method of knowledge analysis. Subsequently, we describe how these techniques were applied to the DUBA, and provide detailed examples of the resulting model. Finally, conclusions on the framework and guidelines are given as well as means of recording and presenting the various design choices.
Hypertext and knowledge based systems can be viewed as complementary technologies, which if combined into a composite system may be able to yield a whole which is greater than the sum of the parts. To gain the maximum benefits, however, we need to think about how to harness this potential synergy. This will mean devising new styles of system, rather than merely seeking to enhance the old models. In this paper we describe our model for coupling hypertext and a knowledge based system, and then go on to describe two prototype systems which attempt to exploit this composite framework. The first application concerns animated hypertext which accords the text a central role whilst giving access to all the advantages of a knowledge based system. The second suggests how we can augment the hypertext by providing links which reflect the conceptual model of a knowledge based system in the domain, so as to provide a more structured traversal of the text.
This paper provides a formal description of two legal domains. In addition, we describe the generation of various artificial datasets from these domains and explain the use of these datasets in previous experiments aligning learning and reasoning. These resources are made available for the further investigation of connections between arguments, cases and rules. The datasets are publicly available at https://github.com/CorSteging/LegalResources.
A framework to support “Arguing from Experience” using groups of collaborating agents (termed participant agents/players) is described. The framework is an extension of the PISA multi-party arguing from experience framework. The original version of PISA allowed n participants to promote n goals (one each) for a given example. The described extension of PISA allows individuals with the same goals to pool their resources by forming “groups”. The framework is fully described and its effectiveness illustrated using a number of classification scenarios. The main finding is that by using groups more accurate results can be obtained than when agents operate in isolation.
The first issue of _Artificial Intelligence and Law_ journal was published in 1992. This special issue marks the 30th anniversary of the journal by reviewing the progress of the field through thirty commentaries on landmark papers and groups of papers from that journal.