Work on a computer program called SMILE + IBP (SMart Index Learner Plus Issue-Based Prediction) bridges case-based reasoning and extracting information from texts. The program addresses a technologically challenging task that is also very relevant from a legal viewpoint: to extract information from textual descriptions of the facts of decided cases and apply that information to predict the outcomes of new cases. The program attempts to automatically classify textual descriptions of the facts of legal problems in terms of Factors, a set of classification concepts that capture stereotypical fact patterns that affect the strength of a legal claim, here trade secret misappropriation. Using these classifications, the program can evaluate and explain predictions about a problem’s outcome given a database of previously classified cases. This paper provides an extended example illustrating both functions, prediction by IBP and text classification by SMILE, and reports empirical evaluations of each. While IBP’s results are quite strong, and SMILE’s much weaker, SMILE + IBP still has some success predicting and explaining the outcomes of case scenarios input as texts. It marks the first time to our knowledge that a program can reason automatically about legal case texts.
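The idea of predicting an outcome from a case's Factors can be sketched in a few lines. The following is a deliberately simplified illustration, not IBP's actual model (which reasons issue by issue): each decided case is a set of applicable Factors plus an outcome, and a new problem is decided by the case whose factor set overlaps it most. The factor labels are illustrative stand-ins, not the program's real knowledge base.

```python
# Hypothetical sketch of factor-based outcome prediction (NOT IBP's
# actual issue-based algorithm): each case is (set of factors, outcome).
CASES = [
    ({"F6_security_measures", "F15_unique_product"}, "plaintiff"),
    ({"F1_disclosure_in_negotiations", "F15_unique_product"}, "plaintiff"),
    ({"F1_disclosure_in_negotiations", "F27_public_disclosure"}, "defendant"),
]

def predict(problem_factors):
    """Predict by the outcome of the most-overlapping prior case."""
    overlap = lambda case: len(problem_factors & case[0])
    best = max(CASES, key=overlap)
    # Abstain when the problem shares no factors with any prior case.
    return best[1] if overlap(best) > 0 else "abstain"

print(predict({"F6_security_measures", "F15_unique_product"}))
```

A real system would of course weight factors by the party they favor and explain which precedents drove the prediction; this sketch only shows the retrieval-and-compare skeleton.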
We provide a retrospective of 25 years of the International Conference on AI and Law, which was first held in 1987. Fifty papers have been selected from the thirteen conferences and each of them is described in a short subsection individually written by one of the 24 authors. These subsections attempt to place the paper discussed in the context of the development of AI and Law, while often offering some personal reactions and reflections. As a whole, the subsections build into a history of the last quarter century of the field, and provide some insights into where it has come from, where it is now, and where it might go.
The first issue of Artificial Intelligence and Law journal was published in 1992. This paper offers some commentaries on papers drawn from the Journal’s third decade. They indicate a major shift within Artificial Intelligence, both generally and in AI and Law: away from symbolic techniques to those based on Machine Learning approaches, especially those based on Natural Language texts rather than feature sets. Eight papers are discussed: two concern the management and use of documents available on the World Wide Web, and six apply machine learning techniques to a variety of legal applications.
In this work we study, design, and evaluate computational methods to support interpretation of statutory terms. We propose a novel task of discovering sentences for argumentation about the meaning of statutory terms. The task models the analysis of past treatment of statutory terms, an exercise lawyers routinely perform using a combination of manual and computational approaches. We treat the discovery of sentences as a special case of ad hoc document retrieval. The specifics include retrieval of short texts, specialized document types, and, above all, the unique definition of document relevance provided in detailed annotation guidelines. To support our experiments we assembled a data set comprising 42 queries which we plan to release to the public in the near future in order to support further research. Most importantly, we investigate the feasibility of developing a system that responds to a query with a list of sentences that mention the term in a way that is useful for understanding and elaborating its meaning. This is accomplished by a systematic assessment of different features that model the sentences’ usefulness for interpretation. We combine features into a compound measure that accounts for multiple aspects. The definition of the task, the assembly of the data set, and the detailed task analysis provide a solid foundation for employing a learning-to-rank approach.
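Combining per-sentence features into a compound usefulness measure can be illustrated with a minimal sketch. The feature names and weights below are hypothetical, chosen only to show the shape of the approach (a hand-weighted linear combination standing in for a learned ranking function); they are not the paper's actual feature set.

```python
# Illustrative sketch: rank sentences mentioning a statutory term by a
# weighted combination of usefulness features. Feature names and weights
# are hypothetical, not the authors' actual model.

def compound_score(features, weights):
    """Linear combination of per-sentence feature values."""
    return sum(weights[name] * value for name, value in features.items())

weights = {"elaborates_meaning": 2.0, "mentions_term": 1.0, "mere_citation": -1.5}

sentences = [
    ("The term requires a shared profit motive among the enterprises.",
     {"elaborates_meaning": 1.0, "mentions_term": 1.0, "mere_citation": 0.0}),
    ("See the statute (defining the term).",
     {"elaborates_meaning": 0.0, "mentions_term": 1.0, "mere_citation": 1.0}),
]

# Sentences that elaborate the term's meaning rank above bare citations.
ranked = sorted(sentences, key=lambda s: compound_score(s[1], weights), reverse=True)
for text, feats in ranked:
    print(round(compound_score(feats, weights), 2), text)
```

In a learning-to-rank setting the weights would be fit from annotated relevance judgments rather than set by hand, but the scoring-and-sorting structure is the same.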
In this short note, we discuss several aspects of “dimensions” and the related construct of “factors”. We concentrate on those aspects that are relevant to articles in this special issue, especially those dealing with the analysis of the wild animal cases discussed in Berman and Hafner's 1993 ICAIL article. We review the basic ideas about dimensions, as used in HYPO, and point out differences with factors, as used in subsequent systems like CATO. Our goal is to correct certain misconceptions that have arisen over the years.
Reasoners compare problems to prior cases to draw conclusions about a problem and guide decision making. All Case-Based Reasoning (CBR) employs some methods for generalizing from cases to support indexing and relevance assessment and evidences two basic inference methods: constraining search by tracing a solution from a past case or evaluating a case by comparing it to past cases. Across domains and tasks, however, humans reason with cases in subtly different ways evidencing different mixes of and mechanisms for these components. In recent CBR research in Artificial Intelligence (AI), five paradigmatic approaches have emerged: statistically-oriented, model-based, planning/design-oriented, exemplar-based, and adversarial or precedent-based. The paradigms differ in the assumptions they make about domain models, the extent to which they support symbolic case comparison, and the kinds of inferences for which they employ cases.
Assessment in ethics education faces a challenge. From the perspectives of teachers, students, and third-party evaluators like the Accreditation Board for Engineering and Technology and the National Institutes of Health, assessment of student performance is essential. Because of the complexity of ethical case analysis, however, it is difficult to formulate assessment criteria, and to recognize when students fulfill them. Improvement in students’ moral reasoning skills can serve as the focus of assessment. In previous work, Rosa Lynn Pinkus and Claire Gloeckner developed a novel instrument for assessing moral reasoning skills in bioengineering ethics. In this paper, we compare that approach to existing assessment techniques, and evaluate its validity and reliability. We find that it is sensitive to knowledge gain and that independent coders agree on how to apply it.
This article provides an overview of, and thematic justification for, the special issue of the journal of Artificial Intelligence and Law entitled “E-Discovery”. In attempting to define a characteristic “AI & Law” approach to e-discovery, and since a central theme of AI & Law involves computationally modeling legal knowledge, reasoning and decision making, we focus on the theme of representing and reasoning with litigators’ theories or hypotheses about document relevance through a variety of techniques including machine learning. We also identify two emerging techniques for enabling users’ document queries to better express the theories of relevance and connect them to documents: social network analysis and a hypothesis ontology.
This article describes recent jurisprudential accounts of analogical legal reasoning and compares them in detail to the computational model of case-based legal argument in CATO. The jurisprudential models provide a theory of relevance based on low-level legal principles generated in a process of case-comparing reflective adjustment. The jurisprudential critique focuses on the problems of assigning weights to competing principles and dealing with erroneously decided precedents. CATO, a computerized instructional environment, employs Artificial Intelligence techniques to teach law students how to make basic legal arguments with cases. The computational model helps students test legal hypotheses against a database of legal cases, draws analogies to problem scenarios from the database, and composes arguments by analogy with a set of argument moves. The CATO model accounts for a number of the important features of the jurisprudential accounts, including implementing a kind of reflective adjustment. It also avoids some of the problems identified in the critique; for instance, it deals with weights in a non-numeric, context-sensitive manner. The article concludes by describing the contributions AI research can make to jurisprudential investigations of complex cognitive phenomena of legal reasoning. For instance, unlike the jurisprudential models, CATO provides a detailed account of how to generate multiple interpretations of a cited case, downplaying or emphasizing the legal significance of distinctions in terms of the purposes of the law as the argument context demands.
The research described here explores the idea of using Supreme Court oral arguments as pedagogical examples in first year classes to help students learn the role of hypothetical reasoning in law. The article presents examples of patterns of reasoning with hypotheticals in appellate legal argument and in the legal classroom and a process model of hypothetical reasoning that relates them to work in cognitive science and Artificial Intelligence. The process model describes the relationships between an advocate’s proposed test for deciding a case or issue, the facts of the hypothetical and of the case to be decided, and the often conflicting legal principles and policies underlying the issue. The process model of hypothetical reasoning has been partially implemented in a computerized teaching environment, LARGO (“Legal ARgument Graph Observer”) that helps students identify, analyze, and reflect on episodes of hypothetical reasoning in oral argument transcripts. Using LARGO, students reconstruct examples of hypothetical reasoning in the oral arguments by representing them in simple diagrams that focus students on the proposed test, the hypothetical challenge to the test, and the responses to the challenge. The program analyzes the diagrams and provides feedback to help students complete the diagrams and reflect on the significance of the hypothetical reasoning in the argument. The article reports the results of experiments evaluating instruction of first year law students at the University of Pittsburgh using the LARGO program as applied to Supreme Court personal jurisdiction cases. The learning results so far have been mixed. Instruction with LARGO has been shown to help law student volunteers with lower LSAT scores learn skills and knowledge regarding hypothetical reasoning better than a text-based approach, but not when the students were required to participate.
On the other hand, the diagrams students produce with LARGO have been shown to have some diagnostic value, distinguishing among law students on the basis of LSAT scores, posttest performance, and years in law school. This lends support to the underlying model of hypothetical argument and suggests using LARGO as a pedagogically diagnostic tool.
This handbook offers a deep analysis of the main forms of legal reasoning and argumentation from both a logical-philosophical and a legal perspective. These forms are covered in an exhaustive and critical fashion, and the handbook is accordingly divided into three parts: the first introduces and discusses the basic concepts of practical reasoning; the second discusses the main general forms of reasoning and argumentation relevant for legal discourse; the third looks at their application in law as well as at the different areas of legal reasoning. The handbook’s division into three parts reflects its conceptual architecture, since legal reasoning and argumentation are considered in relation to the more general types of reasoning.