This paper proposes several recurrent neural network-based models for recognizing requisite and effectuation (RE) parts in legal texts. First, we propose a modification of the BiLSTM-CRF model that allows the use of external features to improve the performance of deep learning models when large annotated corpora are not available. However, this model can only recognize RE parts that do not overlap. Second, we propose two approaches for recognizing overlapping RE parts: a cascading approach, which uses a sequence of BiLSTM-CRF models, and a unified-model approach with a multilayer BiLSTM-CRF model and a multilayer BiLSTM-MLP-CRF model. Experimental results on two Japanese-law RRE datasets demonstrate the advantages of our proposed models. On the Japanese National Pension Law dataset, our approaches obtained an F1 score of 93.27%, a significant improvement over previous approaches. On the Japanese Civil Code RRE dataset, which is written in English, our approaches produced an F1 score of 78.24% in recognizing RE parts, a significant improvement over strong baselines. In addition, using external features and in-domain pre-trained word embeddings further improved the performance of RRE systems.
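The multilayer models above handle overlap by stacking tag sequences. A minimal sketch of that label scheme (the span format and layer-assignment policy here are assumptions, not the paper's exact encoding): overlapping requisite/effectuation spans are pushed into separate BIO layers, each of which a standard sequence tagger can then predict.

```python
# Hypothetical sketch: encode possibly overlapping requisite (R) and
# effectuation (E) spans as stacked BIO tag layers. An overlapping span
# is placed on the first layer where it does not collide.

def spans_to_layers(n_tokens, spans):
    """spans: list of (start, end_exclusive, label) tuples, possibly overlapping.
    Returns one BIO tag sequence (list of tags) per layer."""
    layers = []
    for start, end, label in spans:
        placed = False
        for layer in layers:
            if all(tag == "O" for tag in layer[start:end]):
                layer[start:end] = [f"B-{label}"] + [f"I-{label}"] * (end - start - 1)
                placed = True
                break
        if not placed:
            layer = ["O"] * n_tokens
            layer[start:end] = [f"B-{label}"] + [f"I-{label}"] * (end - start - 1)
            layers.append(layer)
    return layers
```

With non-overlapping spans this degenerates to a single ordinary BIO sequence, which matches the behaviour of the plain BiLSTM-CRF model.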
We present our method for tackling a legal case retrieval task by encoding documents, summarizing them into a continuous vector space via a phrase-scoring framework based on deep neural networks. In addition, we explore the benefits of combining lexical features with latent features generated by neural networks. Our experiments show that these two kinds of features complement each other to improve retrieval performance. Furthermore, our experimental results suggest the importance of case summarization in two respects: using provided summaries and performing encoded summarization. Our approach achieved F1 scores of 65.6% and 57.6% on the experimental datasets of legal case retrieval tasks.
There are cases in which the literal interpretation of statutes may lead to counterintuitive consequences. When such cases go to high courts, judges may handle these consequences by identifying problematic rule conditions. Given that the law consists of a large number of rule conditions, figuring out which condition is problematic is demanding and exhausting. To address this problem, our work aims to assist judges in civil-law systems in resolving counterintuitive consequences using a logic-program representation of statutes and Legal Debugging. The core principle of Legal Debugging is to cooperate with a user to find a culprit: a root cause of a counterintuitive consequence. This article proposes an algorithm for resolving a culprit. Since statutes are represented by logic rules but changes in the law are initiated by cases, we adopt a prototypical case with judgement specified by a set of rules. Then, to resolve a culprit, we reconstruct the program so that it provides reasons as if case-based reasoning had been applied to a new set of prototypical cases with judgement, including a new set of facts relevant to the case under consideration.
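The culprit-finding dialogue can be caricatured in a few lines. This is a toy sketch under assumed data structures (rules as a head-to-conditions map, the user's answers as a predicate), not the article's actual logic-program encoding: starting from a conclusion the user finds counterintuitive, descend into whichever supporting condition is also counterintuitive, until reaching a point whose support is entirely as expected.

```python
# Toy sketch of culprit finding: the culprit is the deepest conclusion that
# is unexpected to the user while everything supporting it is expected.

def find_culprit(rules, facts, goal, unexpected):
    """rules: dict mapping a head atom to its list of condition atoms;
    facts: set of atoms taken as given;
    unexpected(atom) -> bool stands in for the user's answers."""
    body = rules.get(goal)
    if body is None:
        return goal  # a plain fact with no rule behind it: it is the culprit
    for cond in body:
        if (cond in facts or cond in rules) and unexpected(cond):
            return find_culprit(rules, facts, cond, unexpected)
    return goal  # all conditions are as expected: this rule's conclusion is the culprit
```

For example, if "liable" follows from "adult" and "caused_damage", and the user says the sub-conclusion "adult" is itself counterintuitive, the search descends to "adult"; if all conditions are expected, the rule for "liable" itself is flagged.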
Case law retrieval is the task of locating the cases truly relevant to an input query case. Unlike information retrieval over general texts, this task is more complex, comprising two phases (legal case retrieval and legal case entailment), and is much harder for a number of reasons. First, both the query and the candidate cases are long documents consisting of several paragraphs, which makes them difficult to model with representation learning methods that typically restrict input length. Second, the concept of relevancy in this domain is defined by legal relations that go beyond lexical or topical relevance, so ordinary text matching does not work. Third, building a large and accurate legal case dataset requires substantial effort and expertise, an obvious obstacle to creating enough data for training deep retrieval models. In this paper, we propose a novel approach, called the supporting model, that can deal with both phases. The underlying ideas are the case–case supporting relation and a paragraph–paragraph as well as decision–paragraph matching strategy. In addition, we propose a method for automatically creating a large weakly labeled dataset to overcome the lack of data. Experiments show that our solution achieves state-of-the-art results for both the case retrieval and case entailment phases.
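One way to picture the paragraph-level matching strategy is as an aggregation over paragraph pairs. This is an illustrative sketch, not the authors' exact scoring function: each query paragraph finds its best-matching candidate paragraph, and the candidate case is scored by how well it supports the query paragraphs on average; the `similarity` argument is a stand-in for any paragraph matcher (lexical or neural).

```python
# Illustrative paragraph-level aggregation for case-case scoring
# (assumed formulation, not the paper's exact model).

def case_score(query_paragraphs, cand_paragraphs, similarity):
    # For each query paragraph, take its best-matching candidate paragraph,
    # then average: the candidate "supports" the query if most query
    # paragraphs find strong evidence somewhere in it.
    best = [max(similarity(q, c) for c in cand_paragraphs)
            for q in query_paragraphs]
    return sum(best) / len(best)

def jaccard(a, b):
    """Toy paragraph similarity: Jaccard overlap of token sets."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)
```

In practice the toy `jaccard` would be replaced by a trained matcher; the aggregation shape, however, is what lets long documents be handled paragraph by paragraph despite input-length limits.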
Formalising deontic concepts, such as obligation, prohibition and permission, is normally carried out in a modal logic with a possible-worlds semantics, in which some worlds are better than others. The main focus in these logics is on inferring logical consequences, for example inferring that the obligation O q is a logical consequence of the obligations O p and O(p → q). In this paper we propose a non-modal approach in which obligations are preferred ways of satisfying goals expressed in first-order logic. To say that p is obligatory, but may be violated, resulting in a less-than-ideal situation s, means that the task is to satisfy the goal p ∨ s, and that models in which p is true are preferred to models in which s is true. Whereas in modal logic the preference relation between possible worlds is part of the semantics of the logic, in this non-modal approach the preference relation between first-order models is external to the logic. Although our main focus is on satisfying goals, we also formulate a notion of logical consequence, comparable to the notion of logical consequence in modal deontic logic. In this formalisation, an obligation O p is a logical consequence of goals G when p is true in all best models of G. We show how this non-modal approach to deontic concepts deals with the problems of contrary-to-duty obligations and normative conflicts, and argue that the approach is useful for many other applications, including abductive explanations, defeasible reasoning, combinatorial optimisation, and reactive systems of the production-system variety.
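The definition "O p holds when p is true in all best models of G" can be checked mechanically in the propositional case. A minimal sketch, with the goal p ∨ s and the preference (models satisfying p beat models falsifying it) hard-coded for the contrary-to-duty example above; the function names are ours, not the paper's:

```python
# Minimal propositional sketch: obligations as truth in all best models.
from itertools import product

def models(atoms, goal):
    """All truth assignments over `atoms` satisfying `goal` (a predicate)."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))
            if goal(dict(zip(atoms, vals)))]

def best_models(ms, better):
    """Models not strictly beaten by any other model under `better`."""
    return [m for m in ms if not any(better(n, m) for n in ms)]

def obligatory(atom, best):
    """O atom holds iff atom is true in every best model."""
    return all(m[atom] for m in best)

# Goal: p ∨ s; preference: a model making p true beats one making p false.
ms = models(["p", "s"], lambda m: m["p"] or m["s"])
best = best_models(ms, lambda n, m: n["p"] and not m["p"])
```

Here the preference relation lives entirely outside the logic, as a Python comparison between models, mirroring the paper's point that the preference between first-order models is external to the object language.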
Multimorbidity, the presence of multiple health conditions that must be addressed together, is a particularly difficult situation in patient management, raising issues such as the use of multiple drugs and drug–disease interactions. Clinical guidelines are evidence-based statements that provide recommendations for specific health conditions but are unfit for the management of multiple co-occurring conditions. To leverage these evidence-based documents, it becomes necessary to combine them. In this paper, using a case example, we explore the use of argumentation schemes to reason about and combine evidence-based recommendations from clinical guidelines, their expected effects, the conflicts stemming from those recommendations, and preferences regarding treatment goals. We compare the results of reasoning with the schemes for practical reasoning and argument from negative consequences in the Carneades Argumentation System with those of ASPIC-G, an extension of the artificial intelligence system ASPIC+.
Natural language processing techniques have recently contributed more and more to the analysis of legal documents, supporting the implementation of laws and rules by computer. Previous approaches to representing a legal sentence were often based on logical patterns that capture the relations between concepts in the sentence, where each concept often consists of multiple words. Such representations lack semantic information at the word level. In our work, we aim to overcome this shortcoming by representing legal texts as abstract meaning representation (AMR), a graph-based semantic representation that has recently gained considerable popularity in the NLP community. We present our study of AMR parsing and AMR-to-text generation specifically for the legal domain. We also introduce JCivilCode, a human-annotated legal AMR dataset created and verified by a group of linguistic and legal experts. We conduct an empirical evaluation of various approaches to parsing and generating AMR on our dataset and discuss the current challenges. Based on our observations, we propose a domain adaptation method applied in the training and decoding phases of a neural AMR-to-text generation model. Our method improves the quality of text generated from AMR graphs compared to the baseline model. (Earlier versions of parts of this work were published in 2018, and as "Legal Text Generation from Meaning Representation" at the 32nd International Conference on Legal Knowledge and Information Systems, 2019.)
This paper analyses and compares some of the automated reasoners that have been used in recent research on compliance checking. Although the list of considered reasoners is not exhaustive, we believe our analysis is representative enough to take stock of the current state of the art in the topic. We are interested here in formalizations at the _first-order_ level. Past literature on normative reasoning mostly focuses on the _propositional_ level; however, the propositional level is of little use for concrete LegalTech applications, in which compliance checking must be enforced on (large) sets of individuals. Furthermore, we are interested in technologies that are _freely available_ and that can be further investigated and compared by the scientific community. In other words, this paper does not consider technologies that are only employed in industry and/or whose source code is inaccessible. The paper formalizes a selected use case in the considered reasoners and compares the implementations, including simulations on shared synthetic datasets. The comparison highlights that a lot of further research is still needed to integrate the benefits offered by the different reasoners into a single standardized first-order framework suitable for LegalTech applications. All source code is freely available at https://github.com/liviorobaldo/compliancecheckers, together with instructions for locally reproducing the simulations.
This book constitutes the thoroughly refereed joint post-proceedings of three international workshops organized by the Japanese Society for Artificial Intelligence, held in Tokyo, Japan in June 2006 during the 20th Annual Conference JSAI 2006. The volume opens with eight award-winning papers from the JSAI 2006 main conference, presented alongside 21 revised full workshop papers, carefully reviewed and selected for inclusion in the volume.
In a court of law, a person can be punished for attempting to commit a crime. An open issue in the study of Artificial Intelligence and Law is whether the law of attempts can be formally modelled. There are distinct legal rules for determining attempted crime, of which the last-act rule (also called the proximity rule) is the strictest. In this paper, we provide a formal model of the last-act rule using structured argumentation.