Computational approaches to the law have frequently been characterized as formalistic implementations of the syllogistic model of legal cognition, and thus as unable to capture what legal reasoning actually involves: using insufficient or contradictory data, making analogies, learning through examples and experience, and applying vague and imprecise standards. We argue that, on the contrary, studies on neural networks and fuzzy reasoning show how AI & law research can go beyond syllogism and, in doing so, can provide substantial contributions to the law.
Profitability and growth have traditionally been held up as the criteria of a good firm. Recently, however, high profitability and high growth potential have become insufficient criteria, because the social influence exerted by firms has grown extremely significant. In this paper, strong social relationships are added to the list of criteria. Empirical studies of the corporate social performance versus corporate financial performance (CSP–CFP) relationship that consider social relationships are very limited in Japan, and there are no definite conclusions worldwide, owing to scant data and inappropriate methods, especially methods that presuppose the linear hypothesis on which these studies are based. In this paper, the CSP–CFP relationship is analyzed with an artificial neural network model, which can deal with a non-linear relationship, using 10-year follow-up survey data.
The literature on common pool resource (CPR) governance lists numerous factors that influence whether a given CPR system achieves long-term ecological sustainability. Up to now there is no comprehensive model that integrates these factors or explains success within or across cases and sectors. Difficulties include the absence of large-N studies (Poteete 2008), the incomparability of single case studies, and the interdependence of factors (Agrawal and Chhatre 2006). We propose (1) a synthesis of 24 success factors based on the current SES framework and a literature review; (2) the application of neural networks to a database of CPR management case studies in an attempt to test the viability of this synthesis. This method allows us to obtain an implicit, quantitative, and rather precise model of the interdependencies in CPR systems. Given such a model, every success factor in each case can be manipulated separately, yielding different predictions for success. This could become a fast and inexpensive way to analyze, predict, and optimize performance for communities worldwide facing CPR challenges. Existing theoretical frameworks could be improved as well.
This paper examines the use of connectionism (neural networks) in modelling legal reasoning. I discuss how implementations of neural networks have failed to account for legal theoretical perspectives on adjudication. I criticise the use of neural networks in law, not because connectionism is inherently unsuitable to law, but rather because it has been done so poorly to date. The paper reviews a number of legal theories which provide a grounding for the use of neural networks in law. It then examines some implementations undertaken in law and criticises their legal theoretical naïveté. Finally, it presents lessons from these implementations which researchers must bear in mind if they wish to build neural networks that are justified by legal theories.
I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that they use this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar. Recent simulations that were claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382] confounded syntactic types with speech sounds and did not perform standard statistical tests of results.
Many kinds of creativity result from combination of mental representations. This paper provides a computational account of how creative thinking can arise from combining neural patterns into ones that are potentially novel and useful. We defend the hypothesis that such combinations arise from mechanisms that bind together neural activity by a process of convolution, a mathematical operation that interweaves structures. We describe computer simulations that show the feasibility of using convolution to produce emergent patterns of neural activity that can support cognitive and emotional processes underlying human creativity.
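A minimal numeric sketch of the binding operation the abstract describes: circular convolution combines two random activity vectors into a third of the same dimensionality, and circular correlation approximately inverts the binding. The vector dimension and the FFT-based implementation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def circular_convolution(a, b):
    """Bind two activity patterns into one via circular convolution
    (computed efficiently in the Fourier domain)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circular_correlation(a, c):
    """Approximately unbind: recover b from c = a (*) b."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(c)))

rng = np.random.default_rng(0)
n = 512
a = rng.normal(0, 1 / np.sqrt(n), n)  # random pattern, expected unit norm
b = rng.normal(0, 1 / np.sqrt(n), n)

c = circular_convolution(a, b)        # combined pattern, same dimensionality
b_hat = circular_correlation(a, c)    # noisy reconstruction of b

similarity = np.dot(b, b_hat) / (np.linalg.norm(b) * np.linalg.norm(b_hat))
print(round(similarity, 2))  # substantially positive; unrelated vectors score near 0
```

The point of the sketch is that the bound pattern c is itself a distributed vector, so it can be fed back into further combinations, which is what makes convolution attractive as a combination mechanism.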
Some philosophers suggest that the development of scientific knowledge is a kind of Darwinian process. The process of discovery, however, is one problematic element of this analogy. I compare Herbert Simon's attempt to simulate scientific discovery in a computer program to recent connectionist models that were not designed for that purpose, but which provide useful cases to help evaluate this aspect of the analogy. In contrast to the classic A.I. approach Simon used, ``neural networks'' contain no explicit protocols, but are generic learning systems built on the model of the interconnections of neurons in the brain. I describe two cases that take the connectionist approach a step further by using genetic algorithms, a form of evolutionary computation that explicitly models Darwinian mechanisms. These cases show that Darwinian mechanisms can make novel discoveries of complex, previously unknown patterns. With some caveats, they lend support to evolutionary epistemology.
In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call ``the economy problem'': the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and an organism. My approach in the current paper is to explore this and other key themes in neurosemantics through the use of computer models of neural networks embodied and evolved in virtual organisms. The models allow for the laying bare of the causal economies of entire yet simple artificial organisms, so that the relations between the neural bases of, for instance, representation in perception and memory can be regarded in the context of an entire organism. On the basis of these simulations, I argue for an account of neurosemantics adequate for the solution of the economy problem.
The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks. Stability must be the key ingredient in the solution of a problem by a neural network without external intervention. Infinite discrete networks seem to be the proper objects of study for a theory of neural computability which aims at characterizing problems solvable, in principle, by a neural network. Precise definitions of such problems and their solutions are given. Some consequences are explored, in particular, the neural unsolvability of the Stability Problem for neural networks.
The prime objective of this paper is to conduct phoneme categorization experiments for Indian languages. In this direction, a major effort has been made to categorize Hindi phonemes using a time-delay neural network (TDNN) and to compare the recognition scores with those for other languages. A total of six neural nets, aimed at the major coarse phonetic classes in Hindi, were trained. Evaluation of each net on 350 training tokens and 40 test tokens revealed a 99% recognition rate for vowel classes, 87% for unvoiced stops, 82% for voiced stops, 94.7% for semivowels, 98.1% for nasals, and 96.4% for fricatives. A new feature vector normalisation technique has been proposed to improve the recognition scores.
Current cognitive science models of perception and action assume that the objects that we move toward and perceive are represented as determinate in our experience of them. A proper phenomenology of perception and action, however, shows that we experience objects indeterminately when we are perceiving them or moving toward them. This indeterminacy, as it relates to simple movement and perception, is captured in the proposed phenomenologically based recurrent network models of brain function. These models provide a possible foundation from which predicative structures may arise as an emergent phenomenon without the positing of a representing subject. These models go some way in addressing the dual constraints of phenomenological accuracy and neurophysiological plausibility that ought to guide all projects devoted to discovering the physical basis of human experience.
Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary systems are instances of such interpreted dynamical systems, and thus our results entail that each of them may be described correctly and, in a sense, even completely by qualitative laws that obey the rules of a nonmonotonic logic system.
Paul Feyerabend recommended the methodological policy of proliferating competing theories as a means to uncovering new empirical data, and thus as a means to increase the empirical constraints that all theories must confront. Feyerabend's policy is here defended as a clear consequence of connectionist models of explanatory understanding and learning. An earlier connectionist "vindication" is criticized, and a more realistic and penetrating account is offered in terms of the computationally plastic cognitive profile displayed by neural networks with a recurrent architecture.
Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use also leads us into the problems of induction and probability. Ever since David Hume expressed his famous doubts about induction, the principles of scientific inference have been a central concern for philosophers.
More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Stochastic and deterministic...
According to Aristotle, "to be learning something is the greatest of pleasures not only to the philosopher but also to the rest of mankind" (Poetics 1448b). But even as he affirms the unbounded human capacity for integrating new experience with existing knowledge, he alludes to a significant exception: "The sight of certain things gives us pain, but we enjoy looking at the most exact images of them, whether the forms of animals which we greatly despise or of corpses." Our capacity for learning is happily engaged in viewing representations of painful objects, but not, it seems, in viewing the objects themselves. When an experience is intensely painful, what then is a rational animal to do? We can neither disable our learning process, nor erase its traces. In the face of intense pain, horror, or terror, learning and remembrance cause no pleasure but rather persistent psychological pain and disruption. The memorious mind reverberates with trauma.
Analogy making from examples is a central task in intelligent system behavior. Many real-world problems involve analogy making and generalization. Research investigates these questions by building computer models of human thinking concepts. These concepts can be divided into high-level approaches as used in cognitive science and low-level models as used in neural networks. Applications range over the spectrum of recognition, categorization, and analogical reasoning. A major part of legal reasoning can be formally interpreted as an analogy-making process. Because it is not the same as reasoning in mathematics or the physical sciences, it is necessary to use a method which incorporates, first, the ability to specify likelihood and, second, the opportunity to include known court decisions. We use neural networks and fuzzy systems to model the analogy-making process in legal reasoning. In the first part of the paper, a neural network is described that identifies precedents of immaterial damages. The second application presents a fuzzy system for determining the required waiting period after traffic accidents. Both examples demonstrate how to model reasoning in legal applications analogous to recent decisions: first, by training a system on court decisions, and second, by analyzing, modelling, and testing the decision making with a fuzzy system.
There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and non-monotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as non-monotonic inferences, and (b) that there is a strict correspondence between the coding of knowledge in Hopfield networks and the knowledge representation in weight-annotated Poole systems. These results show the usefulness of non-monotonic logic as a descriptive and analytic tool for analyzing emergent properties of connectionist networks. Assuming an exponential development of the weight function, the present account relates to optimality theory – a general framework that aims to integrate insights from symbolism and connectionism. The paper concludes with some speculations about extending the present ideas.
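To make "the coding of knowledge in Hopfield networks" concrete, here is a minimal sketch, with an invented network size and invented patterns (the paper's correspondence with weight-annotated Poole systems is not reproduced here): a Hebbian-trained Hopfield network stores two patterns in its weights and completes a corrupted cue to the nearest stored pattern, the kind of defeasible, appearance-driven settling that a non-monotonic reading can describe.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: knowledge is coded in the pairwise weights."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)   # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Synchronous updates until a fixed point: the network's 'inference'."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
])
W = train_hopfield(patterns)

cue = patterns[0].copy()
cue[0] = -1                      # corrupt one unit of the first pattern
completed = recall(W, cue)
print(np.array_equal(completed, patterns[0]))  # True: cue settles to the stored pattern
```

The settled state is a stable fixed point of the dynamics, which is why it can be read as the conclusion the network defeasibly "draws" from the corrupted evidence.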
The missing ingredients in efforts to develop neural networks and artificial intelligence (AI) that can emulate human intelligence have been the evolutionary processes of performing tasks at increasing orders of hierarchical complexity. Stacked neural networks based on the Model of Hierarchical Complexity could emulate evolution's actual learning processes and behavioral reinforcement. Theoretically, this should result in stability and reduce certain programming demands. The eventual success of such methods poses the question of humans' survival in the face of androids of superior intelligence and physical composition. These are future moral questions worthy of speculation.
Page's manifesto makes a case for localist representations in neural networks, one of the advantages being ease of interpretation. However, even localist networks can be hard to interpret, especially when at some hidden layer of the network distributed representations are employed, as is often the case. Hidden Markov models can be used to provide useful interpretable representations.
The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm, and their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed, with an emphasis on grammar learning.
The dynamical behaviour of a very general model of neural networks with random asymmetric synaptic weights is investigated in the presence of random thresholds. Using mean-field equations, the bifurcations of the fixed points and the changes of regime when varying control parameters are established. Different areas with various regimes are defined in the parameter space. Chaos arises generically by a quasi-periodicity route.
Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Most of the time, recurrent networks (with feedback) are fully connected. Since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance and on the slope of the transfer function, that allows sustained activity and the occurrence of chaos when a critical value is reached. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected, infinite-sized networks. The route towards chaos is numerically checked to be quasi-periodic, whatever the type of the first bifurcation. Our results suggest that such high-dimensional networks behave like low-dimensional dynamical systems.
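A rough numerical sketch of this kind of setup, with invented sizes and a tanh transfer function rather than the paper's exact model: a randomly diluted network whose weight variance is normalized so that a single gain parameter g acts as the bifurcation parameter. A largest-Lyapunov-exponent estimate is negative when activity dies out and positive once the dynamics turn chaotic.

```python
import numpy as np

def lyapunov_estimate(g, n=300, dilution=0.1, steps=500, transient=100,
                      eps=1e-8, seed=1):
    """Estimate the largest Lyapunov exponent of x(t+1) = tanh(g * W x(t))
    for a randomly diluted Gaussian weight matrix whose variance is
    normalized, so that g alone controls the transition to chaos."""
    rng = np.random.default_rng(seed)
    mask = rng.random((n, n)) < dilution           # random dilution
    W = rng.normal(0.0, 1.0, (n, n)) * mask / np.sqrt(n * dilution)
    x = rng.normal(0.0, 1.0, n)
    y = x + eps * rng.normal(0.0, 1.0, n)          # nearby trajectory
    total, count = 0.0, 0
    for t in range(steps):
        x = np.tanh(g * W @ x)
        y = np.tanh(g * W @ y)
        d = np.linalg.norm(y - x)
        if t >= transient:
            total += np.log(d / eps)
            count += 1
        y = x + (y - x) * (eps / d)   # keep the perturbation infinitesimal
    return total / count

print(lyapunov_estimate(0.5) < 0)   # below threshold: activity decays
print(lyapunov_estimate(4.0) > 0)   # above threshold: chaotic dynamics
```

This is the standard two-trajectory (Benettin-style) estimate; even with only ~30 connections per neuron, the diluted network shows the same qualitative transition as the fully connected mean-field theory.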
This paper is concerned with the modeling of neural systems regarded as information-processing entities. I investigate the various dynamic regimes that are accessible in neural networks considered as nonlinear adaptive dynamic systems. The possibilities of obtaining steady, oscillatory, or chaotic regimes are illustrated with different neural network models. Some aspects of the dependence of the dynamic regimes upon the synaptic couplings are examined. I emphasize the role that the various regimes may play in supporting information-processing abilities. I present an example where controlled transient evolutions in a neural network are used to model the regulation of motor activities by the cerebellar cortex.
We examine the relative timing of numerous brain regions involved in human decisions that are based on external criteria, learned information, personal preferences, or unconstrained internal considerations. Using magnetoencephalography (MEG) and advanced signal analysis techniques, we were able to non-invasively reconstruct oscillations of distributed neural networks in the high-gamma frequency band (60–150 Hz). The time course of the observed neural activity suggested that two-alternative forced choice tasks are processed in four overlapping stages: processing of sensory input, option evaluation, intention formation, and action execution. Visual areas are activated first, and show recurring activations throughout the entire decision process. The temporo-occipital junction and the intraparietal sulcus are active during evaluation of external values of the options, 250–500 ms after stimulus presentation. Simultaneously, personal preference is mediated by cortical midline structures. Subsequently, the posterior parietal and superior occipital cortices appear to encode intention, with different subregions being responsible for different types of choice. The cerebellum and inferior parietal cortex are recruited for internal generation of decisions and actions, when all options have the same value. Action execution was accompanied by activation peaks in the contralateral motor cortex. These results suggest that high-gamma oscillations as recorded by MEG allow a reliable reconstruction of decision processes with excellent spatiotemporal resolution.
Recent computer simulations of evolving neural networks have shown that population-level behavioral asymmetries can arise without social interactions. Although these models are quite limited at present, they support the hypothesis that social pressures can be sufficient but are not necessary for population lateralization to occur, and they provide a framework for further theoretical investigation of this issue.
The authors have discovered a systematic, intelligent and potentially automatic method to detect errors in handbooks and stop their transmission, using unrecognised relationships between materials properties. The scientific community relies on the veracity of scientific data in handbooks and databases, some of which have a long pedigree covering several decades. Although various outlier-detection procedures are employed to detect and, where appropriate, remove contaminated data, errors which had not been discovered by established methods were easily detected by our artificial neural network in tables of properties of the elements. We started using neural networks to discover unrecognised relationships between materials properties and quickly found that they were very good at finding inconsistencies in groups of data. They reveal variations from 10 to 900% in tables of property data for the elements and point out the values that are most probably correct. Compared with the statistical method adopted by Ashby and co-workers [Proc. R. Soc. Lond. Ser. A 454 (1998) p. 1301, 1323], this method locates more inconsistencies and could be embedded in database software for automatic self-checking. We anticipate that our suggestion will be a starting point to deal with this basic problem, which affects researchers in every field. The authors believe it may eventually moderate the current expectation that data-field error rates will persist at between 1 and 5%.
Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see whether a linear combination of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
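To make the linear-baseline step concrete, here is a hedged sketch on synthetic data; the features, coefficients, and targets are invented stand-ins, not the study's measurements. An ordinary-least-squares fit recovers a target that depends linearly on the features but fails on one driven by a nonlinear interaction, mirroring the arousal/valence asymmetry the study reports.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240
# Hypothetical standardized physiological features (stand-ins for heart
# rate, respiration, skin conductance, and two facial-EMG channels).
X = rng.normal(size=(n, 5))

# Synthetic targets: "arousal" is roughly linear in the features, while
# "valence" depends on them only through a nonlinear interaction term.
arousal = X @ np.array([0.8, 0.4, 0.6, 0.1, 0.2]) + 0.3 * rng.normal(size=n)
valence = np.tanh(X[:, 3] * X[:, 4]) + 0.3 * rng.normal(size=n)

def linear_r2(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept term."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

print(round(linear_r2(X, arousal), 2))  # high: a linear model suffices
print(round(linear_r2(X, valence), 2))  # near zero: nonlinearity is needed
```

A nonlinear model (such as the study's neural network) can pick up the interaction that the linear fit provably cannot, which is the motivation for the second modelling stage.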
Does connectionism spell doom for folk psychology? I examine the proposal that cognitive representational states such as beliefs can play no role if connectionist models -- interpreted as radical new cognitive theories -- take hold and replace other cognitive theories. Though I accept that connectionist theories are radical theories that shed light on cognition, I reject the conclusion that neural networks do not represent. Indeed, I argue that neural networks may actually give us a better working notion of cognitive representational states such as beliefs, and in so doing give us a better understanding of how these states might be instantiated in neural wetware.
Exploratory analysis is an area of increasing interest in the computational linguistics arena. Pragmatically speaking, exploratory analysis may be paraphrased as natural language processing by means of analyzing large corpora of text. For this analysis, the appropriate means are statistics, on the one hand, and artificial neural networks, on the other. As a challenging application area for exploratory analysis of text corpora we may certainly identify text databases, be it information retrieval or information filtering systems. With this paper we present recent findings of exploratory analysis based on both statistical and neural models applied to legal text corpora. Concerning the artificial neural networks, we rely on a model adhering to the unsupervised learning paradigm. This choice arises naturally when taking into account the specific properties of large text corpora, where one is faced with the fact that the input-output mappings required by supervised learning models cannot be provided beforehand to a satisfying extent, owing to the highly changing contents of text archives. In a nutshell, artificial neural networks stand out for their highly robust behavior regarding the parameters of model optimization. In particular, we found statistical classification techniques much more susceptible to minor parameter variations than unsupervised artificial neural networks. In this paper we describe two different lines of research in exploratory analysis. First, we use the classification methods for concept analysis. The general goal is to uncover the different meanings of one and the same natural language concept, a task that, obviously, is of specific importance during the creation of thesauri. As a convenient environment to present the results we selected the legal term of neutrality, which is a perfect representative of a concept having a number of highly divergent meanings.
Second, we describe the classification methods in the setting of document classification. The ultimate goal in such an application is to uncover semantic similarities between various text documents in order to increase the efficiency of an information retrieval system. In this sense, document classification has had a fixed position in information retrieval research from the very beginning. Nowadays, renewed massive interest in document classification may be witnessed due to the appearance of large-scale digital libraries.
Random simulation of complex dynamical systems is generally used in order to obtain information about their asymptotic behaviour (i.e., when time or the size of the system tends towards infinity). A fortunate and welcome circumstance in most of the systems studied by physicists, biologists, and economists is the existence of an invariant measure in the state space, allowing determination of the frequency with which observation of asymptotic states is possible. Regions found between contour lines of the surface density of this invariant measure are called confiners. An example of such confiners is given for a formal neural network capable of learning. Finally, an application of this methodology is proposed for studying the dependency of the network's invariant measure with regard to: 1) the mode of neurone updating (parallel or sequential), and 2) the boundary conditions of the network (searching for phase transitions).
Clahsen's theory raises problems that make it seem untenable. As an alternative, a constructivist neural network model is reported that develops a modular architecture and in which a single associative mechanism produces all inflections, displaying an emergent dissociation between regular and irregular verbs. Thus, Clahsen's rejection of associative models of inflection concerns only a subgroup of these models.
Page proposes a simple, localist, lateral inhibitory network for implementing a selection process that approximately conforms to the Luce choice rule. I describe another localist neural mechanism for selection in accordance with the Luce choice rule. The mechanism implements an independent race model. It consists of parallel, independent nerve fibers connected to a winner-take-all cluster, which records the winner of the race.
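One way to see the claimed conformity to the Luce choice rule: if each alternative races independently with an exponentially distributed finishing time whose rate equals its strength, then the winner-take-all outcome follows the Luce rule exactly (choice probability proportional to strength). The simulation below is an illustrative sketch of that mathematical fact, not the commentary's specific neural mechanism; the strength values are arbitrary.

```python
import numpy as np

def race_winner_frequencies(strengths, trials=200_000, seed=0):
    """Independent exponential race: each alternative finishes after an
    Exponential(rate = strength) time; the fastest wins the
    winner-take-all competition."""
    rng = np.random.default_rng(seed)
    times = rng.exponential(1.0 / np.asarray(strengths),
                            size=(trials, len(strengths)))
    winners = times.argmin(axis=1)
    return np.bincount(winners, minlength=len(strengths)) / trials

strengths = [3.0, 2.0, 1.0]
luce = np.asarray(strengths) / sum(strengths)    # Luce choice rule
empirical = race_winner_frequencies(strengths)
print(np.round(luce, 3))        # [0.5  0.333 0.167]
print(np.round(empirical, 3))   # closely matches the Luce probabilities
```

The match is exact in expectation because the minimum of independent exponentials is won by alternative i with probability rate_i divided by the sum of rates, which is precisely the Luce ratio.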
In order to benefit from the advantages of localist coding, neural models that feature winner-take-all representations at the top level of a network hierarchy must still solve the computational problems inherent in distributed representations at the lower levels.
The present study examined the neural substrate of two classes of quantifiers: numerical quantifiers like 'at least three', which require magnitude processing, and logical quantifiers like 'some', which can be understood using a simple form of perceptual logic. We assessed these distinct classes of quantifiers with converging observations from two sources: functional imaging data from healthy adults, and behavioral and structural data from patients with corticobasal degeneration who have acalculia. Our findings are consistent with the claim that numerical quantifier comprehension depends on a lateral parietal-dorsolateral prefrontal network, but logical quantifier comprehension depends instead on a rostral medial prefrontal-posterior cingulate network. These observations emphasize the important contribution of abstract number knowledge to the meaning of numerical quantifiers in semantic memory and the potential role of a logic-based evaluation in the service of non-numerical quantifiers.
Functional neuroimaging studies allow examination of the cerebral networks involved in human behavior. For pathological aggression, several studies have reported an involvement of frontal and temporal areas, reflecting disruption of emotional regulatory systems. Recent genetic studies also bring together reward-system dysfunction and violent behavior.
There is a distinction between locality and modularity, two terms that have often been used interchangeably in the target article and commentary. Using this distinction, we argue in favor of modularity. In addition, we argue that both PDP-type networks and box-and-arrow models have their own strengths and pitfalls.