Computational approaches to the law have frequently been characterized as formalistic implementations of the syllogistic model of legal cognition, ill-suited to what legal reasoning actually involves: using insufficient or contradictory data, making analogies, learning through examples and experience, and applying vague and imprecise standards. We argue that, on the contrary, studies on neural networks and fuzzy reasoning show how AI & law research can go beyond syllogism and, in doing so, can provide substantial contributions to the law.
Profitability and growth have traditionally been regarded as the criteria of a good firm. Recently, however, high profitability and high growth potential have become insufficient criteria, because the social influence exerted by firms has grown extremely significant. In this paper, a strong social relationship is added to the list of criteria. Empirical studies of the corporate social performance versus corporate financial performance (CSP–CFP) relationship that consider social relationship are very limited in Japan, and worldwide these studies have reached no definite conclusions, because of scant data and inappropriate methods, especially as support for the linear hypothesis on which the studies are based. In this paper, the CSP–CFP relationship is analyzed by an artificial neural network model, which can deal with a non-linear relationship, using 10-year follow-up survey data.
This paper examines the use of connectionism (neural networks) in modelling legal reasoning. I discuss how implementations of neural networks have failed to account for legal theoretical perspectives on adjudication. I criticise the use of neural networks in law, not because connectionism is inherently unsuitable to law, but because it has been done so poorly to date. The paper reviews a number of legal theories which provide a grounding for the use of neural networks in law. It then examines some implementations undertaken in law and criticises their legal theoretical naïveté. Finally, it presents lessons from these implementations which researchers must bear in mind if they wish to build neural networks that are justified by legal theories.
The literature on common pool resource (CPR) governance lists numerous factors that influence whether a given CPR system achieves ecological long-term sustainability. Up to now there is no comprehensive model to integrate these factors or to explain success within or across cases and sectors. Difficulties include the absence of large-N studies (Poteete 2008), the incomparability of single case studies, and the interdependence of factors (Agrawal and Chhatre 2006). We propose (1) a synthesis of 24 success factors based on the current SES framework and a literature review; (2) the application of neural networks to a database of CPR management case studies in an attempt to test the viability of this synthesis. This method allows us to obtain an implicit, quantitative and rather precise model of the interdependencies in CPR systems. Given such a model, every success factor in each case can be manipulated separately, yielding different predictions for success. This could become a fast and inexpensive way to analyze, predict and optimize performance for communities worldwide facing CPR challenges. Existing theoretical frameworks could be improved as well.
This work presents a methodology for the development of a pattern recognition system using classification methods such as discriminant analysis and artificial neural networks. The methodology incorporates statistical analysis with the purpose of retaining the observations and the important characteristics that can produce an appropriate classification, and also allows the detection of outlier observations and of multicollinearity between variables, among other things. Chlorophyll a fluorescence OJIP signals measured from Pisum sativum leaves belonging to different drought stress resistance groups are correctly classified using the methodology proposed here.
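The discriminant-analysis step this abstract mentions can be sketched with Fisher's linear discriminant on synthetic data; the two groups and four features below are illustrative stand-ins for the fluorescence-signal groups, not the paper's actual data or pipeline.

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): Fisher's linear
# discriminant separating two synthetic groups, standing in for the
# drought-stress resistance groups in the abstract.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=(50, 4))
group_b = rng.normal(loc=2.0, scale=1.0, size=(50, 4))

mu_a, mu_b = group_a.mean(axis=0), group_b.mean(axis=0)
# Pooled within-class scatter matrix.
s_w = np.cov(group_a, rowvar=False) + np.cov(group_b, rowvar=False)
# Fisher direction: S_w^{-1} (mu_a - mu_b), with the midpoint as threshold.
w = np.linalg.solve(s_w, mu_a - mu_b)
threshold = w @ (mu_a + mu_b) / 2.0

def classify(x):
    """Assign to group A if the projection falls on A's side of the midpoint."""
    return "A" if (w @ x - threshold) * (w @ mu_a - threshold) > 0 else "B"

preds_a = [classify(x) for x in group_a]
preds_b = [classify(x) for x in group_b]
accuracy = (preds_a.count("A") + preds_b.count("B")) / 100.0
```

With well-separated groups such as these, the projection onto the single Fisher direction is enough to classify nearly all observations.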
Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that they use this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar. Recent simulations that were claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382] confounded syntactic types with speech sounds and did not perform standard statistical tests of results.
Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a training corpus of words. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models. We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain.
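The non-connectionist probabilistic baseline the abstract compares against can be sketched as a letter-bigram generative model: estimate next-letter probabilities from a corpus, then sample pseudowords letter by letter. The tiny corpus below is illustrative, not the paper's training set.

```python
import random
from collections import defaultdict

# Sketch of a non-connectionist baseline: a letter-bigram generative
# model of short words, with ^ and $ marking word start and end.
corpus = ["cat", "can", "cap", "man", "map", "mat", "tan", "tap"]

counts = defaultdict(lambda: defaultdict(int))
for word in corpus:
    padded = "^" + word + "$"
    for a, b in zip(padded, padded[1:]):
        counts[a][b] += 1

def next_letter_distribution(context):
    """P(next letter | previous letter), estimated from bigram counts."""
    total = sum(counts[context].values())
    return {b: c / total for b, c in counts[context].items()}

def generate(rng):
    """Sample a pseudoword letter by letter until the end marker is drawn."""
    out, ctx = [], "^"
    while True:
        dist = next_letter_distribution(ctx)
        letters, probs = zip(*dist.items())
        ctx = rng.choices(letters, probs)[0]
        if ctx == "$":
            return "".join(out)
        out.append(ctx)

rng = random.Random(0)
pseudowords = [generate(rng) for _ in range(5)]
```

Every generated string respects the bigram statistics of the corpus, which is exactly the sense in which such a model captures "graphotactics"; the sequential RBM in the paper learns richer, higher-order structure than this first-order sketch.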
Many kinds of creativity result from combination of mental representations. This paper provides a computational account of how creative thinking can arise from combining neural patterns into ones that are potentially novel and useful. We defend the hypothesis that such combinations arise from mechanisms that bind together neural activity by a process of convolution, a mathematical operation that interweaves structures. We describe computer simulations that show the feasibility of using convolution to produce emergent patterns of neural activity that can support cognitive and emotional processes underlying human creativity.
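Convolution-based combination of patterns can be sketched with circular convolution computed via the FFT, as in holographic reduced representations; the random vectors here are stand-ins for neural activity patterns, not the paper's simulations.

```python
import numpy as np

# Sketch of convolution-based combination: circular convolution binds two
# random pattern vectors into a composite; circular correlation with one
# pattern approximately recovers the other. Vectors are illustrative.
rng = np.random.default_rng(1)
n = 512
a = rng.normal(0, 1 / np.sqrt(n), n)
b = rng.normal(0, 1 / np.sqrt(n), n)

def cconv(x, y):
    """Circular convolution via the FFT convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def ccorr(x, y):
    """Circular correlation: an approximate inverse of cconv."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))

bound = cconv(a, b)      # combined ("interwoven") pattern
b_hat = ccorr(a, bound)  # noisy reconstruction of b from the composite

similarity = b_hat @ b / (np.linalg.norm(b_hat) * np.linalg.norm(b))
```

The composite resembles neither input, yet each input can be approximately recovered given the other, which is what makes convolution attractive as a binding operation.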
I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute: they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
Some philosophers suggest that the development of scientific knowledge is a kind of Darwinian process. The process of discovery, however, is one problematic element of this analogy. I compare Herbert Simon's attempt to simulate scientific discovery in a computer program to recent connectionist models that were not designed for that purpose, but which provide useful cases to help evaluate this aspect of the analogy. In contrast to the classic A.I. approach Simon used, "neural networks" contain no explicit protocols, but are generic learning systems built on the model of the interconnections of neurons in the brain. I describe two cases that take the connectionist approach a step further by using genetic algorithms, a form of evolutionary computation that explicitly models Darwinian mechanisms. These cases show that Darwinian mechanisms can make novel discoveries of complex, previously unknown patterns. With some caveats, they lend support to evolutionary epistemology.
In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call "the economy problem": the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and an organism. My approach in the current paper is to explore this and other key themes in neurosemantics through the use of computer models of neural networks embodied and evolved in virtual organisms. The models allow for the laying bare of the causal economies of entire yet simple artificial organisms, so that the relations between the neural bases of, for instance, representation in perception and memory can be regarded in the context of an entire organism. On the basis of these simulations, I argue for an account of neurosemantics adequate for the solution of the economy problem.
Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary systems are instances of such interpreted dynamical systems, and thus our results entail that each of them may be described correctly and, in a sense, even completely by qualitative laws that obey the rules of a nonmonotonic logic system.
There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and non-monotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as non-monotonic inferences, and (b) that there is a strict correspondence between the coding of knowledge in Hopfield networks and the knowledge representation in weight-annotated Poole systems. These results show the usefulness of non-monotonic logic as a descriptive and analytic tool for analyzing emerging properties of connectionist networks. Assuming an exponential development of the weight function, the present account relates to optimality theory – a general framework that aims to integrate insights from symbolism and connectionism. The paper concludes with some speculations about extending the present ideas.
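The Hopfield-network coding the abstract refers to can be sketched in a few lines: patterns stored by the Hebb rule are retrieved from corrupted cues through asynchronous updates that never increase the network energy. The patterns and sizes below are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal Hopfield sketch: store random patterns via the Hebb rule,
# then recover one of them from a partially corrupted cue.
rng = np.random.default_rng(2)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))
weights = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(weights, 0)  # no self-connections

def energy(state):
    """Hopfield energy; asynchronous updates never increase it."""
    return -0.5 * state @ weights @ state

def recall(cue, sweeps=5):
    """Asynchronous threshold updates until (approximately) a fixed point."""
    state = cue.copy()
    for _ in range(sweeps):
        for i in range(n):
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

cue = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)  # corrupt 10 of 64 bits
cue[flip] *= -1

restored = recall(cue)
overlap = restored @ patterns[0] / n
```

Settling into the nearest stored pattern despite a contradictory cue is the behavior that, on the paper's account, can be redescribed as a non-monotonic inference at the symbolic level.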
Paul Feyerabend recommended the methodological policy of proliferating competing theories as a means to uncovering new empirical data, and thus as a means to increase the empirical constraints that all theories must confront. Feyerabend's policy is here defended as a clear consequence of connectionist models of explanatory understanding and learning. An earlier connectionist "vindication" is criticized, and a more realistic and penetrating account is offered in terms of the computationally plastic cognitive profile displayed by neural networks with a recurrent architecture.
Analogy making from examples is a central task in intelligent system behavior. Many real-world problems involve analogy making and generalization. Research investigates these questions by building computer models of human thinking concepts. These concepts can be divided into high-level approaches, as used in cognitive science, and low-level models, as used in neural networks. Applications range over the spectrum of recognition, categorization and analogical reasoning. A major part of legal reasoning can be formally interpreted as an analogy-making process. Because it is not the same as reasoning in mathematics or the physical sciences, it is necessary to use a method which incorporates, first, the ability to specify likelihood and, second, the opportunity to include known court decisions. We use neural networks and fuzzy systems to model the analogy-making process in legal reasoning. In the first part of the paper a neural network is described that identifies precedents of immaterial damages. The second application presents a fuzzy system for determining the required waiting period after traffic accidents. Both examples demonstrate how to model reasoning in legal applications analogously to recent decisions: first, by training a system with court decisions, and second, by analyzing, modelling and testing the decision making with a fuzzy system.
Human skin detection is an essential phase in face detection and face recognition when using color images. Skin detection is very challenging because of differences in illumination, differences in photos taken using an assortment of cameras with their own characteristics, the range of skin colors due to different ethnicities, and other variations. Numerous methods have been used for human skin color detection, including the Gaussian model, rule-based methods, and artificial neural networks. In this article, we introduce a novel technique of using the neural network to enhance the capabilities of skin detection. Several different entities were used as inputs of a neural network, and the pros and cons of different color spaces are discussed. A vector containing information from three different color spaces was also used as the input to the neural network. The comparison of the proposed technique with existing methods in this domain illustrates the effectiveness and accuracy of the proposed approach. Tests were done on two databases, and the results show that the neural network has better precision and accuracy rate, as well as comparable recall and specificity, compared with other methods.
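A multi-color-space input vector of the kind described can be sketched by expanding one RGB pixel into RGB, HSV, and YCbCr components; the particular feature layout, the BT.601 conversion, and the sample pixel are assumptions for illustration, not the paper's specification.

```python
import colorsys

def ycbcr(r, g, b):
    """ITU-R BT.601 RGB -> YCbCr (inputs and outputs in [0, 255])."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def pixel_features(r, g, b):
    """Concatenate three color-space representations of one pixel
    into a single 9-dimensional network input vector."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return [r, g, b, h, s, v, *ycbcr(r, g, b)]

features = pixel_features(224, 172, 140)  # an example skin-tone-like pixel
```

Feeding all three representations at once lets a classifier exploit whichever space best separates skin from non-skin under a given illumination, which is the motivation the abstract gives for the combined vector.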
The missing ingredients in efforts to develop neural networks and artificial intelligence (AI) that can emulate human intelligence have been the evolutionary processes of performing tasks at increasing orders of hierarchical complexity. Stacked neural networks based on the Model of Hierarchical Complexity could emulate evolution's actual learning processes and behavioral reinforcement. Theoretically, this should result in stability and reduce certain programming demands. The eventual success of such methods raises questions about humans' survival in the face of androids of superior intelligence and physical composition, and thus future moral questions worthy of speculation.
Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use also leads us into the problems of induction and probability. Ever since David Hume expressed his famous doubts about induction, the principles of scientific inference have been a central concern for philosophers.
Localizing content in neural networks provides a bridge to understanding the way in which the brain stores and processes information. In this paper, I propose the existence of polytopes in the state space of the hidden layer of feedforward neural networks as vehicles of content. I analyze these geometrical structures from an information-theoretic point of view, invoking mutual information to help define the content stored within them. I establish how this proposal addresses the problem of misclassification and provide a novel solution to the disjunction problem, which hinges on the precise nature of the causal-informational framework for content advocated herein.
More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Stochastic and deterministic...
Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Most of the time, recurrent networks (with feedback) are fully connected. Since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance and on the slope of the transfer function, that allows sustained activity and the occurrence of chaos when it reaches a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected networks of infinite size. The route towards chaos is numerically checked to be a quasi-periodic one, whatever the type of the first bifurcation. Our results suggest that such high-dimensional networks behave like low-dimensional dynamical systems.
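The kind of transition the abstract describes can be sketched with the classic fully connected random rate network (a simplification, not the paper's diluted model): with weights of variance g²/N, activity dies out below a critical gain g and is sustained above it.

```python
import numpy as np

# Sketch of the gain-controlled transition in a random recurrent network:
# dx/dt = -x + J tanh(x), with J_ij ~ N(0, g^2 / N). For small g the
# origin is stable; past the critical gain, activity is sustained.
rng = np.random.default_rng(3)
n = 200

def mean_activity(g, t_max=50.0, dt=0.1):
    """Euler-integrate the network and return the final mean |x_i|."""
    J = rng.normal(0, g / np.sqrt(n), size=(n, n))
    x = rng.normal(0, 1, n)
    for _ in range(int(t_max / dt)):
        x += dt * (-x + J @ np.tanh(x))
    return np.mean(np.abs(x))

quiet = mean_activity(0.5)   # below the critical gain: activity decays
active = mean_activity(2.0)  # above the critical gain: sustained activity
```

The gain g here plays the role of the abstract's bifurcation parameter combining weight variance and transfer-function slope; the dilution and the quasi-periodic route to chaos studied in the paper are not reproduced in this minimal sketch.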
This paper is concerned with the modeling of neural systems regarded as information-processing entities. I investigate the various dynamic regimes that are accessible in neural networks considered as nonlinear adaptive dynamic systems. The possibilities of obtaining steady, oscillatory or chaotic regimes are illustrated with different neural network models. Some aspects of the dependence of the dynamic regimes upon the synaptic couplings are examined. I emphasize the role that the various regimes may play in supporting information processing abilities. I present an example where controlled transient evolutions in a neural network are used to model the regulation of motor activities by the cerebellar cortex.
The dynamical behaviour of a very general model of neural networks with random asymmetric synaptic weights is investigated in the presence of random thresholds. Using mean-field equations, the bifurcations of the fixed points and the change of regime when varying control parameters are established. Different areas with various regimes are defined in the parameter space. Chaos arises generically by a quasi-periodicity route.
The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm, and their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed, with an emphasis on grammar learning.
Page's manifesto makes a case for localist representations in neural networks, one of the advantages being ease of interpretation. However, even localist networks can be hard to interpret, especially when distributed representations are employed at some hidden layer of the network, as is often the case. Hidden Markov models can be used to provide useful interpretable representations.
Recent computer simulations of evolving neural networks have shown that population-level behavioral asymmetries can arise without social interactions. Although these models are quite limited at present, they support the hypothesis that social pressures can be sufficient but are not necessary for population lateralization to occur, and they provide a framework for further theoretical investigation of this issue.