Search results for '*Neural Networks'

1000+ found
  1. Lothar Philipps & Giovanni Sartor (1999). Introduction: From Legal Theories to Neural Networks and Fuzzy Reasoning. [REVIEW] Artificial Intelligence and Law 7 (2-3):115-128.
    Computational approaches to the law have frequently been characterized as being formalistic implementations of the syllogistic model of legal cognition: using insufficient or contradictory data, making analogies, learning through examples and experiences, applying vague and imprecise standards. We argue that, on the contrary, studies on neural networks and fuzzy reasoning show how AI & law research can go beyond syllogism, and, in doing that, can provide substantial contributions to the law.
    3 citations
  2. Dan Hunter (1999). Out of Their Minds: Legal Theory in Neural Networks. [REVIEW] Artificial Intelligence and Law 7 (2-3):129-151.
    This paper examines the use of connectionism (neural networks) in modelling legal reasoning. I discuss how the implementations of neural networks have failed to account for legal theoretical perspectives on adjudication. I criticise the use of neural networks in law, not because connectionism is inherently unsuitable in law, but rather because it has been done so poorly to date. The paper reviews a number of legal theories which provide a grounding for the use of neural networks (...)
    1 citation
  3. Ulrich J. Frey & Hannes Rusch (2013). Using Artificial Neural Networks for the Analysis of Social-Ecological Systems. Ecology and Society 18 (2).
    The literature on common pool resource (CPR) governance lists numerous factors that influence whether a given CPR system achieves ecological long-term sustainability. Up to now there is no comprehensive model to integrate these factors or to explain success within or across cases and sectors. Difficulties include the absence of large-N-studies (Poteete 2008), the incomparability of single case studies, and the interdependence of factors (Agrawal and Chhatre 2006). We propose (1) a synthesis of 24 success factors based on the current SES (...)
  4. Anna Pérez-Méndez, Elizabeth Torres-Rivas, Francklin Rivas-Echeverría & Ronald Maldonado-Rodríguez (2005). A Methodological Approach for Pattern Recognition System Using Discriminant Analysis and Artificial Neural Networks. Cognitive Science 13 (14):15.
    This work presents a methodology for developing a pattern recognition system using classification methods such as discriminant analysis and artificial neural networks. The methodology incorporates statistical analysis in order to retain the observations and the important characteristics that can produce an appropriate classification, and also to detect outlier observations and multicollinearity between variables, among other things. Chlorophyll a fluorescence OJIP signals measured from Pisum sativum leaves belonging to different drought stress (...)
  5. Daisuke Okamoto (2009). Social Relationship of a Firm and the Csp–Cfp Relationship in Japan: Using Artificial Neural Networks. [REVIEW] Journal of Business Ethics 87 (1):117-132.
    As a criterion of a good firm, a lucrative and growing business has been said to be important. Recently, however, high profitability and high growth potential are insufficient for the criteria, because social influences exerted by recent firms have been extremely significant. In this paper, high social relationship is added to the list of the criteria. Empirical corporate social performance versus corporate financial performance (CSP–CFP) relationship studies that consider social relationship are very limited in Japan, and there are no definite (...)
  6. Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi (2016). Learning Orthographic Structure With Sequential Generative Neural Networks. Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual (...)
  7. Thomas R. Shultz & Alan C. Bale (2006). Neural Networks Discover a Near-Identity Relation to Distinguish Simple Syntactic Forms. Minds and Machines 16 (2):107-139.
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that (...)
  8. Qihui Duan, Ju H. Park & Zheng-Guang Wu (2014). Exponential State Estimator Design for Discrete-Time Neural Networks with Discrete and Distributed Time-Varying Delays. Complexity 20 (1):38-48.
  9. Paul Thagard & Terrence C. Stewart (2011). The AHA! Experience: Creativity Through Emergent Binding in Neural Networks. Cognitive Science 35 (1):1-33.
    Many kinds of creativity result from combination of mental representations. This paper provides a computational account of how creative thinking can arise from combining neural patterns into ones that are potentially novel and useful. We defend the hypothesis that such combinations arise from mechanisms that bind together neural activity by a process of convolution, a mathematical operation that interweaves structures. We describe computer simulations that show the feasibility of using convolution to produce emergent patterns of neural activity that can support (...)
    14 citations
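Thagard and Stewart's account combines neural patterns by convolution. As a rough, self-contained illustration of that operation (a plain-Python sketch in the style of Plate's holographic reduced representations, not the authors' actual simulation code; the vector size and random seed are arbitrary choices), circular convolution binds two patterns into one, and circular correlation approximately recovers a bound pattern:

```python
import math
import random

def circ_conv(a, b):
    # circular convolution: c[k] = sum_j a[j] * b[(k - j) mod n]
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

def circ_corr(a, c):
    # circular correlation, the approximate inverse used for unbinding
    n = len(a)
    return [sum(a[j] * c[(k + j) % n] for j in range(n)) for k in range(n)]

def rand_vec(n, rng):
    # random pattern with expected unit norm (variance 1/n per element)
    return [rng.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

rng = random.Random(0)
n = 256
role, filler, other = (rand_vec(n, rng) for _ in range(3))

bound = circ_conv(role, filler)   # a single pattern encodes the role-filler pair
decoded = circ_corr(role, bound)  # unbinding yields a noisy copy of the filler
```

The decoded vector is only an approximation of the filler, but its cosine similarity to the original filler is high while its similarity to an unrelated pattern stays near zero, which is what makes such convolution bindings usable as combined representations.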
  10. Gualtiero Piccinini (2008). Some Neural Networks Compute, Others Don't. Neural Networks 21 (2-3):311-321.
    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may (...)
    9 citations
  11. Michael A. Arbib (ed.) (2002). The Handbook of Brain Theory and Neural Networks, Second Edition. MIT Press.
    A new, dramatically updated edition of the classic resource on the constantly evolving fields of brain theory and neural networks.
    4 citations
  12. Daniel A. Pollen (2003). Explicit Neural Representations, Recursive Neural Networks and Conscious Visual Perception. Cerebral Cortex 13 (8):807-814.
  13. Robert T. Pennock (2000). Can Darwinian Mechanisms Make Novel Discoveries?: Learning From Discoveries Made by Evolving Neural Networks. [REVIEW] Foundations of Science 5 (2):225-238.
    Some philosophers suggest that the development of scientific knowledge is a kind of Darwinian process. The process of discovery, however, is one problematic element of this analogy. I compare Herbert Simon's attempt to simulate scientific discovery in a computer program to recent connectionist models that were not designed for that purpose, but which provide useful cases to help evaluate this aspect of the analogy. In contrast to the classic A.I. approach Simon used, ``neural networks'' contain no explicit protocols, but are generic learning systems built on the model of (...)
    1 citation
  14. Christina Stoica‐Klüver & Jürgen Klüver (2007). Interacting Neural Networks and the Emergence of Social Structure. Complexity 12 (3):41-52.
  15. R. Rakkiyappan, A. Chandrasekar, S. Laksmanan & Ju H. Park (2013). State Estimation of Memristor‐Based Recurrent Neural Networks with Time‐Varying Delays Based on Passivity Theory. Complexity 19 (4):32-43.
  16. C. Monterola, R. M. Roxas & S. Carreon‐Monterola (2009). Characterizing the Effect of Seating Arrangement on Classroom Learning Using Neural Networks. Complexity 14 (4):26-33.
  17. Pete Mandik (2003). Varieties of Representation in Evolved and Embodied Neural Networks. Biology and Philosophy 18 (1):95-130.
    In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call ``the economy problem'': the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and (...)
  18. Gary W. Strong & Bruce A. Whitehead (1989). A Solution to the Tag-Assignment Problem for Neural Networks. Behavioral and Brain Sciences 12 (3):381.
    162 citations
  19. Michael A. Arbib (ed.) (1995). Handbook of Brain Theory and Neural Networks. MIT Press.
  20. Hannes Leitgeb (2005). Interpreted Dynamical Systems and Qualitative Laws: From Neural Networks to Evolutionary Systems. Synthese 146 (1-2):189-202.
    Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary (...)
    2 citations
  21. Paul M. Churchland (1997). To Transform the Phenomena: Feyerabend, Proliferation, and Recurrent Neural Networks. Philosophy of Science 64 (4):420.
    Paul Feyerabend recommended the methodological policy of proliferating competing theories as a means to uncovering new empirical data, and thus as a means to increase the empirical constraints that all theories must confront. Feyerabend's policy is here defended as a clear consequence of connectionist models of explanatory understanding and learning. An earlier connectionist "vindication" is criticized, and a more realistic and penetrating account is offered in terms of the computationally plastic cognitive profile displayed by neural networks with a recurrent (...)
    3 citations
  22. Marko Puljic & Robert Kozma (2005). Activation Clustering in Neural and Social Networks. Complexity 10 (4):42-50.
  23. Feraz Azhar (2016). Polytopes as Vehicles of Informational Content in Feedforward Neural Networks. Philosophical Psychology 29 (5):697-716.
    Localizing content in neural networks provides a bridge to understanding the way in which the brain stores and processes information. In this paper, I propose the existence of polytopes in the state space of the hidden layer of feedforward neural networks as vehicles of content. I analyze these geometrical structures from an information-theoretic point of view, invoking mutual information to help define the content stored within them. I establish how this proposal addresses the problem of misclassification and provide (...)
  25. John G. Taylor (1997). Neural Networks for Consciousness. Neural Networks 10:1207-27.
  26. Jürgen Hollatz (1999). Analogy Making in Legal Reasoning with Neural Networks and Fuzzy Logic. Artificial Intelligence and Law 7 (2-3):289-301.
    Analogy making from examples is a central task in intelligent system behavior. A lot of real world problems involve analogy making and generalization. Research investigates these questions by building computer models of human thinking concepts. These concepts can be divided into high level approaches as used in cognitive science and low level models as used in neural networks. Applications range over the spectrum of recognition, categorization and analogy reasoning. A major part of legal reasoning could be formally interpreted as (...)
    1 citation
  27. Reinhard Blutner (2004). Nonmonotonic Inferences and Neural Networks. Synthese 142 (2):143-174.
    There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and non-monotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as non-monotonic (...)
    1 citation
  28. David Barber (2002). Bayesian Methods for Supervised Neural Networks. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press.
  29. Sbg Park (1998). Neural Networks and Psychopharmacology. In Dan J. Stein & J. Ludick (eds.), Neural Networks and Psychopathology. Cambridge University Press 57.
  30. Yann LeCun & Yoshua Bengio (1995). Pattern Recognition and Neural Networks. In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press 22.
  31. Helge Malmgren, Artificial Neural Networks in Medicine and Biology.
    Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use also leads (...)
  32. Michael Lamport Commons (2008). Stacked Neural Networks Must Emulate Evolution's Hierarchical Complexity. World Futures 64 (5-7):444-451.
    The missing ingredients in efforts to develop neural networks and artificial intelligence (AI) that can emulate human intelligence have been the evolutionary processes of performing tasks at increased orders of hierarchical complexity. Stacked neural networks based on the Model of Hierarchical Complexity could emulate evolution's actual learning processes and behavioral reinforcement. Theoretically, this should result in stability and reduce certain programming demands. The eventual success of such methods begs questions of humans' survival in the face of androids of (...)
  33. Ricard V. Solé & Jordi Delgado (1996). Universal Computation in Fluid Neural Networks. Complexity 2 (2):49-56.
    Fluid neural networks can be used as a theoretical framework for a wide range of complex systems, such as social insects. In this article we show that collective logical gates can be built in such a way that complex computation becomes possible by means of the interplay between local interactions and the collective creation of a global field. This is exemplified by a NOR gate. Some general implications for ant societies are outlined.
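The paper's NOR gate arises from collective dynamics in a fluid network; as a minimal fixed-architecture point of comparison (the weights and bias below are illustrative choices, not taken from the paper), a single threshold unit already computes NOR:

```python
def threshold_unit(inputs, weights, bias):
    # McCulloch-Pitts unit: fires (1) iff the weighted input plus bias is positive
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def nor_gate(x1, x2):
    # inhibitory weights and a positive bias: active only when both inputs are silent
    return threshold_unit([x1, x2], [-1.0, -1.0], 0.5)
```

Because NOR is functionally complete, a system that can reliably realize a NOR gate can in principle realize any Boolean circuit, which is why exhibiting a single collective NOR supports a claim of universal computation.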
  34. B. Doyon, B. Cessac, M. Quoy & M. Samuelides (1994). On Bifurcations and Chaos in Random Neural Networks. Acta Biotheoretica 42 (2-3):215-225.
    Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Most of the time, recurrent networks (with feedbacks) are fully connected. Since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance (...)
    1 citation
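The abstract's bifurcation parameter, the normalized variance of the synaptic weights, can be illustrated with a toy discrete-time, continuous-state network (the update rule, network size, gain values, and seed below are illustrative assumptions, not the authors' exact diluted model):

```python
import math
import random

def make_weights(n, g, rng):
    # random synaptic weights with mean 0 and variance g^2 / n;
    # g plays the role of the bifurcation parameter
    return [[rng.gauss(0.0, g / math.sqrt(n)) for _ in range(n)] for _ in range(n)]

def step(x, w):
    # discrete-time, continuous-state update: x_i <- tanh(sum_j w_ij * x_j)
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x))) for row in w]

def final_amplitude(g, n=64, steps=200, seed=1):
    rng = random.Random(seed)
    w = make_weights(n, g, rng)
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        x = step(x, w)
    return max(abs(v) for v in x)  # distance from the trivial fixed point x = 0
```

For small g the trivial fixed point is stable and activity dies out; past a critical value of g the fixed point destabilizes and sustained, eventually chaotic, activity appears.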
  35. Adam Barrett & Harald Atmanspacher, Stability Criteria for the Contextual Emergence of Macrostates in Neural Networks.
    More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Stochastic and deterministic..
  36. Jiming Liu & Oussama Khatib (2002). Practical Connection Between Potential Fields and Neural Networks. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press.
  37. François Chapeau-Blondeau (1995). Information Processing in Neural Networks by Means of Controlled Dynamic Regimes. Acta Biotheoretica 43 (1-2):155-167.
    This paper is concerned with the modeling of neural systems regarded as information processing entities. I investigate the various dynamic regimes that are accessible in neural networks considered as nonlinear adaptive dynamic systems. The possibilities of obtaining steady, oscillatory or chaotic regimes are illustrated with different neural network models. Some aspects of the dependence of the dynamic regimes upon the synaptic couplings are examined. I emphasize the role that the various regimes may play to support information processing abilities. I (...)
  38. B. Doyon, B. Cessac, M. Quoy & M. Samuelides (1995). Mean-Field Equations, Bifurcation Map and Chaos in Discrete Time, Continuous State, Random Neural Networks. Acta Biotheoretica 43 (1-2):169-175.
    The dynamical behaviour of a very general model of neural networks with random asymmetric synaptic weights is investigated in the presence of random thresholds. Using mean-field equations, the bifurcations of the fixed points and the change of regime when varying control parameters are established. Different areas with various regimes are defined in the parameter space. Chaos arises generically by a quasi-periodicity route.
  39. Edmund T. Rolls (1997). Consciousness in Neural Networks? Neural Networks 10:1227-1303.
  40. Enrico Blanzieri (1997). Dynamical Learning Algorithms for Neural Networks and Neural Constructivism. Behavioral and Brain Sciences 20 (4):559-559.
    The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of the dynamical learning algorithm for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm. Their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed with an emphasis on grammar learning.
  41. James A. Reggia & Alexander Grushin (2005). Population Lateralization Arises in Simulated Evolution of Non-Interacting Neural Networks. Behavioral and Brain Sciences 28 (4):609-611.
    Recent computer simulations of evolving neural networks have shown that population-level behavioral asymmetries can arise without social interactions. Although these models are quite limited at present, they support the hypothesis that social pressures can be sufficient but are not necessary for population lateralization to occur, and they provide a framework for further theoretical investigation of this issue.
  42. Ashraf A. Kassim & Bvkv Kumar (1995). Potential Fields and Neural Networks. In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press
     
  43. David Jc Mackay (1995). Bayesian Methods for Supervised Neural Networks. In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press
     
  44. Ingmar Visser (2000). Hidden Markov Model Interpretations of Neural Networks. Behavioral and Brain Sciences 23 (4):494-495.
    Page's manifesto makes a case for localist representations in neural networks, one of the advantages being ease of interpretation. However, even localist networks can be hard to interpret, especially when at some hidden layer of the network distributed representations are employed, as is often the case. Hidden Markov models can be used to provide useful interpretable representations.
  45. Dan J. Stein & Jacques Ludik (1998). Neural Networks and Psychopathology: An Introduction. In Dan J. Stein & J. Ludick (eds.), Neural Networks and Psychopathology. Cambridge University Press.
     
  46. Frederic Aviolat, Daniel Cattani & Thierry Cornu (1996). Recognition of Meteorological Situations with Neural Networks. Esda 1996: Expert Systems and Ai; Neural Networks 7:41.
     
  47. Dan Hammerstrom (2002). Digital VLSI Neural Networks. In The Handbook of Brain Theory and Neural Networks.
     
  48. V. Kurková (2002). Neural Networks as Universal Approximators. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press 1180--1183.
     
  49. J. Liu & O. Khatib (2002). Potential Fields and Neural Networks. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press
     
  50. Dan Lloyd (1998). The Fables of Lucy R.: Association and Dissociation in Neural Networks. In Dan J. Stein & J. Ludick (eds.), Neural Networks and Psychopathology. Cambridge University Press 248-273.
    According to Aristotle, "to be learning something is the greatest of pleasures not only to the philosopher but also to the rest of mankind," (Poetics 1448b). But even as he affirms the unbounded human capacity for integrating new experience with existing knowledge, he alludes to a significant exception: "The sight of certain things gives us pain, but we enjoy looking at the most exact images of them, whether the forms of animals which we greatly despise or of corpses." Our capacity (...)
1 — 50 / 1000