Search results for '*Neural Networks'

1000+ found
  1. Lothar Philipps & Giovanni Sartor (1999). Introduction: From Legal Theories to Neural Networks and Fuzzy Reasoning. [REVIEW] Artificial Intelligence and Law 7 (2-3):115-128. (score: 90.0)
    Computational approaches to the law have frequently been characterized as formalistic implementations of the syllogistic model of legal cognition, incapable of using insufficient or contradictory data, making analogies, learning through examples and experience, or applying vague and imprecise standards. We argue that, on the contrary, studies on neural networks and fuzzy reasoning show how AI & law research can go beyond syllogism and, in doing so, provide substantial contributions to the law.
  2. Daisuke Okamoto (2009). Social Relationship of a Firm and the CSP–CFP Relationship in Japan: Using Artificial Neural Networks. [REVIEW] Journal of Business Ethics 87 (1):117-132. (score: 90.0)
    A lucrative, growing business has traditionally been regarded as the mark of a good firm. Recently, however, high profitability and high growth potential have become insufficient criteria, because the social influence exerted by firms has grown extremely significant. In this paper, a strong social relationship is added to the list of criteria. Empirical corporate social performance versus corporate financial performance (CSP–CFP) relationship studies that consider social relationship are very limited in Japan, and there are no definite (...)
  3. Ulrich J. Frey & Hannes Rusch (2013). Using Artificial Neural Networks for the Analysis of Social-Ecological Systems. Ecology and Society 18 (2). (score: 90.0)
    The literature on common pool resource (CPR) governance lists numerous factors that influence whether a given CPR system achieves long-term ecological sustainability. To date there is no comprehensive model that integrates these factors or explains success within or across cases and sectors. Difficulties include the absence of large-N studies (Poteete 2008), the incomparability of single case studies, and the interdependence of factors (Agrawal and Chhatre 2006). We propose (1) a synthesis of 24 success factors based on the current SES (...)
  4. Dan Hunter (1999). Out of Their Minds: Legal Theory in Neural Networks. [REVIEW] Artificial Intelligence and Law 7 (2-3):129-151. (score: 90.0)
    This paper examines the use of connectionism (neural networks) in modelling legal reasoning. I discuss how the implementations of neural networks have failed to account for legal theoretical perspectives on adjudication. I criticise the use of neural networks in law, not because connectionism is inherently unsuitable in law, but rather because it has been done so poorly to date. The paper reviews a number of legal theories which provide a grounding for the use of neural networks (...)
  5. James L. McClelland (2013). Integrating Probabilistic Models of Perception and Interactive Neural Networks: A Historical and Tutorial Review. Frontiers in Psychology 4. (score: 90.0)
    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role (...)
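    The probabilistic computation this abstract alludes to can be made concrete with a toy example. The sketch below (all words and probability values are invented for illustration; this is not McClelland's interactive activation model) shows the Bayesian combination of a contextual prior over words with bottom-up letter likelihoods that such networks are argued to approximate:

```python
# Toy illustration (not McClelland's model): Bayesian combination of a
# contextual prior over words with bottom-up letter likelihoods, the kind of
# computation the article relates to interactive activation dynamics.
import numpy as np

words = ["cat", "car", "can"]
prior = np.array([0.5, 0.3, 0.2])    # contextual prior over words (assumed values)

# Likelihood of the noisy visual input given each word (assumed values):
# the final letter is ambiguous between 't' and 'r'.
likelihood = np.array([0.6, 0.5, 0.1])

posterior = prior * likelihood
posterior /= posterior.sum()         # normalize: P(word | input)

for w, p in zip(words, posterior):
    print(f"P({w} | input) = {p:.3f}")
```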
  6. Gualtiero Piccinini (2008). Some Neural Networks Compute, Others Don't. Neural Networks 21 (2-3):311-321. (score: 89.0)
    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may (...)
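    Thesis (2) can be illustrated with a standard textbook construction. Below is a minimal sketch (mine, not from the paper) of a McCulloch-Pitts threshold unit wired to compute NAND; since NAND is functionally complete, networks of such units suffice for any Boolean circuit, which is the classical sense of computation at issue:

```python
# Illustrative sketch: a single McCulloch-Pitts threshold unit computing NAND.
# NAND is functionally complete, so networks of such units can implement any
# Boolean circuit -- one sense in which a neural network computes classically.
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def nand(x1, x2):
    # Weights and threshold chosen by hand: fires unless both inputs are 1.
    return mcculloch_pitts([x1, x2], weights=[-1, -1], threshold=-1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))   # prints the NAND truth table
```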
  7. Thomas R. Shultz & Alan C. Bale (2006). Neural Networks Discover a Near-Identity Relation to Distinguish Simple Syntactic Forms. Minds and Machines 16 (2):107-139. (score: 88.0)
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that (...)
  8. Paul Thagard & Terrence C. Stewart (2011). The AHA! Experience: Creativity Through Emergent Binding in Neural Networks. Cognitive Science 35 (1):1-33. (score: 78.0)
    Many kinds of creativity result from combination of mental representations. This paper provides a computational account of how creative thinking can arise from combining neural patterns into ones that are potentially novel and useful. We defend the hypothesis that such combinations arise from mechanisms that bind together neural activity by a process of convolution, a mathematical operation that interweaves structures. We describe computer simulations that show the feasibility of using convolution to produce emergent patterns of neural activity that can support (...)
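    The binding-by-convolution mechanism can be sketched in a few lines. The example below uses circular convolution on random vectors, in the style of Plate's holographic reduced representations that this proposal builds on; the dimensionality and the FFT-based implementation are illustrative choices, not details from the paper:

```python
# Minimal sketch of binding by circular convolution (Plate-style holographic
# reduced representations), the operation Thagard & Stewart propose for
# combining neural patterns. Dimensions and vectors are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 512
a = rng.normal(0, 1 / np.sqrt(n), n)   # random vector for one concept
b = rng.normal(0, 1 / np.sqrt(n), n)   # random vector for another concept

# Circular convolution via FFT interweaves the two patterns into one.
bound = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Unbinding with the approximate inverse (involution of a) recovers b.
a_inv = np.concatenate(([a[0]], a[1:][::-1]))
b_hat = np.real(np.fft.ifft(np.fft.fft(a_inv) * np.fft.fft(bound)))
cos = np.dot(b_hat, b) / (np.linalg.norm(b_hat) * np.linalg.norm(b))
print("similarity of recovered vector to b:", round(cos, 3))  # close to 1
```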
  9. Daniel A. Pollen (2003). Explicit Neural Representations, Recursive Neural Networks and Conscious Visual Perception. Cerebral Cortex 13 (8):807-814. (score: 75.0)
  10. Robert T. Pennock (2000). Can Darwinian Mechanisms Make Novel Discoveries?: Learning From Discoveries Made by Evolving Neural Networks. [REVIEW] Foundations of Science 5 (2):225-238. (score: 75.0)
    Some philosophers suggest that the development of scientific knowledge is a kind of Darwinian process. The process of discovery, however, is one problematic element of this analogy. I compare Herbert Simon's attempt to simulate scientific discovery in a computer program to recent connectionist models that were not designed for that purpose, but which provide useful cases to help evaluate this aspect of the analogy. In contrast to the classic A.I. approach Simon used, "neural networks" contain no explicit protocols, but are generic learning systems built on the model of (...)
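    A minimal sketch of the Darwinian mechanism at issue, in a toy setting the paper does not use: a (1+1) evolutionary hill-climber that "discovers" XOR by blind variation and selection over the weights of a small network:

```python
# Toy illustration of blind variation plus selection over network weights:
# a (1+1) hill-climber evolving a 2-3-1 network to solve XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def unpack(w):
    W1 = w[:6].reshape(3, 2); b1 = w[6:9]
    W2 = w[9:12];             b2 = w[12]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)                 # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

w = rng.normal(0, 1, 13)                       # random ancestor
for _ in range(20000):
    child = w + rng.normal(0, 0.1, 13)         # blind variation (mutation)
    if loss(child) <= loss(w):                 # selection keeps the fitter one
        w = child

print(np.round(forward(w, X), 2))              # approx. [0, 1, 1, 0] on most seeds
```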
  11. Qihui Duan, Ju H. Park & Zheng-Guang Wu (forthcoming). Exponential State Estimator Design for Discrete-Time Neural Networks with Discrete and Distributed Time-Varying Delays. Complexity:n/a-n/a. (score: 75.0)
  12. C. Monterola, R. M. Roxas & S. Carreon-Monterola (2009). Characterizing the Effect of Seating Arrangement on Classroom Learning Using Neural Networks. Complexity 14 (4):26-33. (score: 75.0)
  13. R. Rakkiyappan, A. Chandrasekar, S. Laksmanan & Ju H. Park (2013). State Estimation of Memristor-Based Recurrent Neural Networks with Time-Varying Delays Based on Passivity Theory. Complexity 19 (4):32-43. (score: 75.0)
  14. Christina Stoica-Klüver & Jürgen Klüver (2007). Interacting Neural Networks and the Emergence of Social Structure. Complexity 12 (3):41-52. (score: 75.0)
  15. Pete Mandik (2003). Varieties of Representation in Evolved and Embodied Neural Networks. Biology and Philosophy 18 (1):95-130. (score: 72.0)
    In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call "the economy problem": the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and (...)
  16. Steve Donaldson (2008). A Neural Network for Creative Serial Order Cognitive Behavior. Minds and Machines 18 (1):53-91. (score: 66.0)
    If artificial neural networks are ever to form the foundation for higher level cognitive behaviors in machines or to realize their full potential as explanatory devices for human cognition, they must show signs of autonomy, multifunction operation, and intersystem integration that are absent in most existing models. This model begins to address these issues by integrating predictive learning, sequence interleaving, and sequence creation components to simulate a spectrum of higher-order cognitive behaviors which have eluded the grasp of simpler systems. (...)
  17. Stan Franklin & Max Garzon (1992). On Stability and Solvability (or, When Does a Neural Network Solve a Problem?). Minds and Machines 2 (1):71-83. (score: 66.0)
    The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks. Stability must be the key ingredient in the solution of a problem by a neural network without external intervention. Infinite discrete networks seem to be the proper objects of study for a theory of neural computability which aims at characterizing problems solvable, in principle, by a neural network. Precise definitions of such problems and their solutions are given. (...)
  18. Marko Puljic & Robert Kozma (2005). Activation Clustering in Neural and Social Networks. Complexity 10 (4):42-50. (score: 66.0)
  19. A. Dev, S. S. Agrawal & D. R. Choudhury (2003). Categorization of Hindi Phonemes by Neural Networks. AI and Society 17 (3-4):375-382. (score: 66.0)
    The prime objective of this paper is to conduct phoneme categorization experiments for Indian languages. In this direction a major effort has been made to categorize Hindi phonemes using a time delay neural network (TDNN) and to compare the recognition scores with other languages. A total of six neural nets aimed at the major coarse phonetic classes in Hindi were trained. Evaluation of each net on 350 training tokens and 40 test tokens revealed a 99% recognition rate for vowel classes, (...)
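    The core TDNN operation is a one-dimensional convolution over time: each unit sees a short window of delayed feature frames, with weights shared across time steps. The sketch below is illustrative only; the shapes and values are invented, not those of the Hindi phoneme system described:

```python
# Sketch of the core TDNN operation: each unit looks at a sliding window of
# feature frames (time delays) with weights shared across time. Shapes and
# values are illustrative, not those used by Dev et al.
import numpy as np

rng = np.random.default_rng(0)
T, F = 50, 16                 # 50 time frames of 16 spectral coefficients
frames = rng.normal(size=(T, F))

delays = 3                    # window of 3 consecutive frames per unit
units = 8
W = rng.normal(size=(units, delays, F)) * 0.1
b = np.zeros(units)

# Convolve over time: output step t sees frames[t : t + delays].
out = np.stack([
    np.tanh(np.tensordot(frames[t:t + delays], W, axes=([0, 1], [1, 2])) + b)
    for t in range(T - delays + 1)
])
print(out.shape)              # (48, 8): shorter in time, 8 feature maps
```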
  20. M. Arbib (ed.) (2002). The Handbook of Brain Theory and Neural Networks. MIT Press. (score: 63.0)
    In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, "The Handbook of Brain Theory and Neural ...
  21. Donald Borrett, Sean D. Kelly & Hon Kwan (2000). Phenomenology, Dynamical Neural Networks and Brain Function. Philosophical Psychology 13 (2):213-228. (score: 60.0)
    Current cognitive science models of perception and action assume that the objects that we move toward and perceive are represented as determinate in our experience of them. A proper phenomenology of perception and action, however, shows that we experience objects indeterminately when we are perceiving them or moving toward them. This indeterminacy, as it relates to simple movement and perception, is captured in the proposed phenomenologically based recurrent network models of brain function. These models provide a possible foundation from which (...)
  22. Hannes Leitgeb (2005). Interpreted Dynamical Systems and Qualitative Laws: From Neural Networks to Evolutionary Systems. Synthese 146 (1-2):189-202. (score: 60.0)
    Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary (...)
  23. John G. Taylor (1997). Neural Networks for Consciousness. Neural Networks 10:1207-27. (score: 60.0)
  24. Paul M. Churchland (1997). To Transform the Phenomena: Feyerabend, Proliferation, and Recurrent Neural Networks. Philosophy of Science 64 (4):420. (score: 60.0)
    Paul Feyerabend recommended the methodological policy of proliferating competing theories as a means to uncovering new empirical data, and thus as a means to increase the empirical constraints that all theories must confront. Feyerabend's policy is here defended as a clear consequence of connectionist models of explanatory understanding and learning. An earlier connectionist "vindication" is criticized, and a more realistic and penetrating account is offered in terms of the computationally plastic cognitive profile displayed by neural networks with a recurrent (...)
  25. Helge Malmgren, Artificial Neural Networks in Medicine and Biology. (score: 60.0)
    Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use also leads (...)
  26. Adam Barrett & Harald Atmanspacher, Stability Criteria for the Contextual Emergence of Macrostates in Neural Networks. (score: 60.0)
    More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Stochastic and deterministic (...)
  27. Dan Lloyd (1998). The Fables of Lucy R.: Association and Dissociation in Neural Networks. In Dan J. Stein & J. Ludick (eds.), Neural Networks and Psychopathology. Cambridge University Press. 248-273. (score: 60.0)
    According to Aristotle, "to be learning something is the greatest of pleasures not only to the philosopher but also to the rest of mankind," (Poetics 1448b). But even as he affirms the unbounded human capacity for integrating new experience with existing knowledge, he alludes to a significant exception: "The sight of certain things gives us pain, but we enjoy looking at the most exact images of them, whether the forms of animals which we greatly despise or of corpses." Our capacity (...)
  28. Jürgen Hollatz (1999). Analogy Making in Legal Reasoning with Neural Networks and Fuzzy Logic. Artificial Intelligence and Law 7 (2-3):289-301. (score: 60.0)
    Analogy making from examples is a central task in intelligent system behavior. A lot of real-world problems involve analogy making and generalization. Research investigates these questions by building computer models of human thinking concepts. These concepts can be divided into high-level approaches as used in cognitive science and low-level models as used in neural networks. Applications range over the spectrum of recognition, categorization and analogy reasoning. A major part of legal reasoning could be formally interpreted as (...)
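    The fuzzy-logic side of such a proposal can be illustrated with a toy membership function (the predicate, breakpoints, and cases below are invented for illustration, not Hollatz's system): a legal-style predicate holds to a degree, so a new case can match a precedent gradually rather than exactly:

```python
# Illustrative sketch (not Hollatz's system): a fuzzy membership function lets
# a legal-style predicate such as "high speed" hold to a degree, so a new case
# can match a precedent by degree rather than exactly.
def high_speed(kmh, low=50.0, high=100.0):
    """Piecewise-linear membership in the fuzzy set 'high speed' (0..1)."""
    if kmh <= low:
        return 0.0
    if kmh >= high:
        return 1.0
    return (kmh - low) / (high - low)

precedent, new_case = 120.0, 85.0
# Degree to which the precedent's rule applies to the new case (min-conjunction).
applicability = min(high_speed(precedent), high_speed(new_case))
print(f"rule applies to degree {applicability:.2f}")   # 0.70
```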
  29. Edmund T. Rolls (1997). Consciousness in Neural Networks? Neural Networks 10:1227-1303. (score: 60.0)
  30. Reinhard Blutner (2004). Nonmonotonic Inferences and Neural Networks. Synthese 142 (2):143-174. (score: 60.0)
    There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and non-monotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as non-monotonic (...)
  31. Michael Lamport Commons (2008). Stacked Neural Networks Must Emulate Evolution's Hierarchical Complexity. World Futures 64 (5-7):444-451. (score: 60.0)
    The missing ingredients in efforts to develop neural networks and artificial intelligence (AI) that can emulate human intelligence have been the evolutionary processes of performing tasks at increased orders of hierarchical complexity. Stacked neural networks based on the Model of Hierarchical Complexity could emulate evolution's actual learning processes and behavioral reinforcement. Theoretically, this should result in stability and reduce certain programming demands. The eventual success of such methods raises questions about humans' survival in the face of androids of (...)
  32. Ingmar Visser (2000). Hidden Markov Model Interpretations of Neural Networks. Behavioral and Brain Sciences 23 (4):494-495. (score: 60.0)
    Page's manifesto makes a case for localist representations in neural networks, one of the advantages being ease of interpretation. However, even localist networks can be hard to interpret, especially when at some hidden layer of the network distributed representations are employed, as is often the case. Hidden Markov models can be used to provide useful interpretable representations.
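    As a concrete illustration of the proposal, a hidden Markov model can assign a likelihood to a sequence of (discretized) hidden-layer activation patterns. The forward-algorithm sketch below uses made-up parameter values and is not from the commentary itself:

```python
# Minimal forward-algorithm sketch: an HMM scores a sequence of discretized
# hidden-layer activation patterns, the kind of interpretable summary Visser
# suggests. All parameter values here are assumed, for illustration only.
import numpy as np

A = np.array([[0.9, 0.1],      # state-transition probabilities (row = from)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],      # P(observed pattern | hidden state)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])      # initial state distribution

obs = [0, 0, 1, 1, 1]          # indices of discretized activation patterns

alpha = pi * B[:, obs[0]]      # forward recursion over the sequence
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
print("sequence likelihood:", alpha.sum())
```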
  33. Enrico Blanzieri (1997). Dynamical Learning Algorithms for Neural Networks and Neural Constructivism. Behavioral and Brain Sciences 20 (4):559-559. (score: 60.0)
    The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm. Their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed with an emphasis on grammar learning.
  34. B. Doyon, B. Cessac, M. Quoy & M. Samuelides (1995). Mean-Field Equations, Bifurcation Map and Chaos in Discrete Time, Continuous State, Random Neural Networks. Acta Biotheoretica 43 (1-2). (score: 60.0)
    The dynamical behaviour of a very general model of neural networks with random asymmetric synaptic weights is investigated in the presence of random thresholds. Using mean-field equations, the bifurcations of the fixed points and the change of regime when varying control parameters are established. Different areas with various regimes are defined in the parameter space. Chaos arises generically by a quasi-periodicity route.
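    The regime changes analyzed here can be reproduced qualitatively with a small simulation. In the sketch below, the update rule x_{t+1} = tanh(g * J x_t) with Gaussian random J is a standard choice for this model class, not necessarily the authors' exact formulation, and the parameter values are illustrative: nearby trajectories stay together at low gain and diverge past the transition to chaos:

```python
# Numerical sketch of the regime change: a discrete-time random network
# x_{t+1} = tanh(g * J @ x_t) with J_ij ~ N(0, 1/N). At small gain g activity
# dies out; past a critical gain, nearby trajectories diverge (chaos).
import numpy as np

rng = np.random.default_rng(0)
N = 200
J = rng.normal(0, 1 / np.sqrt(N), (N, N))   # random asymmetric weights

def separation(g, steps=200, eps=1e-8):
    x = rng.normal(size=N)
    y = x + eps * rng.normal(size=N)        # perturbed twin trajectory
    for _ in range(steps):
        x = np.tanh(g * J @ x)
        y = np.tanh(g * J @ y)
    return np.linalg.norm(x - y)

for g in (0.5, 1.5, 3.0):
    print(f"g = {g}: final separation = {separation(g):.3e}")
```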
  35. B. Doyon, B. Cessac, M. Quoy & M. Samuelides (1994). On Bifurcations and Chaos in Random Neural Networks. Acta Biotheoretica 42 (2-3). (score: 60.0)
    Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Most of the time, recurrent networks (with feedback) are fully connected. Since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance (...)
  36. Michael A. Arbib (ed.) (2002). The Handbook of Brain Theory and Neural Networks, Second Edition. MIT Press. (score: 60.0)
    A new, dramatically updated edition of the classic resource on the constantly evolving fields of brain theory and neural networks.
  37. François Chapeau-Blondeau (1995). Information Processing in Neural Networks by Means of Controlled Dynamic Regimes. Acta Biotheoretica 43 (1-2). (score: 60.0)
    This paper is concerned with the modeling of neural systems regarded as information processing entities. I investigate the various dynamic regimes that are accessible in neural networks considered as nonlinear adaptive dynamic systems. The possibilities of obtaining steady, oscillatory or chaotic regimes are illustrated with different neural network models. Some aspects of the dependence of the dynamic regimes upon the synaptic couplings are examined. I emphasize the role that the various regimes may play to support information processing abilities. I (...)
  38. Dan J. Stein & Jacques Ludik (1998). Neural Networks and Psychopathology: An Introduction. In Dan J. Stein & J. Ludick (eds.), Neural Networks and Psychopathology. Cambridge University Press. (score: 60.0)
  39. Michael A. Arbib (ed.) (1995). Handbook of Brain Theory and Neural Networks. MIT Press. (score: 60.0)
  40. David Barber (2002). Bayesian Methods for Supervised Neural Networks. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press. (score: 60.0)
  41. Frederic Aviolatt, Daniel Cattani & Thierry Cornu (1996). Recognition of Meteorological Situations with Neural Networks. ESDA 1996: Expert Systems and AI; Neural Networks 7:41. (score: 60.0)
  42. Dan Hammerstrom (2002). Digital VLSI Neural Networks. In The Handbook of Brain Theory and Neural Networks. (score: 60.0)
  43. A. G. Guggisberg, S. S. Dalal, A. M. Findlay & S. S. Nagarajan (2006). High-Frequency Oscillations in Distributed Neural Networks Reveal the Dynamics of Human Decision Making. Frontiers in Human Neuroscience 1:14-14. (score: 60.0)
    We examine the relative timing of numerous brain regions involved in human decisions that are based on external criteria, learned information, personal preferences, or unconstrained internal considerations. Using magnetoencephalography (MEG) and advanced signal analysis techniques, we were able to non-invasively reconstruct oscillations of distributed neural networks in the high-gamma frequency band (60–150 Hz). The time course of the observed neural activity suggested that two-alternative forced choice tasks are processed in four overlapping stages: processing of sensory input, option evaluation, intention (...)
  44. Ashraf A. Kassim & B. V. K. V. Kumar (1995). Potential Fields and Neural Networks. In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press. (score: 60.0)
  45. V. Kurková (2002). Neural Networks as Universal Approximators. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press. 1180-1183. (score: 60.0)
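    For reference, the universal approximation property named in this chapter title can be stated in its classical form (this is the standard Cybenko/Hornik formulation, not necessarily Kurková's exact statement):

```latex
% Classical universal approximation statement (Cybenko 1989; Hornik et al.
% 1989): one-hidden-layer networks are dense in C(K) on a compact K.
\[
  \forall f \in C(K),\ \forall \varepsilon > 0,\ \exists n,\ a_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d :
  \quad
  \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{n} a_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon,
\]
% where K is a compact subset of R^d and sigma is a continuous sigmoidal
% activation function.
```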
  46. Yann LeCun & Yoshua Bengio (1995). Pattern Recognition and Neural Networks. In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press. 22. (score: 60.0)
  47. Jiming Liu & Oussama Khatib (2002). Practical Connection Between Potential Fields and Neural Networks. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press. (score: 60.0)
  48. J. Liu & O. Khatib (2002). Potential Fields and Neural Networks. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press. (score: 60.0)
  49. David J. C. MacKay (1995). Bayesian Methods for Supervised Neural Networks. In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press. (score: 60.0)
  50. S. Nolfi & D. Parisi (2002). Evolution and Learning in Neural Networks. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press. 2-415. (score: 60.0)
Results 1-50 of 1000+