Results for 'Biologically plausible spiking neural networks'

1000+ found
  1. Biologically Plausible, Human‐Scale Knowledge Representation. Eric Crawford, Matthew Gingerich & Chris Eliasmith - 2016 - Cognitive Science 40 (4):782-821.
    Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony, “mesh” binding, and conjunctive binding. Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic (...)
    4 citations
  2. Connecting Biological Detail With Neural Computation: Application to the Cerebellar Granule–Golgi Microcircuit. Andreas Stöckel, Terrence C. Stewart & Chris Eliasmith - 2021 - Topics in Cognitive Science 13 (3):515-533.
    We present techniques for integrating low‐level neurobiological constraints into high‐level, functional cognitive models. In particular, we use these techniques to construct a model of eyeblink conditioning in the cerebellum based on temporal representations in the recurrent Granule‐Golgi microcircuit.
  3. Information integration based predictions about the conscious states of a spiking neural network. David Gamez - 2010 - Consciousness and Cognition 19 (1):294-310.
    This paper describes how Tononi’s information integration theory of consciousness was used to make detailed predictions about the distribution of phenomenal states in a spiking neural network. This network had approximately 18,000 neurons and 700,000 connections and it used models of emotion and imagination to control the eye movements of a virtual robot and avoid ‘negative’ stimuli. The first stage in the analysis was the development of a formal definition of Tononi’s theory of consciousness. The network was then (...)
  4. On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Jeffrey S. Bowers - 2009 - Psychological Review 116 (1):220-251.
    A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts, that is, coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed (...)
    26 citations
  5. Dynamic thresholds for controlling encoding and retrieval operations in localist (or distributed) neural networks: The need for biologically plausible implementations. Alan D. Pickering - 2000 - Behavioral and Brain Sciences 23 (4):488-489.
    A dynamic threshold, which controls the nature and course of learning, is a pivotal concept in Page's general localist framework. This commentary addresses various issues surrounding biologically plausible implementations for such thresholds. Relevant previous research is noted and the particular difficulties relating to the creation of so-called instance representations are highlighted. It is stressed that these issues also apply to distributed models.
  6. Deep problems with neural network models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job (...)
    6 citations
  7. Toward biologically plausible artificial vision. Mason Westfall - 2023 - Behavioral and Brain Sciences 46:e290.
    Quilty-Dunn et al. argue that deep convolutional neural networks (DCNNs) optimized for image classification exemplify structural disanalogies to human vision. A different kind of artificial vision – found in reinforcement-learning agents navigating artificial three-dimensional environments – can be expected to be more human-like. Recent work suggests that language-like representations substantially improve these agents’ performance, lending some indirect support to the language-of-thought hypothesis (LoTH).
  8. Additional tests of Amit's attractor neural networks. Ralph E. Hoffman - 1995 - Behavioral and Brain Sciences 18 (4):634-635.
    Further tests of Amit's model are indicated. One strategy is to use the apparent coding sparseness of the model to make predictions about coding sparseness in Miyashita's network. A second approach is to use memory overload to induce false positive responses in modules and biological systems. In closing, the importance of temporal coding and timing requirements in developing biologically plausible attractor networks is mentioned.
  9. On bifurcations and chaos in random neural networks. B. Doyon, B. Cessac, M. Quoy & M. Samuelides - 1994 - Acta Biotheoretica 42 (2-3):215-225.
    Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Most of the time, recurrent networks (with feedbacks) are fully connected. Since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent (...)
    1 citation
  10. The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system. Akihiro Eguchi, James B. Isbister, Nasir Ahmad & Simon Stringer - 2018 - Psychological Review 125 (4):545-571.
  11. Neural networks, nativism, and the plausibility of constructivism. Steven R. Quartz - 1993 - Cognition 48 (3):223-242.
    42 citations
  12. Artificial Neural Networks in Medicine and Biology. Helge Malmgren - unknown
    Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use (...)
  13. Biological neural networks in invertebrate neuroethology and robotics. Randall D. Beer, Roy E. Ritzmann & Thomas McKenna - 1994 - Bioessays 16 (11):857.
     
  14. Improving With Practice: A Neural Model of Mathematical Development. Sean Aubin, Aaron R. Voelker & Chris Eliasmith - 2016 - Topics in Cognitive Science 9 (1):6-20.
    The ability to improve in speed and accuracy as a result of repeating some task is an important hallmark of intelligent biological systems. Although gradual behavioral improvements from practice have been modeled in spiking neural networks, few such models have attempted to explain cognitive development of a task as complex as addition. In this work, we model the progression from a counting-based strategy for addition to a recall-based strategy. The model consists of two networks working in (...)
    1 citation
  15. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Courtney J. Spoerer, Patrick McClure & Nikolaus Kriegeskorte - 2017 - Frontiers in Psychology 8.
    2 citations
  16. A Biologically Inspired Neural Network Model to Gain Insight Into the Mechanisms of Post-Traumatic Stress Disorder and Eye Movement Desensitization and Reprocessing Therapy. Andrea Mattera, Alessia Cavallo, Giovanni Granato, Gianluca Baldassarre & Marco Pagani - 2022 - Frontiers in Psychology 13.
    Eye movement desensitization and reprocessing (EMDR) therapy is a well-established therapeutic method to treat post-traumatic stress disorder (PTSD). However, how EMDR exerts its therapeutic action has been studied extensively and is still not completely understood. This is due in part to limited knowledge of the neurobiological mechanisms underlying EMDR, and in part to our incomplete understanding of PTSD. In order to model PTSD, we used a biologically inspired computational model based on firing rate units, encompassing the (...)
  17. Biologically applied neural networks may foster the coevolution of neurobiology and cognitive psychology. Bill Baird - 1987 - Behavioral and Brain Sciences 10 (3):436-437.
  18. A Neural Network Framework for Cognitive Bias. Johan E. Korteling, Anne-Marie Brouwer & Alexander Toet - 2018 - Frontiers in Psychology 9:358644.
    Human decision making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a (...) network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. In order to substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
    4 citations
  19. A Neural Model of Rule Generation in Inductive Reasoning. Daniel Rasmussen & Chris Eliasmith - 2011 - Topics in Cognitive Science 3 (1):140-153.
    Inductive reasoning is a fundamental and complex aspect of human intelligence. In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The (...)
    10 citations
  20. Corrigendum: Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Courtney J. Spoerer, Patrick McClure & Nikolaus Kriegeskorte - 2018 - Frontiers in Psychology 9.
  21. Vector subtraction implemented neurally: A neurocomputational model of some sequential cognitive and conscious processes. John Bickle, Cindy Worley & Marica Bernstein - 2000 - Consciousness and Cognition 9 (1):117-144.
    Although great progress in neuroanatomy and physiology has occurred lately, we still cannot go directly to those levels to discover the neural mechanisms of higher cognition and consciousness. But we can use neurocomputational methods based on these details to push this project forward. Here we describe vector subtraction as an operation that computes sequential paths through high-dimensional vector spaces. Vector-space interpretations of network activity patterns are a fruitful resource in recent computational neuroscience. Vector subtraction also appears to be implemented (...)
    1 citation
  22. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience. Birgitta Dresp-Langley - 2023 - Information 14 (2):1-82.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level, long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited in (...)
    1 citation
  23. Handbook of Brain Theory and Neural Networks. Michael A. Arbib (ed.) - 1995 - MIT Press.
    Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific areas related to two great questions: How does the brain work? and How can we build intelligent machines? While many books have appeared on limited aspects of one subfield or another of brain theory and neural (...)
    16 citations
  24. A Brief Review of Neural Networks Based Learning and Control and Their Applications for Robots. Yiming Jiang, Chenguang Yang, Jing Na, Guang Li, Yanan Li & Junpei Zhong - 2017 - Complexity:1-14.
    As an imitation of the biological nervous systems, neural networks (NNs), which have been characterized as powerful learning tools, are employed in a wide range of applications, such as control of complex nonlinear systems, optimization, system identification, and pattern recognition. This article aims to bring a brief review of the state-of-the-art NNs for the complex nonlinear systems by summarizing recent progress of NNs in both theory and practical applications. Specifically, this survey also reviews a number of NN based robot (...)
    5 citations
  25. Neural networks need real-world behavior. Aedan Y. Li & Marieke Mur - 2023 - Behavioral and Brain Sciences 46:e398.
    Bowers et al. propose to use controlled behavioral experiments when evaluating deep neural networks as models of biological vision. We agree with the sentiment and draw parallels to the notion that “neuroscience needs behavior.” As a promising path forward, we suggest complementing image recognition tasks with increasingly realistic and well-controlled task environments that engage real-world object recognition behavior.
  26. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience. Birgitta Dresp-Langley - 2023 - Information 14 (2):1-17.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory [19] decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited (...)
  27. The brain, the artificial neural network and the snake: why we see what we see. Carloalberto Treccani - forthcoming - AI and Society:1-9.
    For millions of years, biological creatures have dealt with the world without being able to see it; however, the change in atmospheric conditions during the Cambrian period and the subsequent increase of light triggered the sudden evolution of vision and its consequent evolutionary benefits. Nevertheless, how organisms, from simple life forms to more complex animals, have been able to generate meaning from the light that falls on their eyes and successfully engage the visual world remains unknown. As shown by many psychophysical (...)
  28. Localist representations are a desirable emergent property of neurologically plausible neural networks. Colin Martindale - 2000 - Behavioral and Brain Sciences 23 (4):485-486.
    Page has done connectionist researchers a valuable service in this target article. He points out that connectionist models using localized representations often work as well or better than models using distributed representations. I point out that models using distributed representations are difficult to understand and often lack parsimony and plausibility. In conclusion, I give an example – the case of the missing fundamental in music – that can easily be explained by a model using localist representations but can be explained (...)
  29. A general account of selection: Biology, immunology, and behavior – Open Peer Commentary – A neural-network interpretation of selection in learning and behavior. D. L. Hull, R. E. Langman, S. S. Glenn & J. E. Burgos - 2001 - Behavioral and Brain Sciences 24 (3):531-532.
    In their account of learning and behavior, the authors define an interactor as emitted behavior that operates on the environment, which excludes Pavlovian learning. A unified neural-network account of the operant-Pavlovian dichotomy favors interpreting neurons as interactors and synaptic efficacies as replicators. The latter interpretation implies that single-synapse change is inherently Lamarckian.
  30. Intelligent Computing in Bioinformatics – Genetic Algorithm and Neural Network Based Classification in Microarray Data Analysis with Biological Validity Assessment. Vitoantonio Bevilacqua, Giuseppe Mastronardi & Filippo Menolascina - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4115--475.
     
  31. Neurobiological Modeling and Analysis – An Electromechanical Neural Network Robotic Model of the Human Body and Brain: Sensory-Motor Control by Reverse Engineering Biological Somatic Sensors. Alan Rosen & David B. Rosen - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4232--105.
  32. Front Waves of Chemical Reactions and Travelling Waves of Neural Activity. Yidi Zhang, Shan Guo, Mingzhu Sun, Lucio Mariniello, Arturo Tozzi & Xin Zhao - 2022 - Journal of Neurophilosophy 1 (2).
    Travelling waves crossing the nervous networks at mesoscopic/macroscopic scales have been correlated with different brain functions, from long-term memory to visual stimuli. Here we investigate a feasible relationship between wave generation/propagation in recurrent nervous networks and a physical/chemical model, namely the Belousov–Zhabotinsky (BZ) reaction. Since BZ's nonlinear, chaotic chemical process generates concentric/intersecting waves that closely resemble the diffusive nonlinear/chaotic oscillatory patterns crossing the nervous tissue, we aimed to investigate whether wave propagation of brain oscillations could be described in terms (...)
  33. A solution to the tag-assignment problem for neural networks. Gary W. Strong & Bruce A. Whitehead - 1989 - Behavioral and Brain Sciences 12 (3):381-397.
    Purely parallel neural networks can model object recognition in brief displays – the same conditions under which illusory conjunctions have been demonstrated empirically. Correcting errors of illusory conjunction is the “tag-assignment” problem for a purely parallel processor: the problem of assigning a spatial tag to nonspatial features, feature combinations, and objects. This problem must be solved to model human object recognition over a longer time scale. Our model simulates both the parallel processes that may underlie illusory conjunctions and (...)
    175 citations
  34. Even deeper problems with neural network models of language. Thomas G. Bever, Noam Chomsky, Sandiway Fong & Massimo Piattelli-Palmarini - 2023 - Behavioral and Brain Sciences 46:e387.
    We recognize today's deep neural network (DNN) models of language behaviors as engineering achievements. However, what we know intuitively and scientifically about language shows that what DNNs are, and how they are trained on bare texts, makes them poor models of mind and brain for language organization as it interacts with infant biology, maturation, experience, unique principles, and natural law.
  35. Phenomenology, dynamical neural networks and brain function. Donald Borrett, Sean D. Kelly & Hon Kwan - 2000 - Philosophical Psychology 13 (2):213-228.
    Current cognitive science models of perception and action assume that the objects that we move toward and perceive are represented as determinate in our experience of them. A proper phenomenology of perception and action, however, shows that we experience objects indeterminately when we are perceiving them or moving toward them. This indeterminacy, as it relates to simple movement and perception, is captured in the proposed phenomenologically based recurrent network models of brain function. These models provide a possible foundation from which (...)
    3 citations
  36. Cultural Exaptation and Cultural Neural Reuse: A Mechanism for the Emergence of Modern Culture and Behavior. Francesco D’Errico & Ivan Colagè - 2018 - Biological Theory 13 (4):213-227.
    On the basis of recent advancements in both neuroscience and archaeology, we propose a plausible biocultural mechanism at the basis of cultural evolution. The proposed mechanism, which relies on the notions of cultural exaptation and cultural neural reuse, may account for the asynchronous, discontinuous, and patchy emergence of innovations around the globe. Cultural exaptation refers to the reuse of previously devised cultural features for new purposes. Cultural neural reuse refers to cases in which exposure to cultural practices (...)
    12 citations
  37. Human Symmetry Uncertainty Detected by a Self-Organizing Neural Network Map. Birgitta Dresp-Langley - 2021 - Symmetry 13:299.
    Symmetry in biological and physical systems is a product of self-organization driven by evolutionary processes, or mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents in large image databases. Despite significant achievements of artificial intelligence in recognition and classification of regular patterns, the problem of uncertainty remains a major challenge in ambiguous data. In this study, we present an artificial neural network that detects symmetry uncertainty states in human (...)
  38. The Handbook of Brain Theory and Neural Networks. Michael A. Arbib (ed.) - 1998 - MIT Press.
    Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific areas related to great questions: How does the brain work? How can we build intelligent machines? While many books discuss limited aspects of one subfield or another of brain theory and neural networks, (...)
    16 citations
  39. Dynamical learning algorithms for neural networks and neural constructivism. Enrico Blanzieri - 1997 - Behavioral and Brain Sciences 20 (4):559-559.
    The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm. Their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed with an emphasis on grammar learning.
  40. Emergent Quantumness in Neural Networks. Mikhail I. Katsnelson & Vitaly Vanchurin - 2021 - Foundations of Physics 51 (5):1-20.
    It was recently shown that the Madelung equations, that is, a hydrodynamic form of the Schrödinger equation, can be derived from a canonical ensemble of neural networks where the quantum phase was identified with the free energy of hidden variables. We consider instead a grand canonical ensemble of neural networks, by allowing an exchange of neurons with an auxiliary subsystem, to show that the free energy must also be multivalued. By imposing the multivaluedness condition on the (...)
  41. Adaptive Orthogonal Characteristics of Bio-Inspired Neural Networks. Naohiro Ishii, Toshinori Deguchi, Masashi Kawaguchi, Hiroshi Sasaki & Tokuro Matsuo - 2022 - Logic Journal of the IGPL 30 (4):578-598.
    In recent years, neural networks have attracted much attention in machine learning and deep learning technologies. Bio-inspired functions and intelligence are also expected to process efficiently and improve existing technologies. In the visual pathway, the prominent features consist of nonlinear characteristics of squaring and rectification functions observed in the retinal and visual cortex networks, respectively. Further, adaptation is an important feature for activating biological systems efficiently. Recently, to overcome shortcomings of the deep learning techniques, (...)
    1 citation
  42. Can robots learn like insects, can neurobiologists learn from robots? Biological Neural Networks in Invertebrate Neuroethology and Robotics (1993). Edited by Randall D. Beer, Roy E. Ritzmann and Thomas McKenna. Academic Press, pp. xi+417. £48.00. ISBN 0‐12‐084728‐0. [REVIEW] W. J. Heitler - 1994 - Bioessays 16 (11):858-859.
  43. Varieties of representation in evolved and embodied neural networks. Pete Mandik - 2003 - Biology and Philosophy 18 (1):95-130.
    In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call “the economy problem”: the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind (...)
    5 citations
  44. Psychophysics may be the game-changer for deep neural networks (DNNs) to imitate the human vision. Keerthi S. Chandran, Amrita Mukherjee Paul, Avijit Paul & Kuntal Ghosh - 2023 - Behavioral and Brain Sciences 46:e388.
    Psychologically faithful deep neural networks (DNNs) could be constructed by training with psychophysics data. Moreover, conventional DNNs are mostly monocular vision based, whereas the human brain relies mainly on binocular vision. DNNs developed as smaller vision agent networks associated with fundamental and less intelligent visual activities can be combined to simulate more intelligent visual activities done by the biological brain.
  45. Estimation and application of matrix eigenvalues based on deep neural network. Zhiying Hu - 2022 - Journal of Intelligent Systems 31 (1):1246-1261.
    In today’s era of rapid development in science and technology, the development of digital technology has increasingly higher requirements for data processing functions. The matrix signal commonly used in engineering applications also puts forward higher requirements for processing speed. The eigenvalues of the matrix represent many characteristics of the matrix. Its mathematical meaning represents the expansion of the inherent vector, and its physical meaning represents the spectrum of vibration. The eigenvalue of a matrix is the focus of matrix theory. The (...)
    Direct download  
     
    Export citation  
     
    Bookmark  
  46.  54
    A coupled attractor model of the rodent head direction system.Adam Elga - unknown
    Head direction (HD) cells, abundant in the rat postsubiculum and anterior thalamic nuclei, fire maximally when the rat’s head is facing a particular direction. The activity of a population of these cells forms a distributed representation of the animal’s current heading. We describe a neural network model that creates a stable, distributed representation of head direction and updates that representation in response to angular velocity information. In contrast to earlier models, our model of the head direction system accurately tracks (...)
    Direct download  
     
    Export citation  
     
    Bookmark   2 citations  
  47.  10
    Forecast Model of TV Show Rating Based on Convolutional Neural Network.Lingfeng Wang - 2021 - Complexity 2021:1-10.
    The TV show rating analysis and prediction system can collect and transmit information more quickly and upload it to the database promptly. The convolutional neural network is a multilayer neural network structure that simulates the operating mechanism of biological vision systems. It is a neural network composed of multiple convolutional layers and downsampling layers sequentially connected. It can obtain useful feature descriptions from original data and is an effective method to extract features from data. At present, (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  48.  23
    Psychic systems and metaphysical machines: experiencing behavioural prediction with neural networks.Max B. Kazemzadeh - 2010 - Technoetic Arts 8 (2):189-198.
    We are living in a time of meta-organics and post-biology, where we perceive everything in our world as customizable and changeable. Modelling biology within a technological context allows us to investigate GEO-volutionary alternatives/alterations to our original natural systems, where augmentation and transmutation become standards in search of overall betterment (Genetically Engineered Organics). Our expectations for technology exceed ubiquitous access and functional perfection and enter the world of technoetics, where our present hyper-functional, immersively multi-apped, borderline-prosthetic, global village devices fail to satiate (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark  
  49.  24
    Epigenetics and Bruxism: from Hyper-Narrative Neural Networks to Hyper-Function.Aleksandra Čalić & Eva Vrtačič - 2020 - Biosemiotics 13 (2):241-259.
    This article develops a biosemiotic 'hyper-narrative model' for the purposes of investigating emergent motor behaviors. It proposes to understand such behaviors in terms of the following associations: the organization of information acquired from the environment, focusing on narrative; the organizational dynamics of epigenetic mechanisms that underlie the neural processes facilitating the processing of information; and the evolution of emergent motor behaviors that enable the informational acquisition. The article describes and explains these associations as part of a multi-ordered and multi-causal (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark  
  50.  13
    Extrapolating a Hierarchy of Building Block Systems Towards Future Neural Network Organisms.Gerard Jagers op Akkerhuis - 2001 - Acta Biotheoretica 49 (3):171-189.
    Is it possible to predict future life forms? In this paper it is argued that the answer to this question may well be positive. As a basis for predictions, a rationale is used that is derived from historical data, e.g. from a hierarchical classification that ranks all building block systems that have evolved so far. This classification is based on specific emergent properties that allow stepwise transitions, from low level building blocks to higher level ones. This paper shows how this (...)
    Direct download  
     
    Export citation  
     
    Bookmark   2 citations  