The literature on common pool resource (CPR) governance lists numerous factors that influence whether a given CPR system achieves long-term ecological sustainability. To date there is no comprehensive model that integrates these factors or explains success within or across cases and sectors. Difficulties include the absence of large-N studies (Poteete 2008), the incomparability of single case studies, and the interdependence of factors (Agrawal and Chhatre 2006). We propose (1) a synthesis of 24 success factors based on the current SES framework and a literature review; (2) the application of neural networks to a database of CPR management case studies in an attempt to test the viability of this synthesis. This method allows us to obtain an implicit, quantitative and rather precise model of the interdependencies in CPR systems. Given such a model, every success factor in each case can be manipulated separately, yielding different predictions for success. This could become a fast and inexpensive way to analyze, predict and optimize performance for communities world-wide facing CPR challenges. Existing theoretical frameworks could be improved as well.
I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call ``the economy problem'': the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and an organism. My approach in the current paper is to explore this and other key themes in neurosemantics through the use of computer models of neural networks embodied and evolved in virtual organisms. The models allow for the laying bare of the causal economies of entire yet simple artificial organisms so that the relations between the neural bases of, for instance, representation in perception and memory can be regarded in the context of an entire organism. On the basis of these simulations, I argue for an account of neurosemantics adequate for the solution of the economy problem.
Current cognitive science models of perception and action assume that the objects that we move toward and perceive are represented as determinate in our experience of them. A proper phenomenology of perception and action, however, shows that we experience objects indeterminately when we are perceiving them or moving toward them. This indeterminacy, as it relates to simple movement and perception, is captured in the proposed phenomenologically based recurrent network models of brain function. These models provide a possible foundation from which predicative structures may arise as an emergent phenomenon without the positing of a representing subject. These models go some way in addressing the dual constraints of phenomenological accuracy and neurophysiological plausibility that ought to guide all projects devoted to discovering the physical basis of human experience.
Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary systems are instances of such interpreted dynamical systems, and thus our results entail that each of them may be described correctly and, in a sense, even completely by qualitative laws that obey the rules of a nonmonotonic logic system.
Paul Feyerabend recommended the methodological policy of proliferating competing theories as a means to uncovering new empirical data, and thus as a means to increase the empirical constraints that all theories must confront. Feyerabend's policy is here defended as a clear consequence of connectionist models of explanatory understanding and learning. An earlier connectionist "vindication" is criticized, and a more realistic and penetrating account is offered in terms of the computationally plastic cognitive profile displayed by neural networks with a recurrent architecture.
Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use also leads us into the problems of induction and probability. Ever since David Hume expressed his famous doubts about induction, the principles of scientific inference have been a central concern for philosophers.
More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Stochastic and deterministic..
Analogy making from examples is a central task in intelligent system behavior. A lot of real world problems involve analogy making and generalization. Research investigates these questions by building computer models of human thinking concepts. These concepts can be divided into high-level approaches as used in cognitive science and low-level models as used in neural networks. Applications range over the spectrum of recognition, categorization and analogy reasoning. A major part of legal reasoning could be formally interpreted as an analogy making process. Because it is not the same as reasoning in mathematics or the physical sciences, it is necessary to use a method which incorporates, first, the ability to specify likelihood and, second, the opportunity to include known court decisions. We use neural networks and fuzzy systems to model the analogy making process in legal reasoning. In the first part of the paper a neural network is described that identifies precedents of immaterial damages. The second application presents a fuzzy system for determining the required waiting period after traffic accidents. Both examples demonstrate how to model reasoning in legal applications analogously to recent decisions: first, by training a system on court decisions, and second, by analyzing, modelling and testing the decision making with a fuzzy system.
Computational approaches to the law have frequently been characterized as formalistic implementations of the syllogistic model, incapable of capturing essential aspects of legal cognition: using insufficient or contradictory data, making analogies, learning through examples and experiences, applying vague and imprecise standards. We argue that, on the contrary, studies on neural networks and fuzzy reasoning show how AI & law research can go beyond syllogism, and, in doing that, can provide substantial contributions to the law.
As a criterion of a good firm, a lucrative and growing business has long been held to be important. Recently, however, high profitability and high growth potential have become insufficient criteria, because the social influences exerted by firms have grown extremely significant. In this paper, high social relationship is added to the list of criteria. Empirical corporate social performance versus corporate financial performance (CSP–CFP) relationship studies that consider social relationship are very limited in Japan, and there are no definite conclusions from such studies worldwide, because of scant data and inappropriate methods, especially those supporting the linear hypothesis on which these studies are based. In this paper, the CSP–CFP relationship is analyzed with an artificial neural network model, which can deal with a non-linear relationship, using 10-year follow-up survey data.
According to Aristotle, "to be learning something is the greatest of pleasures not only to the philosopher but also to the rest of mankind," (Poetics 1448b). But even as he affirms the unbounded human capacity for integrating new experience with existing knowledge, he alludes to a significant exception: "The sight of certain things gives us pain, but we enjoy looking at the most exact images of them, whether the forms of animals which we greatly despise or of corpses." Our capacity for learning is happily engaged in viewing representations of painful objects, but not, it seems, in viewing the objects themselves. When an experience is intensely painful, what then is a rational animal to do? We can neither disable our learning process, nor erase its traces. In the face of intense pain, horror, or terror, learning and remembrance cause no pleasure but rather persistent psychological pain and disruption. The memorious mind reverberates with trauma.
There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and non-monotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as non-monotonic inferences, and (b) that there is a strict correspondence between the coding of knowledge in Hopfield networks and the knowledge representation in weight-annotated Poole systems. These results show the usefulness of non-monotonic logic as a descriptive and analytic tool for analyzing emerging properties of connectionist networks. Assuming an exponential development of the weight function, the present account relates to optimality theory – a general framework that aims to integrate insights from symbolism and connectionism. The paper concludes with some speculations about extending the present ideas.
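The kind of knowledge coding in Hopfield networks mentioned above can be illustrated with a minimal sketch. The Hebbian outer-product rule and the recall loop below are standard textbook constructions, not the paper's weight-annotated Poole systems; the function names are introduced here for illustration.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning: W = sum of p p^T, with zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    """Synchronous updates until a fixed point (an attractor) is reached."""
    s = state.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

# Store one bipolar pattern and recover it from a corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[0] = -1  # flip one bit
print(recall(W, cue))  # converges back to the stored pattern
```

The stored pattern acts as an attractor of the dynamics, which is what licenses reading retrieval as a (non-monotonic) inference from partial information.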
The missing ingredients in efforts to develop neural networks and artificial intelligence (AI) that can emulate human intelligence have been the evolutionary processes of performing tasks at increased orders of hierarchical complexity. Stacked neural networks based on the Model of Hierarchical Complexity could emulate evolution's actual learning processes and behavioral reinforcement. Theoretically, this should result in stability and reduce certain programming demands. The eventual success of such methods raises the question of humans' survival in the face of androids of superior intelligence and physical composition, and with it moral questions worthy of speculation.
Page's manifesto makes a case for localist representations in neural networks, one of the advantages being ease of interpretation. However, even localist networks can be hard to interpret, especially when distributed representations are employed at some hidden layer of the network, as is often the case. Hidden Markov models can be used to provide useful, interpretable representations.
The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm, and their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed, with an emphasis on grammar learning.
The dynamical behaviour of a very general model of neural networks with random asymmetric synaptic weights is investigated in the presence of random thresholds. Using mean-field equations, the bifurcations of the fixed points and the change of regime when varying control parameters are established. Different areas with various regimes are defined in the parameter space. Chaos arises generically by a quasi-periodicity route.
This paper examines the use of connectionism (neural networks) in modelling legal reasoning. I discuss how implementations of neural networks have failed to account for legal theoretical perspectives on adjudication. I criticise the use of neural networks in law, not because connectionism is inherently unsuitable to law, but rather because it has been done so poorly to date. The paper reviews a number of legal theories which provide a grounding for the use of neural networks in law. It then examines some implementations undertaken in law and criticises their legal theoretical naïveté. Finally, it presents lessons from these implementations which researchers must bear in mind if they wish to build neural networks that are justified by legal theories.
Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Most of the time, recurrent networks (with feedback) are fully connected. Since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we obtain a bifurcation parameter, dependent on this variance and on the slope of the transfer function, that permits sustained activity and the occurrence of chaos once a critical value is reached. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected networks of infinite size. The route towards chaos is numerically checked to be quasi-periodic, whatever the type of the first bifurcation. Our results suggest that such high-dimensional networks behave like low-dimensional dynamical systems.
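The transition described above can be sketched numerically under stated assumptions: a tanh transfer function, Gaussian weights diluted by a random mask, and weight variance normalized by the mean number of incoming connections, so that the gain `g` plays the role of the bifurcation parameter. Network size and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 200, 0.1                      # network size, connection probability (dilution)
mask = rng.random((N, N)) < p
# Normalize the weight variance by the expected number of incoming connections,
# so the spectral radius of J is about 1 and g acts as the bifurcation parameter.
J = rng.standard_normal((N, N)) * mask / np.sqrt(p * N)

def mean_activity(g, steps=500):
    """Iterate x <- tanh(g * J x) and return the late-time mean |x|."""
    x = rng.standard_normal(N) * 0.1
    for _ in range(steps):
        x = np.tanh(g * J @ x)
    return np.mean(np.abs(x))

low, high = mean_activity(0.5), mean_activity(2.5)
print(low, high)  # below the critical gain activity dies out; above it, it is sustained
```

Below the critical gain the zero fixed point is stable and activity decays; above it the network settles into sustained, irregular activity, consistent with the bifurcation picture sketched in the abstract.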
This paper is concerned with the modeling of neural systems regarded as information processing entities. I investigate the various dynamic regimes that are accessible in neural networks considered as nonlinear adaptive dynamic systems. The possibilities of obtaining steady, oscillatory or chaotic regimes are illustrated with different neural network models. Some aspects of the dependence of the dynamic regimes upon the synaptic couplings are examined. I emphasize the role that the various regimes may play to support information processing abilities. I present an example where controlled transient evolutions in a neural network are used to model the regulation of motor activities by the cerebellar cortex.
Recent computer simulations of evolving neural networks have shown that population-level behavioral asymmetries can arise without social interactions. Although these models are quite limited at present, they support the hypothesis that social pressures can be sufficient but are not necessary for population lateralization to occur, and they provide a framework for further theoretical investigation of this issue.
Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that they use this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar. Recent simulations that were claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382] confounded syntactic types with speech sounds and did not perform standard statistical tests of results.
Does connectionism spell doom for folk psychology? I examine the proposal that cognitive representational states such as beliefs can play no role if connectionist models -- interpreted as radical new cognitive theories -- take hold and replace other cognitive theories. Though I accept that connectionist theories are radical theories that shed light on cognition, I reject the conclusion that neural networks do not represent. Indeed, I argue that neural networks may actually give us a better working notion of cognitive representational states such as beliefs, and in so doing give us a better understanding of how these states might be instantiated in neural wetware.
Random simulation of complex dynamical systems is generally used in order to obtain information about their asymptotic behaviour (i.e., when time or size of the system tends towards infinity). A fortunate and welcome circumstance in most of the systems studied by physicists, biologists, and economists is the existence of an invariant measure in the state space allowing determination of the frequency with which observation of asymptotic states is possible. Regions found between contour lines of the surface density of this invariant measure are called confiners. An example of such confiners is given for a formal neural network capable of learning. Finally, an application of this methodology is proposed in studying dependency of the network's invariant measure with regard to: 1) the mode of neurone updating (parallel or sequential), and 2) boundary conditions of the network (searching for phase transitions).
Clahsen's theory raises problems that make it seem untenable. As an alternative, a constructivist neural network model is reported that develops a modular architecture and in which a single associative mechanism produces all inflections, displaying an emergent dissociation between regular and irregular verbs. Thus, Clahsen's rejection of associative models of inflection concerns only a subgroup of these models.
Page proposes a simple, localist, lateral inhibitory network for implementing a selection process that approximately conforms to the Luce choice rule. I describe another localist neural mechanism for selection in accordance with the Luce choice rule. The mechanism implements an independent race model. It consists of parallel, independent nerve fibers connected to a winner-take-all cluster, which records the winner of the race.
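The independent race mechanism can be made concrete with one well-known construction (an assumption here, not spelled out in the commentary): if the units' latencies are independent and exponentially distributed with rates v_i, the probability that unit i finishes first is exactly v_i / Σ_j v_j, i.e. the Luce choice rule. A simulation checks this:

```python
import numpy as np

def race_winner_freqs(rates, trials=100_000, seed=0):
    """Simulate independent exponential races; return empirical win frequencies."""
    rng = np.random.default_rng(seed)
    # Latency of unit i ~ Exponential(rate v_i); the winner-take-all
    # cluster simply records the unit with the shortest latency.
    latencies = rng.exponential(1.0 / np.asarray(rates), size=(trials, len(rates)))
    winners = latencies.argmin(axis=1)
    return np.bincount(winners, minlength=len(rates)) / trials

rates = [2.0, 1.0, 1.0]
freqs = race_winner_freqs(rates)
luce = np.asarray(rates) / sum(rates)  # Luce choice rule: v_i / sum_j v_j
print(freqs, luce)  # the empirical frequencies should closely match the Luce rule
```

The exponential-latency assumption is what makes the correspondence exact; other latency distributions yield only approximate conformity to the Luce rule, as the commentary's "approximately conforms" wording suggests.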
In order to benefit from the advantages of localist coding, neural models that feature winner-take-all representations at the top level of a network hierarchy must still solve the computational problems inherent in distributed representations at the lower levels.
The present study examined the neural substrate of two classes of quantifiers: numerical quantifiers like "at least three", which require magnitude processing, and logical quantifiers like "some", which can be understood using a simple form of perceptual logic. We assessed these distinct classes of quantifiers with converging observations from two sources: functional imaging data from healthy adults, and behavioral and structural data from patients with corticobasal degeneration who have acalculia. Our findings are consistent with the claim that numerical quantifier comprehension depends on a lateral parietal-dorsolateral prefrontal network, but logical quantifier comprehension depends instead on a rostral medial prefrontal-posterior cingulate network. These observations emphasize the important contribution of abstract number knowledge to the meaning of numerical quantifiers in semantic memory and the potential role of a logic-based evaluation in the service of non-numerical quantifiers.
Many kinds of creativity result from combination of mental representations. This paper provides a computational account of how creative thinking can arise from combining neural patterns into ones that are potentially novel and useful. We defend the hypothesis that such combinations arise from mechanisms that bind together neural activity by a process of convolution, a mathematical operation that interweaves structures. We describe computer simulations that show the feasibility of using convolution to produce emergent patterns of neural activity that can support cognitive and emotional processes underlying human creativity.
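The convolution-based binding described above can be sketched with circular convolution, as in holographic reduced representations; the FFT implementation and helper names below are illustrative, not the authors' simulation code.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: interweaves two patterns into one of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation: approximately inverts the binding."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

rng = np.random.default_rng(0)
n = 512
x = rng.standard_normal(n) / np.sqrt(n)  # one random neural pattern
y = rng.standard_normal(n) / np.sqrt(n)  # another pattern
bound = cconv(x, y)                      # combined (potentially novel) pattern
y_hat = ccorr(x, bound)                  # unbinding recovers a noisy copy of y
similarity = y_hat @ y / (np.linalg.norm(y_hat) * np.linalg.norm(y))
print(similarity)  # well above chance (near 0 for unrelated patterns)
```

The combined pattern is the same size as its constituents yet preserves them well enough to be recovered by correlation, which is what allows such combinations to remain usable by downstream processes.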
Functional neuroimaging studies allow examination of the cerebral networks involved in human behavior. For pathological aggression, several studies have reported an involvement of frontal and temporal areas, reflecting disruption of emotional regulatory systems. Recent genetic studies further link reward system dysfunction and violent behavior.
Functional brain imaging offers new opportunities for the study of that most pervasive of cognitive conditions, human consciousness. Since consciousness is attendant to so much of human cognitive life, its study requires secondary analysis of multiple experimental datasets. Here, four preprocessed datasets from the National fMRI Data Center are considered: Hazeltine et al., Neural activation during response competition; Ishai et al., The representation of objects in the human occipital and temporal cortex; Mechelli et al., The effects of presentation rate during word and pseudoword reading; and Postle et al., Activity in human frontal cortex associated with spatial working memory and saccadic behavior. The study of consciousness also draws from multiple disciplines. In this article, the philosophical subdiscipline of phenomenology provides initial characterization of phenomenal structures conceptually necessary for an analysis of consciousness. These structures include phenomenal intentionality, phenomenal superposition, and experienced temporality. The analyses begin with single-subject (preprocessed) scan series, and consider the patterns of all voxels as potential multivariate encodings of phenomenal information. Twenty-seven subjects from the four studies were analyzed with multivariate methods, revealing analogues of phenomenal structures, particularly the structures of temporality. In a second interpretive approach, artificial neural networks were used to detect a more explicit prediction from phenomenology, namely, that present experience contains and is inflected by past states of awareness and anticipated events. In all of 21 subjects in this analysis, nets were successfully trained to extract aspects of relative past and future brain states, in comparison with statistically similar controls. This exploratory study thus concludes that the proposed methods for "neurophenomenology" warrant further application, including the exploration of individual differences, multivariate differences between cognitive task conditions, and further exploration.
It is unlikely that the systematic, compositional properties of formal symbol systems -- i.e., of computation -- play no role at all in cognition. However, it is equally unlikely that cognition is just computation, because of the symbol grounding problem (Harnad 1990): The symbols in a symbol system are systematically interpretable, by external interpreters, as meaning something, and that is a remarkable and powerful property of symbol systems. Cognition (i.e., thinking) has this property too: Our thoughts are systematically interpretable by external interpreters as meaning something. However, unlike symbols in symbol systems, thoughts mean what they mean autonomously: Their meaning does not consist of or depend on anyone making or being able to make any external interpretations of them at all. When I think "the cat is on the mat," the meaning of that thought is autonomous; it does not depend on YOUR being able to interpret it as meaning that (even though you could interpret it that way, and you would be right).
Page has done connectionist researchers a valuable service in this target article. He points out that connectionist models using localized representations often work as well as or better than models using distributed representations. I point out that models using distributed representations are difficult to understand and often lack parsimony and plausibility. In conclusion, I give an example – the case of the missing fundamental in music – that can easily be explained by a model using localist representations but can be explained only with great difficulty and implausibility by a model using distributed representations.
A working hypothesis of computationalism is that Mind arises, not from the intrinsic nature of the causal properties of particular forms of matter, but from the organization of matter. If this hypothesis is correct, then a wide range of physical systems (e.g. optical, chemical, various hybrids, etc.) should support Mind, especially computers, since they have the capability to create/manipulate organizations of bits of arbitrary complexity and dynamics. In any particular computer, these bit patterns are quite physical, but their particular physicality is considered irrelevant (since they could be replaced by other physical substrata).
This commentary focuses on how the large-scale cortical dynamics described in Nunez's target article are related to various phenomena at different scales, both spatial and temporal, in particular, how the brain dynamics measured with EEG could relate to (i) experience and mental state, (ii) neuromodulatory effects, and (iii) spontaneous firing and autogenerated electromagnetic effects.
Some philosophers suggest that the development of scientific knowledge is a kind of Darwinian process. The process of discovery, however, is one problematic element of this analogy. I compare Herbert Simon's attempt to simulate scientific discovery in a computer program to recent connectionist models that were not designed for that purpose, but which provide useful cases to help evaluate this aspect of the analogy. In contrast to the classic A.I. approach Simon used, ``neural networks'' contain no explicit protocols, but are generic learning systems built on the model of the interconnections of neurons in the brain. I describe two cases that take the connectionist approach a step further by using genetic algorithms, a form of evolutionary computation that explicitly models Darwinian mechanisms. These cases show that Darwinian mechanisms can make novel discoveries of complex, previously unknown patterns. With some caveats, they lend support to evolutionary epistemology.
Within the Hebbian paradigm the mechanism for integrating cell assemblies oscillating with different frequencies remains unclear. We hypothesize that such an integration may occur in cortical “interaction foci” that unite synchronously oscillated assemblies through hard-wired connections, synthesizing the information from various functional systems of the brain.
The Hebbian view of word representation is challenged by findings of task-dependent (level-of-processing-dependent) event-related potential patterns that do not support the notion of a fixed set of neurons representing a given word. With cross-language phonological reliability encoding, more asymmetrical left hemisphere activity is evoked than with word comprehension. This suggests a dynamical view of the brain as a self-organizing, connectivity-adjusting system.
The ability to group perceptual objects into functionally relevant categories is vital to our comprehension of the world. Such categorisation aids in how we search for objects in familiar scenes and how we identify an object and its likely uses despite never having seen that specific object before. The systems that mediate this process are only now coming to be understood through considerable research efforts combining neurological, psychological and behavioural studies. Much less well understood are the differences between the categories, how they are formed, and how they are used by experts and non-experts in a complex task that can take decades to master. In a quite different direction to previous studies, this work infers the different categorical structures that might be used by amateurs and professionals in the oriental game of Go. This is achieved by using a newly developed combination of artificial neural networks (Self-organising Maps) and perceptual inference to show that categories of strategic scenes can be learned while playing games, using a model of 'conditional perceptual learning'. Applying this technique to two databases of games, one of amateurs and one of professionals, shows that a structural hierarchy of scene information develops that can be readily incorporated into traditional psychological models of decisions and readily implemented in computational systems. The results are discussed in terms of the heuristics and biases literature, emphasising where the significant similarities and differences lie between this work and previous work.
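The self-organising map component can be sketched in a few lines. This is a generic 1-D SOM with a winner-take-all step and a decaying Gaussian neighbourhood, not the paper's 'conditional perceptual learning' model; all parameter values and the two-cluster toy data are illustrative.

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr=0.5, sigma=2.0, seed=0):
    """Train a 1-D self-organising map: winner-take-all plus neighbourhood update."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for t in range(epochs):
        decay = 1.0 - t / epochs
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
            d = np.abs(np.arange(n_units) - bmu)             # grid distance to BMU
            h = np.exp(-(d ** 2) / (2 * (sigma * decay + 1e-3) ** 2))
            w += (lr * decay) * h[:, None] * (x - w)         # pull neighbours toward x
    return w

# Two well-separated clusters of 2-D "scenes"; after training, map units
# specialise so that each cluster is captured by nearby units on the grid.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.05, (50, 2)), rng.normal(1.0, 0.05, (50, 2))])
weights = train_som(data)
```

The emerging arrangement of unit weights is the kind of learned categorical structure that can then be read off and compared between amateur and professional game databases.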
A dynamic threshold, which controls the nature and course of learning, is a pivotal concept in Page's general localist framework. This commentary addresses various issues surrounding biologically plausible implementations for such thresholds. Relevant previous research is noted and the particular difficulties relating to the creation of so-called instance representations are highlighted. It is stressed that these issues also apply to distributed models.
Different from existing reinforcement learning algorithms, which generate only reactive policies, and existing probabilistic planning algorithms, which require a substantial amount of a priori knowledge in order to plan, we devise a two-stage, bottom-up learning-to-plan process: first, reinforcement learning/dynamic programming is applied, without the use of a priori domain-specific knowledge, to acquire a reactive policy; then explicit plans are extracted from the learned reactive policy. Plan extraction is based on a beam search algorithm that performs temporal projection in a restricted fashion, guided by the value functions resulting from reinforcement learning/dynamic programming. Experiments and theoretical analysis are presented.
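The two-stage process can be sketched on a toy problem. Here value iteration stands in for the reinforcement-learning/dynamic-programming stage, and plan extraction is reduced to greedy temporal projection (a beam of width 1, simpler than the paper's beam search); the corridor task and all names are illustrative.

```python
# A minimal sketch on a 1-D corridor: states 0..5, actions left/right, goal at 5.
GOAL, N_STATES, GAMMA = 5, 6, 0.9

def step(s, a):
    """Deterministic transition, clipped at the corridor ends."""
    return max(0, min(N_STATES - 1, s + a))

# Stage 1: dynamic programming (value iteration), no domain-specific knowledge:
# each non-goal step costs -1, so values encode (discounted) distance to goal.
V = [0.0] * N_STATES
for _ in range(100):
    for s in range(N_STATES):
        if s == GOAL:
            V[s] = 0.0
            continue
        V[s] = max(-1 + GAMMA * V[step(s, a)] for a in (-1, +1))

# Stage 2: extract an explicit plan by temporal projection guided by V.
def extract_plan(s):
    plan = []
    while s != GOAL:
        a = max((-1, +1), key=lambda a: V[step(s, a)])  # follow the value gradient
        plan.append('right' if a == +1 else 'left')
        s = step(s, a)
    return plan

print(extract_plan(0))  # ['right', 'right', 'right', 'right', 'right']
```

The point of the second stage is that the implicit value function, once learned, supports an explicit, inspectable plan rather than only step-by-step reactive behavior.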
In cognitive neuroscience, dissociating the brain networks that subtend conscious and nonconscious memories constitutes a very complex issue, both conceptually and methodologically; it has thus become one of the best empirical situations through which to study the mechanisms of implicit learning.
If artificial neural networks are ever to form the foundation for higher level cognitive behaviors in machines or to realize their full potential as explanatory devices for human cognition, they must show signs of autonomy, multifunction operation, and intersystem integration that are absent in most existing models. This model begins to address these issues by integrating predictive learning, sequence interleaving, and sequence creation components to simulate a spectrum of higher-order cognitive behaviors which have eluded the grasp of simpler systems. Its capabilities are described based on simulations calling for increasing levels of functionality and are used to show how the model can progress from fundamental sequence learning and recall tasks to sophisticated behaviors such as an ability to solve simple mathematical expressions and a creative capacity for the formation and application of inductive rules.
The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks. Stability must be the key ingredient in the solution of a problem by a neural network without external intervention. Infinite discrete networks seem to be the proper objects of study for a theory of neural computability which aims at characterizing problems solvable, in principle, by a neural network. Precise definitions of such problems and their solutions are given. Some consequences are explored, in particular, the neural unsolvability of the Stability Problem for neural networks.
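The notion of stability invoked above can be illustrated concretely. The following is a minimal sketch (a finite Hopfield-style network with hypothetical weights, not the article's infinite-network formalism): with symmetric weights, zero self-connections, and asynchronous updates, the state settles into a fixed point, a "solution" reached without external intervention.

```python
# A 3-unit network with symmetric weights and zero self-connections
# (hypothetical values chosen for illustration).
W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]

def update(state, i):
    # Threshold unit: sign of the weighted input (ties resolve to +1).
    h = sum(W[i][j] * state[j] for j in range(3))
    return 1 if h >= 0 else -1

def run(state):
    state = list(state)
    for _ in range(10):          # asynchronous sweeps until nothing changes
        changed = False
        for i in range(3):
            new = update(state, i)
            if new != state[i]:
                state[i], changed = new, True
        if not changed:
            return tuple(state)  # a stable (fixed-point) state
    return None

print(run((-1, -1, 1)))
```

For finite symmetric networks an energy argument guarantees such convergence; the article's point is that deciding stability in general, for the infinite networks it studies, is neurally unsolvable.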
This article provides a retrospective, current and prospective overview of developments in brain research and neuroscience. Both theoretical and empirical studies are considered, with emphasis on the concepts of multivariability and metastability in the brain. In this new view of the human brain, the potential multivariability of the neuronal networks appears to be far from continuous in time, but confined by the dynamics of short-term local and global metastable brain states. The article closes by suggesting some of the implications of this view for future multidisciplinary brain research.
If connectionism is to be an adequate theory of mind, we must have a theory of representation for neural networks that allows for individual differences in weighting and architecture while preserving sameness, or at least similarity, of content. In this paper we propose a procedure for measuring sameness of content of neural representations. We argue that the correct way to compare neural representations is through analysis of the distances between neural activations, and we present a method for doing so. We then use the technique to demonstrate empirically that different artificial neural networks trained by backpropagation on the same categorization task, even with different representational encodings of the input patterns and different numbers of hidden units, reach states in which representations at the hidden units are similar. We discuss how this work provides a rebuttal to Fodor and Lepore's critique of Paul Churchland's state space semantics.
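The distance-based comparison the authors propose can be sketched as follows (with illustrative activation values, not their simulations): two networks' hidden-layer activations for the same inputs are compared not point-by-point but via the pattern of pairwise distances among activations, so that networks with different encodings can still count as representationally similar.

```python
from math import dist

# Hidden-unit activations of two hypothetical networks for the same four
# inputs; net_b uses a mirrored encoding (coordinates swapped).
net_a = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
net_b = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]

def distance_matrix(acts):
    # Pairwise Euclidean distances between activation vectors.
    return [[dist(x, y) for y in acts] for x in acts]

def mean_abs_diff(m1, m2):
    n = len(m1)
    return sum(abs(m1[i][j] - m2[i][j])
               for i in range(n) for j in range(n)) / n**2

da, db = distance_matrix(net_a), distance_matrix(net_b)
print(mean_abs_diff(da, db))  # near zero: same relational structure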
Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face recognition to visual word recognition; the model implements a theory of hemispheric asymmetry in perception that posits low spatial frequency biases in the right hemisphere and high spatial frequency (HSF) biases in the LH. We show two factors that can influence lateralization: (a) Visual similarity among words: The more similar the words in the lexicon look visually, the more HSF/LH processing is required to distinguish them, and (b) Requirement to decompose words into graphemes for grapheme-phoneme mapping: Alphabetic reading (involving grapheme-phoneme conversion) requires more HSF/LH processing than logographic reading (no grapheme-phoneme mapping). These factors may explain the difference in lateralization between English and Chinese orthographic processing.
A fundamental question in reading research concerns whether attention is allocated strictly serially, supporting lexical processing of one word at a time, or in parallel, supporting concurrent lexical processing of two or more words (Reichle, Liversedge, Pollatsek, & Rayner, 2009). The origins of this debate are reviewed. We then report three simulations to address this question using artificial reading agents (Liu & Reichle, 2010; Reichle & Laurent, 2006) that learn to dynamically allocate attention to 1–4 words to “read” as efficiently as possible. These simulation results indicate that the agents strongly preferred serial word processing, although they occasionally attended to more than one word concurrently. The reason for this preference is discussed, along with implications for the debate about how humans allocate attention during reading.
In their account of learning and behavior, the authors define an interactor as emitted behavior that operates on the environment, which excludes Pavlovian learning. A unified neural-network account of the operant-Pavlovian dichotomy favors interpreting neurons as interactors and synaptic efficacies as replicators. The latter interpretation implies that single-synapse change is inherently Lamarckian.
Neuronal aggregates involved in conscious awareness are not evenly distributed throughout the CNS but comprise key components referred to as the neural network correlates of consciousness (NNCC). A critical node in this network is the posterior cingulate, precuneal, and retrosplenial cortices. The cytological and neurochemical composition of this region is reviewed in relation to the Brodmann map. This region has the highest level of cortical glucose metabolism and cytochrome c oxidase activity. Monkey studies suggest that the anterior thalamic projection likely drives retrosplenial and posterior cingulate cortex metabolism and that the midbrain projection to the anteroventral thalamic nucleus is a key coupling site between the brainstem system for arousal and cortical systems for cognitive processing and awareness. The pivotal role of the posterior cingulate, precuneal, and retrosplenial cortices in consciousness is demonstrated with posterior cingulate epilepsy cases, midcingulate lesions that de-afferent this region and are associated with unilateral sensory neglect, observations from stroke and vegetative state patients, alterations in blood flow during sleep, and the actions of general anesthetics. Since this region is critically involved in self-reflection, it is not surprising that it is similarly a site for the NNCC. Interestingly, information processing during complex cognitive tasks and during aversive sensations such as pain induces efforts to terminate self-reflection and results in decreased processing in posterior cingulate and precuneal cortices.
Is it possible to predict future life forms? In this paper it is argued that the answer to this question may well be positive. As a basis for predictions, a rationale is used that is derived from historical data, e.g. from a hierarchical classification that ranks all building block systems that have evolved so far. This classification is based on specific emergent properties that allow stepwise transitions from low-level building blocks to higher-level ones. This paper shows how this hierarchy can be used for predicting future life forms. The extrapolations suggest several future neural network organisms. Major aspects of the structures of these organisms are predicted. The results can be considered of fundamental importance for several reasons. Firstly, assuming that the operator hierarchy is a proper basis for predictions, the result yields insight into the structure of future organisms. Secondly, the predictions are not extrapolations of presently observed trends, but are fully integrated with all historical system transitions in evolution. Thirdly, the extrapolations suggest the structures of intelligences that, one day, will possess more powerful brains than human beings.
Much recent research has sought to uncover the neural basis of moral judgment. However, it has remained unclear whether "moral judgments" are sufficiently homogeneous to be studied scientifically as a unified category. We tested this assumption by using fMRI to examine the neural correlates of moral judgments within three moral areas: (physical) harm, dishonesty, and (sexual) disgust. We found that the judgment of moral wrongness was subserved by distinct neural systems for each of the different moral areas and that these differences were much more robust than differences in wrongness judgments within a moral area. Dishonest, disgusting, and harmful moral transgressions recruited networks of brain regions associated with mentalizing, affective processing, and action understanding, respectively. Dorsal medial pFC was the only region activated by all scenarios judged to be morally wrong in comparison with neutral scenarios. However, this region was also activated by dishonest and harmful scenarios judged not to be morally wrong, suggestive of a domain-general role that is neither peculiar to nor predictive of moral decisions. These results suggest that moral judgment is not a wholly unified faculty in the human brain, but rather is instantiated in dissociable neural systems that are engaged differentially depending on the type of transgression being judged.
A technique for the bilateral activation of neural nets that leads to a functional asymmetry of two simulated "cerebral hemispheres" is described. The simulation is designed to perform object recognition while exhibiting characteristics typical of human consciousness: specifically, the unitary nature of conscious attention, together with a dual awareness corresponding to the "nucleus" and "fringe" described by William James (1890). Sensory neural nets self-organize on the basis of five sensory features. The system is then taught arbitrary symbolic labels for a small number of similar stimuli. Finally, the trained network is exposed to nonverbal stimuli for object recognition, leading to Gaussian activation of the "sensory" maps, with a peak at the location most closely related to the features of the external stimulus. "Verbal" maps are activated most strongly at the labeled location that lies closest to the peak on homologous sensory maps. On the verbal maps, activation is characterized by both excitatory and inhibitory Gaussians (a Mexican hat), the parameters of which are determined by the relative locations of the verbal labels. Mutual homotopic inhibition across the "corpus callosum" then produces functional cerebral asymmetries, i.e., complementary activation of homologous "association" and "frontal" maps within a common focus of attention: a nucleus in the left hemisphere and a fringe in the right hemisphere. An object is recognized as corresponding to a known label when the total activation of both hemispheres (nucleus plus fringe) is strongest for that label. The functional dualities of the cerebral hemispheres are discussed in light of the nucleus/fringe asymmetry.
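The "Mexican hat" profile invoked above, an excitatory Gaussian minus a broader, weaker inhibitory Gaussian, can be sketched directly. The widths and heights below are hypothetical (the article derives its parameters from the relative locations of the verbal labels).

```python
from math import exp

def gaussian(x, center, width, height):
    return height * exp(-((x - center) ** 2) / (2 * width ** 2))

def mexican_hat(x, center=0.0):
    # Narrow excitatory Gaussian minus a broad, weaker inhibitory one
    # (illustrative parameters).
    return gaussian(x, center, 1.0, 1.0) - gaussian(x, center, 3.0, 0.4)

# Sampled profile: positive at the center, negative in the surround.
profile = [mexican_hat(x) for x in range(-6, 7)]
```

The resulting profile peaks at the center and dips below zero in the surround, which is what lets nearby labels excite and more distant labels inhibit one another on the verbal maps.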
The Page target article is interesting because of apparent coverage of many psychological phenomena with simple, unified neural techniques. However, prototype phenomena cannot be covered because the strongest response would be to the first-learned stimulus in each category rather than to a prototype stimulus or most frequently presented stimuli. Alternative methods using distributed coding can also achieve portability of network knowledge.
Building a meaningful model of a biological regulatory network is usually done by specifying the components (e.g. the genes) and their interactions, guessing the values of parameters, comparing the predicted behaviors to the observed ones, and modifying both architecture and parameters in a trial-and-error process in order to reach an optimal fitness. We propose here a different approach to constructing and analyzing biological models that avoids the trial-and-error part, in which structure and dynamics are represented as formal constraints. We apply the method to Hopfield-like networks, a formalism often used in both neural and regulatory network modeling. The aim is to characterize automatically the set of all models consistent with all the available knowledge (about structure and behavior). The available knowledge is formalized into formal constraints. The latter are compiled into a Boolean formula in conjunctive normal form and then submitted to a Boolean satisfiability solver. This approach makes it possible to formulate a wide range of queries, expressed in a high-level language, and possibly integrating formalized intuitions. In order to explore its potential, we use it to find cycles for 3-node networks and to determine the flower morphogenesis regulatory network of Arabidopsis thaliana. Applications of this technique are numerous and concern the building of models from data as well as the design of biological networks possessing specified behaviors.
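The cycle-finding task mentioned above can be illustrated without a SAT solver. The sketch below is a stand-in, not the paper's method: instead of compiling constraints to CNF, it exhaustively checks the state space of one hypothetical 3-node Boolean Hopfield-like network (synchronous update, threshold units) for cycles, the kind of dynamical property the paper's constraints encode.

```python
from itertools import product

# Hypothetical 3-node network: W[i][j] is the weight from node j to node i.
# Here each node simply copies its single input, forming a rotation.
W = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def step(state):
    # Synchronous threshold update over Boolean states.
    return tuple(1 if sum(W[i][j] * state[j] for j in range(3)) > 0 else 0
                 for i in range(3))

def cycle_length(state):
    # Iterate until a state repeats; the gap gives the cycle length.
    seen, t = {}, 0
    while state not in seen:
        seen[state] = t
        state, t = step(state), t + 1
    return t - seen[state]

lengths = {cycle_length(s) for s in product((0, 1), repeat=3)}
print(sorted(lengths))  # cycle lengths present in the state space
```

A SAT encoding replaces this enumeration with a CNF formula whose satisfying assignments are exactly the networks (or trajectories) with the queried behavior, which is what lets the approach scale beyond toy sizes.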
This article discusses the properties of a controllable, flexible, hybrid parallel computing architecture that potentially merges pattern recognition and arithmetic. Humans perform integer arithmetic in a fundamentally different way than logic-based computers. Even though the human approach to arithmetic is both slow and inaccurate, it can have substantial advantages when useful approximations (intuition) are more valuable than high precision. Such a computational strategy may be particularly useful when computers based on nanocomponents become feasible, because it offers a way to make use of the potential power of these massively parallel systems. Because the system architecture is inspired by neuroscience and is applied to cognitive problems, occasional mention is made of experimental data from both fields when appropriate.
In "Brainshy: Non-neural theories of conscious experience" (this volume), Patricia Churchland considers three "non-neural" approaches to the puzzle of consciousness: 1) Chalmers' fundamental information, 2) Searle's "intrinsic" property of brain, and 3) Penrose-Hameroff quantum phenomena in microtubules. In rejecting these ideas, Churchland flies the flag of "neuralism." She claims that conscious experience will be totally and completely explained by the dynamical complexity of properties at the level of neurons and neural networks. As far as consciousness goes, neural network firing patterns triggered by axon-to-dendrite synaptic chemical transmissions are the fundamental correlates of consciousness. There is no need to look elsewhere.
We propose category theory, the mathematical theory of structure, as a vehicle for defining ontologies in an unambiguous language with analytical and constructive features. Specifically, we apply categorical logic and model theory, based upon viewing an ontology as a sub-category of a category of theories expressed in a formal logic. In addition to providing mathematical rigor, this approach has several advantages. It allows the incremental analysis of ontologies by basing them in an interconnected hierarchy of theories, with an operation on the hierarchy that expresses the formation of complex theories from simple theories that express first principles. Another operation forms abstractions expressing the shared concepts in an array of theories. The use of categorical model theory makes possible the incremental analysis of possible worlds, or instances, for the theories, and the mapping of instances of a theory to instances of its more abstract parts. We describe the theoretical approach by applying it to the semantics of neural networks.
Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its _computational_ credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the “conventional” account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks aren’t genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
Possible systemic effects of general anesthetic agents on neural information processing are discussed in the context of the thalamocortical suppression hypothesis presented by Drs. Alkire, Haier, and Fallon (this issue) in their PET study of the anesthetized state. Accounts of the neural requisites of consciousness fall into two broad categories. Neuronal-specificity theories postulate that activity in particular neural populations is sufficient for conscious awareness, while process-coherence theories postulate that particular organizations of neural activity are sufficient. Accounts of anesthetic narcosis, on the other hand, explain losses of consciousness in terms of neural signal-suppressions, transmission blocks, and the disruptions of signal interpretation. While signal-suppression may account for the actions of some anesthetic agents, the existence of anesthetics, such as chloralose, that cause both loss of consciousness and elevated discharge rates, is problematic for a general theory of narcosis that is based purely on signal suppression and transmission-block. However, anesthetic agents also alter relative firing rates and temporal discharge patterns that may disrupt the coherence of neural signals and the functioning of the neural networks that interpret them. It is difficult at present, solely on the basis of regional brain metabolic rates, to test process-coherence hypotheses regarding organizational requisites for conscious awareness. While these pioneering PET studies have great merit as panoramic windows of mind-brain correlates, wider ranges of theory and empirical evidence need to be brought into the formulation of truly comprehensive theories of consciousness and anesthesia.
A categorical, higher dimensional algebra and generalized topos framework for Łukasiewicz–Moisil Algebraic–Logic models of non-linear dynamics in complex functional genomes and cell interactomes is proposed. Łukasiewicz–Moisil Algebraic–Logic models of neural, genetic and neoplastic cell networks, as well as signaling pathways in cells are formulated in terms of non-linear dynamic systems with n-state components that allow for the generalization of previous logical models of both genetic activities and neural networks. An algebraic formulation of variable ‘next-state functions’ is extended to a Łukasiewicz–Moisil Topos with an n-valued Łukasiewicz–Moisil Algebraic Logic subobject classifier description that represents non-random and non-linear network activities as well as their transformations in developmental processes and carcinogenesis. The unification of the theories of organismic sets, molecular sets and Robert Rosen’s (M,R)-systems is also considered here in terms of natural transformations of organismal structures which generate higher dimensional algebras based on consistent axioms, thus avoiding well known logical paradoxes occurring with sets. Quantum bionetworks, such as quantum neural nets and quantum genetic networks, are also discussed and their underlying, non-commutative quantum logics are considered in the context of an emerging Quantum Relational Biology.