Page's target article presents an argument for the use of localist, connectionist models in future psychological theorising. The “manifesto” marshals a set of arguments in favour of localist connectionism and against distributed connectionism, but in doing so misses a larger argument concerning the level of psychological explanation that is appropriate to a given domain.
In this paper the issue of drawing inferences about biological cognitive systems on the basis of connectionist simulations is addressed. In particular, the justification of inferences based on connectionist models trained using the backpropagation learning algorithm is examined. First it is noted that a justification commonly found in the philosophical literature is inapplicable. Then some general issues are raised about the relationships between models and biological systems. A way of conceiving the role of hidden units in connectionist networks is then introduced. This, in combination with an assumption about the way evolution goes about solving problems, is then used to suggest a means of justifying inferences about biological systems based on connectionist research.
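As a purely illustrative sketch (not code from the paper above), the kind of backpropagation-trained network at issue can be shown in miniature: a small feed-forward net learning XOR, the textbook task whose solution requires hidden units to re-represent the input. All names and parameters here are hypothetical choices for the demonstration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A hypothetical 2-4-1 feed-forward network trained by backpropagation."""

    def __init__(self, n_in=2, n_hid=4, n_out=1):
        rnd = lambda: random.uniform(-1.0, 1.0)
        # each row holds a unit's incoming weights plus a trailing bias weight
        self.w1 = [[rnd() for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [[rnd() for _ in range(n_hid + 1)] for _ in range(n_out)]

    def forward(self, x):
        self.x = list(x) + [1.0]  # append bias input
        self.h = [sigmoid(sum(w * v for w, v in zip(row, self.x)))
                  for row in self.w1]
        hb = self.h + [1.0]
        self.y = [sigmoid(sum(w * v for w, v in zip(row, hb)))
                  for row in self.w2]
        return self.y

    def backward(self, target, lr=0.5):
        # delta at the output: error signal scaled by the sigmoid derivative
        d_out = [(y - t) * y * (1.0 - y) for y, t in zip(self.y, target)]
        # delta at each hidden unit: output deltas propagated backwards
        d_hid = [h * (1.0 - h) * sum(d * self.w2[k][j]
                                     for k, d in enumerate(d_out))
                 for j, h in enumerate(self.h)]
        hb = self.h + [1.0]
        for k, d in enumerate(d_out):
            for j, v in enumerate(hb):
                self.w2[k][j] -= lr * d * v
        for j, d in enumerate(d_hid):
            for i, v in enumerate(self.x):
                self.w1[j][i] -= lr * d * v

xor_data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
            ([1.0, 0.0], [1.0]), ([1.1 - 0.1, 1.0], [0.0])]
net = TinyNet()
for _ in range(10000):
    for x, t in xor_data:
        net.forward(x)
        net.backward(t)
```

The point of the toy is visible in `self.h`: after training, the hidden units carry an internal re-representation of the input that the network constructed for itself, which is exactly the feature whose interpretive status the paper examines.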
The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to miss the deep organizing features of the domain. Various solutions are examined, including task specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical AND informed undersampling in early learning.
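The "incremental learning" strategy the paper discusses amounts to staging the training set so that early phases undersample the domain, exposing the learner only to the simplest cases first. A minimal, hypothetical sketch of such a schedule (the corpus, the complexity measure, and the phase thresholds are all invented for illustration):

```python
def incremental_schedule(examples, complexity, phases):
    """Yield (phase, training_set) pairs; each phase admits only examples
    whose complexity is at or below that phase's threshold."""
    for phase, threshold in enumerate(phases):
        yield phase, [e for e in examples if complexity(e) <= threshold]

# toy domain: sentences ranked by embedding depth; counting "that"
# stands in for whatever the real complexity measure would be
corpus = ["cat sleeps",
          "dog runs",
          "cat that dog chases sleeps",
          "dog that cat that bird sees chases runs"]
depth = lambda s: s.count("that")

schedule = list(incremental_schedule(corpus, depth, phases=[0, 1, 2]))
```

The sketch makes the paper's caveat concrete: the schedule only helps if the statistically simple cases (phase 0 here) happen to coincide with a genuine decomposition of the task, and nothing in the mechanism itself guarantees that.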
This paper critically examines the claim that parallel distributed processing (PDP) networks are autonomous learning systems. A PDP model of a simple distributed associative memory is considered. It is shown that the 'generic' PDP architecture cannot implement the computations required by this memory system without the aid of external control. In other words, the model is not autonomous. Two specific problems are highlighted: (i) simultaneous learning and recall are not permitted to occur as would be required of an autonomous system; (ii) connections between processing units cannot simultaneously represent current and previous network activation as would be required if learning is to occur. Similar problems exist for more sophisticated networks constructed from the generic PDP architecture. We argue that this is because these models are not adequately constrained by the properties of the functional architecture assumed by PDP modelers. It is also argued that without such constraints, PDP researchers cannot claim to have developed an architecture radically different from that proposed by the Classical approach in cognitive science.
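A hypothetical toy (not the model analysed in the paper) shows the shape of the issue: in a one-layer Hebbian pattern associator, learning and recall are distinct operations that some external controller must sequence, which is precisely the kind of unacknowledged external control the paper highlights.

```python
def hebbian_learn(weights, in_pat, out_pat, lr=1.0):
    # Hebbian outer-product update: w[j][i] += lr * out[j] * in[i].
    # Note this is a separate operation from recall below -- something
    # outside the network must decide which one runs when.
    for j, o in enumerate(out_pat):
        for i, s in enumerate(in_pat):
            weights[j][i] += lr * o * s

def recall(weights, in_pat):
    # recall: weighted sums thresholded to +/-1
    return [1 if sum(w * s for w, s in zip(row, in_pat)) >= 0 else -1
            for row in weights]

n_in, n_out = 4, 4
W = [[0.0] * n_in for _ in range(n_out)]

# two orthogonal input patterns, each paired with a target output
pair_a = ([1, -1, 1, -1], [1, 1, -1, -1])
pair_b = ([-1, 1, 1, -1], [-1, -1, 1, 1])
for in_pat, out_pat in (pair_a, pair_b):
    hebbian_learn(W, in_pat, out_pat)
```

Because the two input patterns are orthogonal, each cue retrieves its own associate cleanly; the substantive point is in the control flow, where the `for` loop, not the network, decides when learning stops and recall begins.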
In place of the low-level neurophysiological mimicry and exploratory programming methods commonly used in the machine consciousness field, the hierarchical Operational Architectonics (OA) framework of brain and mind functioning offers an alternative conceptual-theoretical framework, a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made “machine” consciousness and “artificial” thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of the hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.
Brains, unlike artificial neural nets, use symbols to summarise and reason about perceptual input. But unlike symbolic AI, they “ground” the symbols in the data: the symbols have meaning in terms of data, not just meaning imposed by the outside user. If neural nets could be made to grow their own symbols in the way that brains do, there would be a good prospect of combining neural networks and symbolic AI, in such a way as to combine the good features of each.
Churchland underestimates the power and purpose of the Turing Test, dismissing it as the trivial game to which the Loebner Prize (offered for the computer program that can fool judges into thinking it's human) has reduced it, whereas it is really an exacting empirical criterion: It requires that the candidate model for the mind have our full behavioral capacities -- so fully that it is indistinguishable from any of us, to any of us (not just for one Contest night, but for a lifetime). Scaling up to such a model is (or ought to be) the programme of that branch of reverse bioengineering called cognitive science. It's harmless enough to do the hermeneutics after the research has been successfully completed, but self-deluding and question-begging to do it before.
In the first section of the article, we examine some recent criticisms of the connectionist enterprise: first, that connectionist models are fundamentally behaviorist in nature (and, therefore, non-cognitive), and second that connectionist models are fundamentally associationist in nature (and, therefore, cognitively weak). We argue that, for a limited class of connectionist models (feed-forward, pattern-associator models), the first criticism is unavoidable. With respect to the second criticism, we propose that connectionist models are fundamentally associationist but that this is appropriate for building models of human cognition. However, we do accept the point that there are cognitive capacities for which any purely associative model cannot provide a satisfactory account. The implication that we draw from this is not that associationist models and mechanisms should be scrapped, but rather that they should be enhanced. In the next section of the article, we identify a set of connectionist approaches which are characterized by “active symbols” — recurrent circuits which are the basis of knowledge representation. We claim that such approaches avoid criticisms of behaviorism and are, in principle, capable of supporting full cognition. In the final section of the article, we speculate at some length about what we believe would be the characteristics of a fully realized active symbol system. This includes both potential problems and possible solutions (for example, mechanisms needed to control activity in a complex recurrent network) as well as the promise of such systems (in particular, the emergence of knowledge structures which would constitute genuine internal models).
Classical symbolic computational models of cognition are at variance with the empirical findings in the cognitive psychology of memory and inference. Standard symbolic computers are well suited to remembering arbitrary lists of symbols and performing logical inferences. In contrast, human performance on such tasks is extremely limited. Standard models do not easily capture content addressable memory or context sensitive defeasible inference, which are natural and effortless for people. We argue that Connectionism provides a more natural framework in which to model this behaviour. In addition to capturing the gross human performance profile, Connectionist systems seem well suited to accounting for the systematic patterns of errors observed in the human data. We take these arguments to counter Fodor and Pylyshyn's (1988) recent claim that Connectionism is, in principle, irrelevant to psychology.
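The content-addressable memory the abstract invokes can be illustrated with a hypothetical Hopfield-style autoassociator (a stand-in example, not the authors' model): a pattern stored by a Hebbian outer-product rule is retrieved whole from a degraded cue, something a symbolic list memory gives no natural account of.

```python
def store(patterns, n):
    # Hebbian outer-product rule over +/-1 patterns, no self-connections
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def settle(W, state, sweeps=5):
    # asynchronous unit updates; the state relaxes towards a stored pattern
    state = list(state)
    for _ in range(sweeps):
        for i in range(len(state)):
            h = sum(W[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if h >= 0 else -1
    return state

n = 8
stored = [1, -1, 1, -1, 1, -1, 1, -1]
W = store([stored], n)
cue = [1, -1, 1, -1, 1, -1, -1, 1]  # same pattern with last two bits corrupted
```

Retrieval here is driven by the content of the cue itself, not by an address or a search over stored items, which is the contrast with standard symbolic memory that the abstract draws.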
Green offers us two options: either connectionist models are literal models of brain activity or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, IDEALISED models of the brain that are capable of providing genuine explanations of cognitive phenomena.
Recently, connectionist models have been developed that seem to exhibit structure-sensitive cognitive capacities without executing a program. This paper examines one such model and argues that it does execute a program. The argument proceeds by showing that what is essential to running a program is preserving the functional structure of the program. It has generally been assumed that this can only be done by systems possessing a certain temporal-causal organization. However, counterfactual-preserving functional architecture can be instantiated in other ways, for example geometrically, which are realizable by connectionist networks.