The standard approach to modelling how human beings understand natural languages is the symbolic, compositional approach, according to which the meaning of a complex expression is a function of the meanings of its constituents. In other words, meaning plays a fundamental role in the model. In this work, this approach is rejected because of the polysemous, flexible, dynamic, and contextual structure of natural languages. Instead, a connectionist model which eliminates the concept of meaning is proposed.
This paper aims to offer a new view of the role of connectionist models in the study of human cognition, through a conceptualization of the history of connectionism – from the simplest perceptrons to convolutional neural nets based on deep-learning techniques – as well as through an interpretation of the criticism coming from symbolic cognitive science. The connectionist approach in cognitive science was the target of sharp criticism from the symbolists, which on several occasions caused its marginalization and the almost complete abandonment of its assumptions in the study of cognition. Criticisms have mostly pointed to its explanatory inadequacy as a theory of cognition or to its biological implausibility as a theory of implementation, and critics often focused on specific shortcomings of some connectionist models and argued that they apply to connectionism in general. In this paper, we want to show that both types of critique rest on the assumption that the only valid explanations in cognitive science are instances of homuncular functionalism, and that by removing this assumption and adopting an alternative methodology – the exploratory mechanistic strategy – we can reject most objections to connectionism as irrelevant, explain the progress of connectionist models despite their shortcomings, and sketch the trajectory of their future development. By adopting mechanistic explanations and criticizing functionalism, we reject the objections of explanatory inadequacy; by characterizing connectionist models as generic rather than concrete mechanisms, we reject the objections of biological implausibility; and by attributing an exploratory character to connectionist models, we show that the practice of generalizing from the failures of current models to the failure of connectionism in general is unjustified.
Computational complexity is a discipline of computer science and mathematics which classifies computational problems according to their inherent difficulty – i.e., it categorizes algorithms according to their performance – and relates these classes to each other. P problems are the class of computational problems that can be solved in polynomial time by a deterministic Turing machine, while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can also be solved in polynomial time. A solution for one of the so-called NP-complete problems would also be a solution for every other such problem. Their artificial-intelligence analogue is the class of AI-complete problems, for which a complete mathematical formalization still does not exist. In this chapter we focus on analysing computational classes in order to better understand possible formalizations of AI-complete problems, and to see whether a universal algorithm, such as a Turing test, could exist for all AI-complete problems. To observe how modern computer science tries to deal with computational-complexity issues, we present several different deep-learning strategies involving optimization methods, showing that the inability to exactly solve a problem from a higher-order computational class does not mean there is no satisfactory solution using state-of-the-art machine-learning techniques. Such methods are compared with philosophical issues and psychological research regarding human abilities to solve analogous NP-complete problems, to fortify the claim that we do not need an exact and correct way of solving AI-complete problems to nevertheless possibly achieve the notion of strong AI.
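The P/NP asymmetry described above, cheap verification versus (apparently) expensive solution, can be made concrete with a small sketch. The SUBSET-SUM instance below is my illustration, not an example from the chapter: `verify` checks a proposed certificate in polynomial time, while the naive exact solver enumerates all subsets.

```python
# Illustrative sketch (not from the chapter): verifying an NP certificate
# is cheap, while the only known exact solvers for SUBSET-SUM search an
# exponential space of candidate subsets.
from itertools import combinations

def verify(nums, target, certificate):
    """Polynomial-time check: is the certificate a subset hitting the target?"""
    return all(x in nums for x in certificate) and sum(certificate) == target

def solve_brute_force(nums, target):
    """Exponential-time search over all subsets -- no known polynomial method."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums, target = [3, 9, 8, 4, 5, 7], 15
cert = solve_brute_force(nums, target)
print(cert, verify(nums, target, cert))
```

The design point mirrors the text: checking a given answer and finding one are different computational tasks, and only the first is known to be polynomial for NP-complete problems.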
There is a vast literature within philosophy of mind that focuses on artificial intelligence but hardly mentions methodological questions. There is also a growing body of work in philosophy of science about modeling methodology that hardly mentions examples from cognitive science. Here these discussions are connected. Insights developed in the philosophy-of-science literature about the importance of idealization provide a way of understanding the neural implausibility of connectionist networks. Insights from neurocognitive science illuminate how relevant similarities between models and targets are picked out, how modeling inferences are justified, and the metaphysical status of models.
The research goals of this report are: 1) How do RE teachers' personal beliefs and worldviews relate to their professional motivations? 2) How do RE teachers negotiate religious diversity? 3) What do RE teachers think about RE and pupils' character development? 4) What differences in beliefs about pupils' character development are there between RE teachers holding different worldviews?

How was this study completed? This study explored the lives of RE teachers using a mixed-method design, comprising an interview phase followed by a survey. This approach allowed for inductive inferences to be made from the interviews, which could then be substantiated through the deductive testing of preliminary hypotheses with the construction of the survey instrument. For each phase, a separate non-probabilistic sample of practising RE teachers who taught RE as their main specialism was recruited through professional organisations and advertisements, including social media.

The first, qualitative phase of the study was inspired by the narrative identity paradigm (McAdams, 1996; 2013; McAdams and Guo, 2015). This uses semi-structured interviews to explore participants' self-understandings of the development of the course of their lives. In addition to standard questions used in this paradigm, the interview schedule also included questions about teachers' perspectives on RE and character development. The second, quantitative phase was designed drawing on initial analyses of the interviews and employed measures of religious practice and style, as well as individual items about RE teachers' perceptions of character education. The data generated from these questions allowed for analyses of the relationships between RE teachers' worldviews, their perspectives on character education and their professional motivations.

There were four key findings.
These are: 1) Personal worldviews informed RE teachers' approaches in the classroom: RE teachers working in faith and non-faith schools were found to have a diverse range of personal worldviews – from atheism to theism, and all positions in between – but each kind of worldview supports a particular vision of what RE should be, and therefore generates an individual's motivation to be an RE teacher. 2) RE teachers were found to have fair and tolerant views of other religions and worldviews: RE teachers who did or did not have a religious faith, in faith and non-faith schools, were found to have a fair and tolerant approach to religious diversity. However, this study's findings suggest that RE teachers who have a religious faith were more open to interreligious dialogue and to learning from other religions. 3) There was strong agreement among teachers with a religious faith that RE contributes to character education and that RE teachers should act as role models for their pupils. 4) RE teachers who have a religious faith were more likely to think religions promote good character: There were significant differences in perspectives between RE teachers who reported belonging to a religion and those who did not. The former were found to be more likely to think that religious traditions provide a source of good role models; they were also more likely to care about their impact on pupils' religious beliefs and to believe pupils emulate their religious views.

The reference for this research report is: Arthur, J.; Moulin-Stożek, D.; Metcalfe, J. and Moller, F. (2019) Religious Education Teachers and Character: Personal Beliefs and Professional Approaches, Research Report, Birmingham: University of Birmingham.

This report is freely available for download.
Dead reckoning is a feature of the navigation behaviour shown by several creatures, including the desert ant. Recent work by C. Randy Gallistel shows that some connectionist models of dead reckoning face important challenges. These challenges are thought to arise from essential features of the connectionist approach, and have therefore been taken to show that connectionist models are unable to explain even the most primitive of psychological phenomena. I show that Gallistel's challenges are successfully met by one recent connectionist model, proposed by Ulysses Bernardet, Sergi Bermúdez i Badia, and Paul F.M.J. Verschure. The success of this model suggests that there are ways to implement dead reckoning with neural circuits that fall within the bounds of what many people regard as neurobiologically plausible, and so that the wholesale dismissal of the connectionist modelling project remains premature.
Abstract (for the combined three Parts): This paper presents the simplest known theory of processes involved in a person's unconscious and conscious achievements such as intending, perceiving, reacting and thinking. The basic principle is that an individual has mental states which possess quantitative causal powers and are susceptible to influences from other mental states. Mental performance discriminates the present level of a situational feature from its level in an individually acquired, multiple-featured norm (exemplar, template, standard). The effect on output of a moderate disparity between input and norm is scaled in a universal unit of discrimination (Weber's fraction), with the norm's level being zero. When one process converts separate sources of input into an output, their discriminative distances from norm are summated. Distinct processes converging on an output combine their discriminations from norm orthogonally. An output may be influenced by the constructs of other outputs as well as by inputs. Descriptive performance is the influence of one category of input on a verbal output. Reasoning is minimally the effect of one verbal process on another. In deeper mental processing, the influence on a response comes from a response construct modulating a description: this process gives the meaning to an emotion or a motive. Descriptive modulation of stimulation corresponds to a bodily sensation or other conceptualized percept. When an output is explained solely by sources of input, that response to the stimulation may be mediated unconsciously. Development of a person within physical and communal environments embodies such mental causation within material causation and acculturates that mind to social causation.
Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical Operational Architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of the hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.
This paper investigates the relationship between reality and model, information and truth. It will argue that meaningful data need not be true in order to constitute information. Information to which truth-value cannot be ascribed, partially true information or even false information can lead to an interesting outcome such as technological innovation or scientific breakthrough. In the research process, during the transition between two theoretical frameworks, there is a dynamic mixture of old and new concepts in which truth is not well defined. Instead of veridicity, correctness of a model and its appropriateness within a context are commonly required. Despite empirical models being in general only truthlike, they are nevertheless capable of producing results from which conclusions can be drawn and adequate decisions made.
Recently, connectionist models have been developed that seem to exhibit structure-sensitive cognitive capacities without executing a program. This paper examines one such model and argues that it does execute a program. The argument proceeds by showing that what is essential to running a program is preserving the functional structure of the program. It has generally been assumed that this can only be done by systems possessing a certain temporal-causal organization. However, counterfactual-preserving functional architecture can be instantiated in other ways, for example geometrically, which are realizable by connectionist networks.
Page's target article presents an argument for the use of localist, connectionist models in future psychological theorising. The "manifesto" marshals a set of arguments in favour of localist connectionism and against distributed connectionism, but in doing so misses a larger argument concerning the level of psychological explanation that is appropriate to a given domain.
Green offers us two options: either connectionist models are literal models of brain activity or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, idealised models of the brain that are capable of providing genuine explanations of cognitive phenomena.
It is crucial, first of all, to stress the importance Churchland attaches to the idea that the neural networks whose assemblages he holds to be "engines of reason" must be recurrent. Non-recurrent networks, of the sort best known among philosophers, simply discover patterns in input data presented to them as sets of features. The learning capacities of such networks, extensively discussed since the publication of Rumelhart and McClelland et al., are indeed impressive; and Churchland describes them clearly and gracefully as preparation for introducing recurrent networks. Now, the importance of recurrence as a feature of the networks Churchland hypothesizes as forming the basis of cognitive activity is well motivated. A cognitive system that was an assembly of non-recurrent networks would be, in essence, a stimulus-response machine of the sort that early behaviourists took themselves to be studying when they examined the mind/brain. Such a system could learn to find novel patterns in data. However, it is empirically evident that human (and some non-human) cognitive capacities go well beyond this. People dream, imagine scenarios that never existed, reconceptualize their perceptions, and theorize. All these activities seem to require not merely inferring patterns from data, but reconceiving the nature of the data itself in light of the inferential structures built through learning. For this to be possible, the data cannot be "clamped," as they are in the case of non-recurrent networks. Hence the importance of recurrence. A recurrent network can be defined as one which takes some or all of its own output and then treats that output as a source of input. This allows for the recognition of meta-patterns.
Furthermore, if some such networks feed their output as input to other networks with which they are linked, the possibility of reasoning and conceptualizing by analogy arises; and this ability, as Churchland argues, seems to be an essential aspect of genuine creativity in science, art, and the continuously ongoing re-design of the social order. The actual existence of simulated recurrent networks, and the capacities they demonstrate, constitute an impressive possibility argument that a system with enormous cognitive plasticity and behavioural flexibility could be built out of parallel distributed processing networks, so long as at least many of these networks were recurrent. The importance of this possibility argument should not be understated, but it of course invites an obvious question: how good are our grounds for supposing that the possibility in question is actually realized in the biological domain?
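The definition of recurrence in this passage, a network that treats some or all of its own output as a source of input, can be sketched in a few lines. The toy leaky unit below is my illustration, not a model Churchland proposes, and the weights are arbitrary; it only shows how feedback makes the response depend on history rather than on the current stimulus alone.

```python
# A minimal recurrent unit (illustrative sketch, arbitrary weights): the
# unit's previous output is fed back as an extra input, so its state
# carries a trace of past stimulation -- unlike a purely feedforward unit.
import math

def step(x, h, w_in=1.0, w_rec=0.8):
    """One recurrent update: mix the current input with the fed-back output."""
    return math.tanh(w_in * x + w_rec * h)

# A single pulse followed by silence: the state decays gradually rather
# than dropping to zero, because the output keeps re-entering as input.
h = 0.0
states = []
for x in [1.0, 0.0, 0.0, 0.0]:
    h = step(x, h)
    states.append(round(h, 3))
print(states)
```

With `w_rec = 0.0` the same code would be a stimulus-response device in the behaviourist sense the passage describes: each output would depend only on the current input.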
Some regularities enjoy only an attenuated existence in a body of training data. These are regularities whose statistical visibility depends on some systematic recoding of the data. The space of possible recodings is, however, infinitely large, so such ("type-2") problems look intractable for unbiased learning. Yet they are standardly solved! This presents a puzzle. How, given the statistical intractability of these type-2 cases, does nature turn the trick? One answer, which we do not pursue, is to suppose that evolution gifts us with exactly the right set of recoding biases so as to reduce specific type-2 problems to (tractable) type-1 mappings. Such a heavy-duty nativism is no doubt sometimes plausible. But we believe there are other, more general mechanisms also at work. Such mechanisms provide general (not task-specific) strategies for managing problems of type-2 complexity. Several such mechanisms are investigated. At the heart of each is a fundamental ploy: representational redescription, the trading of achieved representation against computational search. External resources such as language and culture may themselves be viewed as adaptations enabling this representation/computation trade-off to be pursued on an even grander scale.
We argue that existing learning algorithms are often poorly equipped to solve problems involving a certain type of important and widespread regularity that we call "type-2 regularity." The solution in these cases is to trade achieved representation against computational search. We investigate several ways in which such a trade-off may be pursued, including simple incremental learning, modular connectionism, and the developmental hypothesis of "representational redescription."
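The type-2 idea in the two abstracts above can be given a toy illustration (the example is mine, not the authors'): parity over raw bits is statistically invisible, since no single bit predicts the output better than chance, yet one recoding, counting the 1s, reduces it to a trivially learnable type-1 lookup on a single derived feature.

```python
# Toy illustration of a type-2 regularity: 4-bit parity. Over the raw
# bits the regularity is statistically attenuated; after recoding the
# input (sum of bits) the mapping becomes a one-feature lookup table.
from itertools import product

def parity(bits):
    return sum(bits) % 2

cases = list(product([0, 1], repeat=4))

# Raw view: each individual bit agrees with the output exactly half the
# time, so no unbiased single-feature statistic reveals the regularity.
for i in range(4):
    agree = sum(b[i] == parity(b) for b in cases)
    assert agree == len(cases) // 2

# Recoded view: one derived feature (the bit count) determines the output.
recoded = {sum(b): parity(b) for b in cases}
print(recoded)
```

The recoding here is hand-picked, which is exactly the puzzle the abstracts raise: the space of possible recodings is vast, so some general mechanism must guide the search for the right one.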
Brains, unlike artificial neural nets, use symbols to summarise and reason about perceptual input. But unlike symbolic AI, they "ground" the symbols in the data: the symbols have meaning in terms of the data, not just meaning imposed by an outside user. If neural nets could be made to grow their own symbols in the way that brains do, there would be a good prospect of combining neural networks and symbolic AI in such a way as to combine the good features of each. The article argues that cluster analysis provides algorithms to perform this task, and that any solution to the task must be a form of cluster analysis.
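As a hedged illustration of the clustering claim (the toy algorithm below is mine, not the article's), a one-dimensional k-means pass shows how discrete "symbols" can condense out of unlabeled data, each grounded in the region of input space it labels rather than imposed from outside.

```python
# Toy 1-D k-means (illustrative sketch): each converged center acts as a
# "symbol" grounded in the cluster of data points it summarises.

def kmeans_1d(data, centers, iters=10):
    """Assign points to the nearest center, then move centers to cluster means."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for x in data:
            nearest = min(centers, key=lambda c: abs(c - x))
            clusters[nearest].append(x)
        centers = [sum(pts) / len(pts) if pts else c
                   for c, pts in clusters.items()]
    return sorted(centers)

# Two perceptual "categories" hidden in unlabeled data; the initial
# centers are deliberately poor guesses.
data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]
symbols = kmeans_1d(data, centers=[0.0, 10.0])
print(symbols)
```

The centers end up near the two cluster means, so each "symbol" has meaning in terms of the data it summarises, in the spirit of the grounding the article describes.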
Churchland underestimates the power and purpose of the Turing Test, dismissing it as the trivial game to which the Loebner Prize (offered for the computer program that can fool judges into thinking it's human) has reduced it, whereas it is really an exacting empirical criterion: it requires that the candidate model for the mind have our full behavioral capacities -- so fully that it is indistinguishable from any of us, to any of us (not just for one Contest night, but for a lifetime). Scaling up to such a model is (or ought to be) the programme of that branch of reverse bioengineering called cognitive science. It's harmless enough to do the hermeneutics after the research has been successfully completed, but self-deluding and question-begging to do it before.
The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to miss the deep organizing features of the domain. Various solutions are examined, including task-specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical and informed undersampling in early learning.
Hubert and Stuart Dreyfus have tried to place connectionism and artificial intelligence in a broader historical and intellectual context. This history associates connectionism with neuroscience, conceptual holism, and nonrationalism, and artificial intelligence with conceptual atomism, rationalism, and formal logic. The present paper argues that the Dreyfus account of connectionism and artificial intelligence is both historically and philosophically misleading.
This paper critically examines the claim that parallel distributed processing (PDP) networks are autonomous learning systems. A PDP model of a simple distributed associative memory is considered. It is shown that the 'generic' PDP architecture cannot implement the computations required by this memory system without the aid of external control. In other words, the model is not autonomous. Two specific problems are highlighted: (i) simultaneous learning and recall are not permitted to occur as would be required of an autonomous system; (ii) connections between processing units cannot simultaneously represent current and previous network activation as would be required if learning is to occur. Similar problems exist for more sophisticated networks constructed from the generic PDP architecture. We argue that this is because these models are not adequately constrained by the properties of the functional architecture assumed by PDP modelers. It is also argued that without such constraints, PDP researchers cannot claim to have developed an architecture radically different from that proposed by the Classical approach in cognitive science.
In the first section of the article, we examine some recent criticisms of the connectionist enterprise: first, that connectionist models are fundamentally behaviorist in nature (and, therefore, non-cognitive), and second, that connectionist models are fundamentally associationist in nature (and, therefore, cognitively weak). We argue that, for a limited class of connectionist models (feed-forward, pattern-associator models), the first criticism is unavoidable. With respect to the second criticism, we propose that connectionist models are fundamentally associationist but that this is appropriate for building models of human cognition. However, we do accept the point that there are cognitive capacities for which no purely associative model can provide a satisfactory account. The implication that we draw from this is not that associationist models and mechanisms should be scrapped, but rather that they should be enhanced. In the next section of the article, we identify a set of connectionist approaches which are characterized by "active symbols" — recurrent circuits which are the basis of knowledge representation. We claim that such approaches avoid criticisms of behaviorism and are, in principle, capable of supporting full cognition. In the final section of the article, we speculate at some length about what we believe would be the characteristics of a fully realized active-symbol system. This includes both potential problems and possible solutions (for example, mechanisms needed to control activity in a complex recurrent network) as well as the promise of such systems (in particular, the emergence of knowledge structures which would constitute genuine internal models).
These sixty contributions from researchers in ethology, ecology, cybernetics, artificial intelligence, robotics, and related fields delve into the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. They focus in particular on simulation models in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals. Jean-Arcady Meyer is Director of Research at CNRS, Paris. Stewart W. Wilson is a Scientist at The Rowland Institute for Science, Cambridge, Massachusetts.
Classical symbolic computational models of cognition are at variance with the empirical findings in the cognitive psychology of memory and inference. Standard symbolic computers are well suited to remembering arbitrary lists of symbols and performing logical inferences. In contrast, human performance on such tasks is extremely limited. Standard models do not easily capture content-addressable memory or context-sensitive defeasible inference, which are natural and effortless for people. We argue that Connectionism provides a more natural framework in which to model this behaviour. In addition to capturing the gross human performance profile, Connectionist systems seem well suited to accounting for the systematic patterns of errors observed in the human data. We take these arguments to counter Fodor and Pylyshyn's (1988) recent claim that Connectionism is, in principle, irrelevant to psychology.
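The content-addressable memory the authors appeal to can be illustrated with a toy Hopfield-style network (my sketch, not a model from the paper): a whole stored pattern is retrieved from a corrupted cue, something a symbolic list lookup does not provide for free.

```python
# Toy Hopfield-style content-addressable memory (illustrative sketch):
# Hebbian training stores a pattern in the connection weights, and
# iterated updates retrieve the full pattern from a partial/noisy cue.

def train(patterns):
    """Hebbian weights: units that are active together get positive links."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    """Update each unit from the others until the pattern settles."""
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            total = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, 1]   # two units flipped
print(recall(w, noisy) == stored)
```

Retrieval here is by content (similarity to the stored pattern), not by address, which is the contrast with standard symbolic memory the abstract draws.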
In this paper the issue of drawing inferences about biological cognitive systems on the basis of connectionist simulations is addressed. In particular, the justification of inferences based on connectionist models trained using the backpropagation learning algorithm is examined. First it is noted that a justification commonly found in the philosophical literature is inapplicable. Then some general issues are raised about the relationships between models and biological systems. A way of conceiving the role of hidden units in connectionist networks is then introduced. This, in combination with an assumption about the way evolution goes about solving problems, is then used to suggest a means of justifying inferences about biological systems based on connectionist research.