Several authors have hailed intuition as one of the defining features of expertise. In particular, while disagreeing on almost everything that touches on human cognition and artificial intelligence, Hubert Dreyfus and Herbert Simon agreed on this point. However, the highly influential theories of intuition they proposed differed in major ways, especially with respect to the role given to search and to whether intuition is holistic or analytic. Both theories suffer from empirical weaknesses. In this paper, we show how, with some additions, a recent theory of expert memory (the template theory) offers a coherent and wide-ranging explanation of intuition in expert behaviour. It is shown that the theory accounts for the key features of intuition: it explains the rapid onset of intuition and its perceptual nature, provides mechanisms for learning, incorporates processes showing how perception is linked to action and emotion, and explains how experts capture the entirety of a situation. In doing so, the new theory addresses the issues problematic for Dreyfus’s and Simon’s theories. Implications for research and practice are discussed.
In several papers, Hubert Dreyfus has used chess as a paradigmatic example of how experts act intuitively, rarely using deliberation when selecting actions, while individuals who are merely competent rely on analytic and deliberative thought. By contrast, Montero and Evans (Phenomenology and the Cognitive Sciences 10:175–194, 2011) argue that intuitive aspects of chess are actually rational, in the sense that actions can be justified. In this paper, I show that both Dreyfus’s and Montero and Evans’s views are too extreme, and that expertise in chess, and presumably in other domains, depends on a combination of intuitive thinking and deliberative search, both mediated by perceptual processes. There is more to expertise than just rational thought. I further contend that both sides ignore emotions, which are important in acquiring and maintaining expertise. Finally, I argue that experimental data and first-person data, which are sometimes presented as irreconcilable in the phenomenology literature, actually lead to similar conclusions.
In a famous study of expert problem solving, de Groot (1946/1978) examined how chess players found the best move. He reported that there was little difference in the way that the best players (Grand Masters) and very good players (Candidate Masters) searched the board. Although this result has been regularly cited in studies of expertise, it is frequently misquoted. It is often claimed that de Groot found no difference in the way that experts and novices investigate a problem. Comparison of expert and novice chess players on de Groot's problem shows that there are clear differences in their search patterns. We discuss the troublesome theoretical and practical consequences of incorrectly reporting de Groot's findings.
Understanding how look-ahead search and pattern recognition interact is one of the important research questions in the study of expert problem solving. This paper examines the implications of the template theory (Gobet & Simon, 1996a), a recent theory of expert memory, for the theory of problem solving in chess. Templates are chunks (Chase & Simon, 1973) that have evolved into more complex data structures and that possess slots allowing values to be encoded rapidly. Templates may facilitate search in three ways: (a) by allowing information to be stored into LTM rapidly; (b) by allowing a search in the template space in addition to a search in the move space; and (c) by compensating for loss in the mind's eye due to interference and decay. A computer model implementing the main ideas of the theory is presented, and simulations of its search behaviour are discussed. The template theory accounts for the slight skill difference in average depth of search found in chess players, as well as for other empirical data.
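To make the slot mechanism concrete, here is a minimal sketch in Python of a template as a chunk with a stable core and rapidly fillable slots. The class and its fields are illustrative assumptions for exposition, not the actual CHREST data structures.

```python
# Illustrative sketch only: a template as a chunk with a stable core
# plus slots that can be filled rapidly when a matching position is seen.
# Names and representation are assumptions, not the CHREST implementation.

class Template:
    def __init__(self, core, slot_names):
        self.core = frozenset(core)                       # stable chunk information
        self.slots = {name: None for name in slot_names}  # variable information

    def matches(self, position):
        """A template is retrieved when its core is contained in the position."""
        return self.core <= position

    def fill_slots(self, position):
        """Encode the variable parts of the position into slots.
        Unlike normal LTM learning, this is assumed to be fast."""
        extras = sorted(position - self.core)
        for name, value in zip(self.slots, extras):
            self.slots[name] = value

# Usage: a chess-like position matched against a template's core
template = Template(core={"Ke1", "Pd5", "Pe4"}, slot_names=["slot1", "slot2"])
position = {"Ke1", "Pd5", "Pe4", "Nf3", "Bc4"}
if template.matches(position):
    template.fill_slots(position)
print(template.slots)  # the two extra pieces now occupy the slots
```

Under this reading, searching "in the template space" amounts to moving between templates whose cores match the evolving position, while rapid slot filling suggests how information can enter LTM quickly enough to compensate for decay in the mind's eye.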
In this study we use a computational model of language learning called model of syntax acquisition in children (MOSAIC) to investigate the extent to which the optional infinitive (OI) phenomenon in Dutch and English can be explained in terms of a resource-limited distributional analysis of Dutch and English child-directed speech. The results show that the same version of MOSAIC is able to simulate changes in the pattern of finiteness marking in 2 children learning Dutch and 2 children learning English as the average length of their utterances increases. These results suggest that it is possible to explain the key features of the OI phenomenon in both Dutch and English in terms of the interaction between an utterance-final bias in learning and the distributional characteristics of child-directed speech in the 2 languages. They also show how computational modeling techniques can be used to investigate the extent to which cross-linguistic similarities in the developmental data can be explained in terms of common processing constraints as opposed to innate knowledge of universal grammar.
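The utterance-final bias can be illustrated with a toy fragment of code. The sketch below is a loose simplification rather than MOSAIC itself: it encodes utterance-final fragments of child-directed speech with a probability that decays as fragments extend further back from the end of the utterance. The decay parameter and representation are assumptions.

```python
# Toy illustration of an utterance-final learning bias (not MOSAIC itself):
# fragments that end the utterance are learned most readily, and longer
# fragments reaching further back are learned with lower probability.

import random

def learn_from_utterance(utterance, lexicon, decay=0.5):
    words = utterance.split()
    n = len(words)
    for length in range(1, n + 1):
        fragment = tuple(words[n - length:])   # always utterance-final
        p = decay ** (length - 1)              # bias toward the final words
        if random.random() < p:
            lexicon[fragment] = lexicon.get(fragment, 0) + 1

lexicon = {}
for _ in range(100):
    learn_from_utterance("can he go home", lexicon)  # compound finite in the input
print(sorted(lexicon.items(), key=lambda kv: -kv[1])[:3])
```

On this kind of account, subject-plus-nonfinite fragments such as "he go home" are learned because they occur in final position after modals and auxiliaries in the input, yielding OI-style errors without appeal to innate grammatical knowledge.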
The relation between mind and brain is one of the big scientific questions that has attracted scientists’ attention for centuries but also eluded their understanding. In this book, William Uttal provides a critical review of cognitive neuroscience, focusing on a specific question: What do the brain-imaging techniques developed in the last two decades or so—mostly functional magnetic resonance imaging and positron emission tomography—tell us about the brain-mind problem? His unambiguous and abrasive answer is: nothing. The book is organized in nine chapters. The introductory chapter provides historical, methodological, and philosophical background. Importantly, it highlights a shift in the way neuroscientists think about modularity and localization. Traditionally, researchers using brain imaging have tended to subscribe to a strong view of modularity and localization, where distinct cognitive modules are assumed to be localized in well-defined regions of the brain.
The development of computational models to provide explanations of psychological data can be achieved using semi-automated search techniques, such as genetic programming. One challenge with these techniques is to control the type of model that is evolved so that it remains cognitively plausible – a typical problem is that of “bloating”, where continued evolution generates models of increasing size without improving overall fitness. In this paper we describe a system for representing psychological data, a class of process-based models, and algorithms for evolving models. We apply this system to the delayed match-to-sample task. We show how the challenge of bloating may be addressed by extending the fitness function to include measures of cognitive performance.
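As a hedged illustration of that last point, the sketch below shows one way a fitness function might fold measures of cognitive performance into the score alongside fit to the data, so that bloated or implausibly slow models are penalised. The stub model, weights, and penalty terms are assumptions for exposition, not the system described in the paper.

```python
# Illustrative sketch: extending a fitness function with cognitive measures
# to counter bloat. The stub model and weights are assumptions, not the
# actual system described in the paper.

class StubModel:
    """Stand-in for an evolved process-based model."""
    def __init__(self, nodes, ms_per_step=50):
        self.nodes = nodes                 # size of the evolved program tree
        self.ms_per_step = ms_per_step     # simulated time cost per operation

    def predict(self, x):
        return float(x)                    # trivial placeholder behaviour

    def num_nodes(self):
        return self.nodes

    def simulated_ms(self):
        return self.nodes * self.ms_per_step

def fitness(model, data):
    """Lower is better: error on the data, plus penalties for model size
    (counters bloat) and simulated response time (cognitive plausibility)."""
    error = sum((model.predict(x) - y) ** 2 for x, y in data)
    return error + 0.01 * model.num_nodes() + 0.001 * model.simulated_ms()

data = [(1, 1.0), (2, 2.1), (3, 2.9)]
print(fitness(StubModel(nodes=10), data))    # small model
print(fitness(StubModel(nodes=500), data))   # same fit, heavily penalised
```

With penalties of this kind, two models that fit the data equally well no longer tie: the smaller and temporally more plausible one wins, which removes the selective pressure that drives bloat.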
What is ‘counterintuitive’? There is general agreement that it refers to a violation of previously held knowledge, but the precise definition seems to vary with every author and study. The aim of this paper is to deconstruct the notion of ‘counterintuitive’ and provide a more philosophically rigorous definition congruent with the history of psychology, recent experimental work on ‘minimally counterintuitive’ concepts, the science vs. religion debate, and the developmental and evolutionary background of human beings. We conclude that previous definitions of counterintuitiveness have been flawed and did not resolve the conflict between a believer’s conception of the supernatural entity (an atypical “real kind”) and the non-believer’s conception (empty name/fictional). Furthermore, too much emphasis has been placed on the universality and (presumed) innateness of intuitive concepts (and hence the criteria for what is counterintuitive)—and far too little attention paid to learning and expertise. We argue that many putatively universal concepts are not innate, but mostly learned and defeasible—part of a religious believer’s repertoire of expert knowledge. Nonetheless, the results from empirical studies about the memorability of counterintuitive concepts have been convincing, and it is difficult to improve on existing designs and methodologies. However, future studies of counterintuitive concepts need to embed their work in research on context effects, typicality, and the psychology of learning and expertise (for example, the formation of expert templates and range defaults), with more attention to the sources of knowledge (direct and indirect knowledge) and a better idea of what ‘default’ knowledge really is.
Cognitive neuroscience is the branch of neuroscience that studies the neural mechanisms underpinning cognition and develops theories explaining them. Within cognitive neuroscience, computational neuroscience focuses on modeling behavior, using theories expressed as computer programs. Up to now, computational theories have been formulated by neuroscientists. In this paper, we present a new approach to theory development in neuroscience: the automatic generation and testing of cognitive theories using genetic programming (GP). Our approach evolves, from experimental data, cognitive theories that explain the “mental program” that subjects use to solve a specific task. As an example, we have focused on a typical neuroscience experiment, the delayed-match-to-sample (DMTS) task. The main goal of our approach is to develop a tool that neuroscientists can use to develop better cognitive theories.
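To give a flavour of how such generation and testing might proceed, here is a deliberately simplified sketch: candidate "mental programs" are sequences of primitive operations, scored by a toy stand-in for comparison against behavioural data, and improved by mutation. The primitives, the scoring rule, and the (1+1) loop are assumptions for illustration, not the actual system.

```python
# Deliberately simplified sketch of evolving a "mental program" for a
# DMTS-like task. Primitives, scoring, and the loop are assumptions for
# illustration, not the actual system described in the paper.

import random

PRIMITIVES = ["attend_sample", "store_stm", "wait", "compare_probe", "respond"]

def random_program(length=5):
    return [random.choice(PRIMITIVES) for _ in range(length)]

def score(program):
    """Toy stand-in for running the program and comparing its behaviour
    with subject data: reward storing the sample before comparing it,
    and ending with an overt response."""
    s = 0.0
    if "store_stm" in program and "compare_probe" in program:
        if program.index("store_stm") < program.index("compare_probe"):
            s += 1.0
    if program and program[-1] == "respond":
        s += 1.0
    return s

def mutate(program):
    child = program[:]
    child[random.randrange(len(child))] = random.choice(PRIMITIVES)
    return child

best = random_program()
for _ in range(500):                       # simple (1+1) evolutionary loop
    child = mutate(best)
    if score(child) >= score(best):
        best = child
print(best, score(best))
```

In a full system, the score would come from simulating each candidate on the task and measuring its fit to subjects' accuracy and response times, but the evolve-and-test structure is the same.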
Pioneering work in the 1940s and 1950s suggested that the concept of chunking might be important in many processes of perception, learning and cognition in humans and animals. We summarize here the major sources of evidence for chunking mechanisms, and consider how such mechanisms have been implemented in computational models of the learning process. We distinguish two forms of chunking: the first deliberate, under strategic control, and goal-oriented; the second automatic, continuous, and linked to perceptual processes. Recent work with discrimination-network computational models of long- and short-term memory (EPAM/CHREST) has produced a diverse range of applications of perceptual chunking. We focus on recent successes in verbal learning, expert memory, language acquisition and learning multiple representations, to illustrate the implementation and use of chunking mechanisms within contemporary models of human learning.
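A minimal sketch of the discrimination-network idea may help, with representation details assumed for exposition: patterns are sorted down the network by testing successive features, and learning grows the network by adding a new test link below the node reached.

```python
# Minimal sketch of perceptual chunking with a discrimination network in
# the EPAM/CHREST tradition. Details are simplified assumptions, not the
# actual EPAM or CHREST implementation.

class Node:
    def __init__(self, image=()):
        self.image = image       # the chunk stored at this node
        self.children = {}       # feature -> child Node (test links)

def sort_pattern(root, pattern):
    """Follow matching test links as far as the network allows."""
    node, i = root, 0
    while i < len(pattern) and pattern[i] in node.children:
        node = node.children[pattern[i]]
        i += 1
    return node, i

def learn(root, pattern):
    """Discrimination: add one test link below the node reached."""
    node, i = sort_pattern(root, pattern)
    if i < len(pattern):
        node.children[pattern[i]] = Node(image=pattern[:i + 1])

root = Node()
for _ in range(3):                        # repeated exposure grows the chunk
    learn(root, ("N", "f", "3"))
node, depth = sort_pattern(root, ("N", "f", "3"))
print(node.image)                         # ('N', 'f', '3') after three exposures
```

Repeated exposure thus turns frequently encountered feature sequences into progressively larger chunks, which is the sense in which perceptual chunking is automatic and continuous.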
Computational models of learning provide an alternative technique for identifying the number and type of chunks used by a subject in a specific task. Results from applying CHREST to chess expertise support the theoretical framework of Cowan and a limit in visual short-term memory capacity of 3–4 items. An application to learning from diagrams illustrates different identifiable forms of chunks.
Newell argued that progress in psychology was slow because research focused on experiments trying to answer binary questions, such as serial versus parallel processing. In addition, not enough attention was paid to the strategies used by participants, and there was a lack of theories implemented as computer models offering sufficient precision to be tested rigorously. He proposed a three-pronged research program: to develop computational models able to carry out the task they aimed to explain; to study one complex task, such as chess, in detail; and to build computational models that can account for multiple tasks. This article assesses the extent to which the papers in this issue advance Newell's program. While half of the papers devote much attention to strategies, several papers still average across them, a cardinal sin according to Newell. The three courses of action he proposed were not popular in these papers: Only two papers used computational models, with no model being both able to carry out the task and to account for human data; there was no systematic analysis of a specific video game; and no paper proposed a computational model accounting for human data in several tasks. It is concluded that, while they use sophisticated methods of analysis and discuss interesting results, overall these papers contribute only little to Newell's program of research. In this respect, they reflect the current state of psychology and cognitive science. This is a shame, as Newell's ideas might help address the current crises of replication failure and fraud in psychology.
We discuss the relation of the Theory of Event Coding (TEC) to a computational model of expert perception, CHREST, based on the chunking theory. TEC's status as a verbal theory leaves several questions unanswerable, such as the precise nature of the internal representations used or the degree of learning required to attain a particular level of competence: CHREST may help answer such questions.