The Einstellung effect occurs when the first idea that comes to mind, triggered by familiar features of a problem, prevents a better solution from being found. It has been shown to affect both people facing novel problems and experts within their field of expertise. We show that it works by influencing mechanisms that determine what information is attended to. Having found one solution, expert chess players reported that they were looking for a better one. But their eye movements showed that they continued to look at features of the problem related to the solution they had already thought of. The mechanism which allows the first schema activated by familiar aspects of a problem to control the subsequent direction of attention may contribute to a wide range of biases in both everyday and expert thought, from confirmation bias in hypothesis testing to the tendency of scientists to ignore results that do not fit their favoured theories.
Several authors have hailed intuition as one of the defining features of expertise. In particular, while disagreeing on almost everything that touches on human cognition and artificial intelligence, Hubert Dreyfus and Herbert Simon agreed on this point. However, the highly influential theories of intuition they proposed differed in major ways, especially with respect to the role given to search and as to whether intuition is holistic or analytic. Both theories suffer from empirical weaknesses. In this paper, we show how, with some additions, a recent theory of expert memory (the template theory) offers a coherent and wide-ranging explanation of intuition in expert behaviour. It is shown that the theory accounts for the key features of intuition: it explains the rapid onset of intuition and its perceptual nature, provides mechanisms for learning, and incorporates processes showing how perception is linked to action and emotion, and how experts capture the entirety of a situation. In doing so, the new theory addresses the issues problematic for Dreyfus’s and Simon’s theories. Implications for research and practice are discussed.
For many years, the game of chess has provided an invaluable task environment for research on cognition, in particular on the differences between novices and experts and the learning that removes these differences, and on the structure of human memory and its parameters. The template theory presented by Gobet and Simon, based on the EPAM theory, offers precise predictions on cognitive processes during the presentation and recall of chess positions. This article describes the behavior of CHREST, a computer implementation of the template theory, in a memory task when the presentation time is varied from one second to sixty, on the recall of game and random positions, and compares the model to human data. Strong players are better than weak players in both types of positions, especially with long presentation times, but even after brief presentations. CHREST predicts the data, both qualitatively and quantitatively. Strong players' superiority with random positions is explained by the large number of chunks they hold in LTM. Their excellent recall with short presentation times is explained by templates, a special class of chunks. CHREST is compared to other theories of chess skill, which either cannot account for the superiority of Masters in random positions or predict too strong a performance of Masters in such positions.
In this study, we apply MOSAIC (model of syntax acquisition in children) to the simulation of the developmental patterning of children's optional infinitive (OI) errors in 4 languages: English, Dutch, German, and Spanish. MOSAIC, which has already simulated this phenomenon in Dutch and English, now implements a learning mechanism that better reflects the theoretical assumptions underlying it, as well as a chunking mechanism that results in frequent phrases being treated as 1 unit. Using 1 identical model that learns from child‐directed speech, we obtain a close quantitative fit to the data from all 4 languages despite there being considerable cross‐linguistic and developmental variation in the OI phenomenon. MOSAIC successfully simulates the difference between Spanish (a pro‐drop language in which OI errors are virtually absent) and obligatory subject languages that do display the OI phenomenon. It also highlights differences in the OI phenomenon across German and Dutch, 2 closely related languages whose grammar is virtually identical with respect to the relation between finiteness and verb placement. Taken together, these results suggest that (a) cross‐linguistic differences in the rates at which children produce OIs are graded, quantitative differences that closely reflect the statistical properties of the input they are exposed to and (b) theories of syntax acquisition need to consider more closely the role of input characteristics as determinants of quantitative differences in the cross‐linguistic patterning of phenomena in language acquisition.
Expert chess players, specialized in different openings, recalled positions and solved problems within and outside their area of specialization. While their general expertise was at a similar level, players performed better with stimuli from their area of specialization. The effect of specialization on both recall and problem solving was strong enough to override general expertise—players remembering positions and solving problems from their area of specialization performed at around the level of players 1 standard deviation (SD) above them in general skill. Their problem‐solving strategy also changed depending on whether the problem was within their area of specialization. When it was, they searched more in depth and less in breadth; with problems outside their area of specialization, the reverse. The knowledge that comes from familiarity with a problem area is more important than general purpose strategies in determining how an expert will tackle it. These results demonstrate the link in experts between problem solving and memory of specific experiences and indicate that the search for context‐independent general purpose problem‐solving strategies to teach to future experts is unlikely to be successful.
In several papers, Hubert Dreyfus has used chess as a paradigmatic example of how experts act intuitively, rarely using deliberation when selecting actions, while individuals that are only competent rely on analytic and deliberative thought. By contrast, Montero and Evans (Phenomenology and the Cognitive Sciences 10:175–194, 2011 ) argue that intuitive aspects of chess are actually rational, in the sense that actions can be justified. In this paper, I show that both Dreyfus’s and Montero and Evans’s views are too extreme, and that expertise in chess, and presumably in other domains, depends on a combination of intuitive thinking and deliberative search, both mediated by perceptual processes. There is more to expertise than just rational thought. I further contend that both sides ignore emotions, which are important in acquiring and maintaining expertise. Finally, I argue that experimental data and first-person data, which are sometimes presented as irreconcilable in the phenomenology literature, actually lead to similar conclusions.
In this study we use a computational model of language learning called model of syntax acquisition in children (MOSAIC) to investigate the extent to which the optional infinitive (OI) phenomenon in Dutch and English can be explained in terms of a resource-limited distributional analysis of Dutch and English child-directed speech. The results show that the same version of MOSAIC is able to simulate changes in the pattern of finiteness marking in 2 children learning Dutch and 2 children learning English as the average length of their utterances increases. These results suggest that it is possible to explain the key features of the OI phenomenon in both Dutch and English in terms of the interaction between an utterance-final bias in learning and the distributional characteristics of child-directed speech in the 2 languages. They also show how computational modeling techniques can be used to investigate the extent to which cross-linguistic similarities in the developmental data can be explained in terms of common processing constraints as opposed to innate knowledge of universal grammar.
Understanding how look-ahead search and pattern recognition interact is one of the important research questions in the study of expert problem solving. This paper examines the implications of the template theory (Gobet & Simon, 1996a), a recent theory of expert memory, for the theory of problem solving in chess. Templates are chunks (Chase & Simon, 1973) that have evolved into more complex data structures and that possess slots allowing values to be encoded rapidly. Templates may facilitate search in three ways: (a) by allowing information to be stored into LTM rapidly; (b) by allowing a search in the template space in addition to a search in the move space; and (c) by compensating for loss in the mind's eye due to interference and decay. A computer model implementing the main ideas of the theory is presented, and simulations of its search behaviour are discussed. The template theory accounts for the slight skill difference in average depth of search found in chess players, as well as for other empirical data.
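The core data structure described above — a chunk extended with slots whose values can be filled in rapidly — can be illustrated with a minimal sketch. This is a hypothetical toy, not the actual CHREST implementation; the class, square names, and piece codes are invented for illustration.

```python
# Hypothetical sketch of a template: a stable core pattern plus slots
# for variable information. Illustrative only, not the CHREST code.

class Template:
    def __init__(self, core, slot_names):
        self.core = dict(core)                # stable piece-on-square pattern
        self.slots = {name: None for name in slot_names}  # variable squares

    def matches(self, position):
        """A position matches if it contains the template's core pattern."""
        return all(position.get(sq) == piece for sq, piece in self.core.items())

    def fill_slots(self, position):
        """Encode variable information quickly by filling slots
        (cf. rapid storage into LTM, point (a) above)."""
        for sq in self.slots:
            self.slots[sq] = position.get(sq)

# Toy usage with an invented pawn-structure fragment
t = Template(core={"g7": "bB", "g6": "bP"}, slot_names=["e5", "d6"])
pos = {"g7": "bB", "g6": "bP", "e5": "bP", "d6": "bP"}
if t.matches(pos):
    t.fill_slots(pos)
print(t.slots)  # slot values filled from the position
```

The design point the sketch makes is that matching the core is recognition, while filling slots is cheap per-item encoding — which is how templates can speed search without re-learning whole positions.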
In a famous study of expert problem solving, de Groot (1946/1978) examined how chess players found the best move. He reported that there was little difference in the way that the best players (Grand Masters) and very good players (Candidate Masters) searched the board. Although this result has been regularly cited in studies of expertise, it is frequently misquoted. It is often claimed that de Groot found no difference in the way that experts and novices investigate a problem. Comparison of expert and novice chess players on de Groot's problem shows that there are clear differences in their search patterns. We discuss the troublesome theoretical and practical consequences of incorrectly reporting de Groot's findings.
What is ‘counterintuitive’? There is general agreement that it refers to a violation of previously held knowledge, but the precise definition seems to vary with every author and study. The aim of this paper is to deconstruct the notion of ‘counterintuitive’ and provide a more philosophically rigorous definition congruent with the history of psychology, recent experimental work in ‘minimally counterintuitive’ concepts, the science vs. religion debate, and the developmental and evolutionary background of human beings. We conclude that previous definitions of counterintuitiveness have been flawed and did not resolve the conflict between a believer’s conception of the supernatural entity (an atypical “real kind”) and the non-believer’s conception (empty name/fictional). Furthermore, too much emphasis has been placed on the universality and (presumed) innateness of intuitive concepts (and hence the criteria for what is counterintuitive)—and far too little attention paid to learning and expertise. We argue that many putatively universal concepts are not innate, but mostly learned and defeasible—part of a religious believer’s repertoire of expert knowledge. Nonetheless, the results from empirical studies about the memorability of counterintuitive concepts have been convincing and it is difficult to improve on existing designs and methodologies. However, future studies in counterintuitive concepts need to embed their work in research about context effects, typicality, the psychology of learning and expertise (for example, the formation of expert templates and range defaults), with more attention to the sources of knowledge (direct and indirect knowledge) and a better idea of what ‘default’ knowledge really is.
In this commentary, we discuss an important pattern of results in the literature on the neural basis of expertise: decrease of cerebral activation at the beginning of acquisition of expertise and functional cerebral reorganization as a consequence of years of practice. We show how these two results can be integrated with the neural reuse framework.
The relation between mind and brain is one of the big scientific questions that has attracted scientists’ attention for centuries but also eluded their understanding. In this book, William Uttal provides a critical review of cognitive neuroscience, focusing on a specific question: What do the brain-imaging techniques developed in the last two decades or so—mostly functional magnetic resonance imaging and positron emission tomography—tell us about the brain-mind problem? His unambiguous and abrasive answer is: nothing. The book is organized in nine chapters. The introductory chapter provides historical, methodological, and philosophical background. Importantly, it highlights a shift in the way neuroscientists think about modularity and localization. Traditionally, researchers using brain imaging have tended to subscribe to a strong view of modularity and localization, where distinct cognitive modules are assumed to be localized in well-defined regions of the brain.
The original book chapter does not have an abstract. However, I have written an abstract for this repository: Religious life encompasses a wide diversity of situations for which the emotional tone is on a continuum from extreme euphoria to extreme dysphoria. In this book chapter, we propose the novel hypothesis that euphoria and dysphoria have distinctly separate functional consequences for religious evolution and survivability. This is due to the differential cognitive states that are created in euphoric and dysphoric situations. Based on readings from religious studies and cognitive psychology, we propose that euphoria in religion is conducive to social bonding and situations needing lateral thinking and creativity; whereas dysphoria in religion is conducive to situations where precision and analogical reasoning are necessary.
Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long‐term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long‐term knowledge and WM capacity to determine the relative contributions of each in explaining the developmental data. The simulations show that (a) both mechanisms independently cause the same overall developmental changes in NWR performance, (b) increase in long‐term knowledge provides the better fit to the child data, and (c) varying both long‐term knowledge and WM capacity adds no significant gains over varying long‐term knowledge alone. Given that increases in long‐term knowledge must occur during development, the results indicate that increases in WM capacity may not be required to explain developmental differences. An increase in WM capacity should only be cited as a mechanism of developmental change when there are clear empirical reasons for doing so.
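The logic of the manipulation described above — long-term knowledge and WM capacity as independent levers on repetition performance — can be sketched with a deliberately simple toy. This is not the authors' model; the greedy chunk-covering scheme, the syllable lists, and the slot cost are all invented assumptions for illustration.

```python
# Illustrative toy (not the published NWR model): a "nonword" can be
# repeated if its syllables are covered by known chunks (long-term
# knowledge) within a fixed number of WM slots. Both growing the chunk
# inventory and adding WM slots improve performance.

def can_repeat(nonword, chunks, wm_slots):
    """Greedily cover the syllable list with known chunks;
    each chunk (or unknown syllable) costs one WM slot."""
    i, used = 0, 0
    while i < len(nonword):
        # find the longest known chunk starting at position i
        best = 0
        for c in chunks:
            if nonword[i:i + len(c)] == c:
                best = max(best, len(c))
        if best == 0:
            best = 1  # fall back to a single unknown syllable
        i += best
        used += 1
    return used <= wm_slots

word = ["ba", "lo", "ki", "tu"]
small_knowledge = [["ba"], ["lo"]]
large_knowledge = [["ba", "lo"], ["ki", "tu"]]
print(can_repeat(word, small_knowledge, 3))  # False: 4 WM units needed
print(can_repeat(word, large_knowledge, 3))  # True: 2 chunks suffice
```

The toy makes the confound concrete: the second call succeeds either because knowledge grew (fewer units needed) or because slots could have grown — the two levers trade off, which is why the model has to vary them independently.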
Linhares and Brum (2007) argue that they provide evidence for analogy as the main principle behind experts’ acquisition of perceptual knowledge. However, the methodology they used—asking players to pair positions using abstract similarity—raises the possibility that the task reflects more the effect of directional instructions than the principles underlying the acquisition of knowledge. Here we replicate and extend Linhares and Brum’s experiment and show that the matching task they used is inadequate for drawing any conclusions about the nature of experts’ perception. When expert chess players were instructed to match problems based on similarities at the abstract level (analogy), they produced more abstract pairs than pairs based on concrete similarity. However, the same experts produced more concrete pairs than abstract ones when instructed to match the problems based on concrete similarity. Asking experts to match problems using explicit instructions is not an appropriate way to show the importance of either analogy or similarity in the acquisition of expert knowledge. Experts simply do what they are told to do.
Computational models of learning provide an alternative technique for identifying the number and type of chunks used by a subject in a specific task. Results from applying CHREST to chess expertise support the theoretical framework of Cowan and a limit in visual short-term memory capacity of 3–4 items. An application to learning from diagrams illustrates different identifiable forms of chunk.
Cognitive neuroscience is the branch of neuroscience that studies the neural mechanisms underpinning cognition and develops theories explaining them. Within cognitive neuroscience, computational neuroscience focuses on modeling behavior, using theories expressed as computer programs. Up to now, computational theories have been formulated by neuroscientists. In this paper, we present a new approach to theory development in neuroscience: the automatic generation and testing of cognitive theories using genetic programming (GP). Our approach evolves, from experimental data, cognitive theories that explain “the mental program” that subjects use to solve a specific task. As an example, we have focused on a typical neuroscience experiment, the delayed-match-to-sample (DMTS) task. The main goal of our approach is to develop a tool that neuroscientists can use to develop better cognitive theories.
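The idea of automatically generating and testing candidate theories against data can be conveyed with a drastically simplified sketch. This toy uses only random generation and selection, not full genetic programming with crossover and mutation, and the candidate "theories", data, and fitness function are invented for illustration — none of it is the authors' system.

```python
# Drastically simplified flavour of automated theory search (not GP proper,
# and not the authors' system): candidate theories are tiny parameterised
# functions scored against data, and the best-fitting one is kept.

import random

random.seed(0)

OPS = [lambda a, b: a + b, lambda a, b: a * b, lambda a, b: a - b]

def random_theory():
    """Generate a random candidate theory: one operator and one constant."""
    op = random.choice(OPS)
    k = random.randint(0, 3)
    return lambda x, op=op, k=k: op(x, k)

def fitness(theory, data):
    """Lower is better: summed error between predictions and observations."""
    return sum(abs(theory(x) - y) for x, y in data)

# toy "experimental data" generated by the hidden rule y = x + 2
data = [(x, x + 2) for x in range(5)]

best = min((random_theory() for _ in range(200)),
           key=lambda t: fitness(t, data))
print(fitness(best, data))  # a good candidate should reach zero error
```

A real GP system would represent theories as expression trees and evolve a population via selection, crossover, and mutation; the sketch only shows the generate-and-score loop at the heart of that process.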
Pioneering work in the 1940s and 1950s suggested that the concept of chunking might be important in many processes of perception, learning and cognition in humans and animals. We summarize here the major sources of evidence for chunking mechanisms, and consider how such mechanisms have been implemented in computational models of the learning process. We distinguish two forms of chunking: the first deliberate, under strategic control, and goal-oriented; the second automatic, continuous, and linked to perceptual processes. Recent work with discrimination-network computational models of long- and short-term memory (EPAM/CHREST) has produced a diverse range of applications of perceptual chunking. We focus on recent successes in verbal learning, expert memory, language acquisition and learning multiple representations, to illustrate the implementation and use of chunking mechanisms within contemporary models of human learning.
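The discrimination-network mechanism mentioned above can be sketched in miniature: repeated exposure to a pattern grows a chain of test links, and the chunk stored at each node is extended one element per exposure. This is a heavily simplified, hypothetical rendering, not the EPAM/CHREST code; the class names and the chess-like pattern are invented for illustration.

```python
# Minimal sketch of perceptual chunking in a discrimination network,
# in the spirit of EPAM/CHREST but hypothetical and much simplified:
# each exposure extends the learned chunk by one element (a crude
# stand-in for the discrimination and familiarisation processes).

class Node:
    def __init__(self):
        self.image = []     # the chunk stored at this node
        self.children = {}  # test links keyed by the next element

class DiscriminationNet:
    def __init__(self):
        self.root = Node()

    def recognise(self, pattern):
        """Sort the pattern down the net; return the deepest matching node."""
        node = self.root
        for elem in pattern:
            if elem not in node.children:
                break
            node = node.children[elem]
        return node

    def learn(self, pattern):
        """Add one test link below the deepest match, extending the chunk."""
        node = self.recognise(pattern)
        depth = len(node.image)
        if depth < len(pattern):
            child = Node()
            child.image = pattern[:depth + 1]
            node.children[pattern[depth]] = child

net = DiscriminationNet()
for _ in range(3):  # repeated exposure grows the chunk element by element
    net.learn(["N", "f3", "g1"])
print(net.recognise(["N", "f3", "g1"]).image)  # -> ['N', 'f3', 'g1']
```

The one-element-per-exposure limit is the point of the sketch: chunks are built incrementally through repeated perception, which is why large chunk inventories take years of practice to acquire.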
This volume offers selected papers exploring issues arising from scientific discovery in the social sciences. It features a range of disciplines including behavioural sciences, computer science, finance, and statistics with an emphasis on philosophy. The first of the three parts examines methods of social scientific discovery. Chapters investigate the nature of causal analysis, philosophical issues around scale development in behavioural science research, imagination in social scientific practice, and relationships between paradigms of inquiry and scientific fraud. The next part considers the practice of social science discovery. Chapters discuss the lack of genuine scientific discovery in finance where hypotheses concern the cheapness of securities, the logic of scientific discovery in macroeconomics, and the nature of discovery, with the Solidarity movement as a case study. The final part covers formalising theories in social science. Chapters analyse the abstract model theory of institutions as a way of representing the structure of scientific theories, the semi-automatic generation of cognitive science theories, and computational process models in the social sciences. The volume offers a unique perspective on scientific discovery in the social sciences. It will engage scholars and students with a multidisciplinary interest in the philosophy of science and social science.
Not only has knowledge been a central topic in philosophy, at least since Greek antiquity, but in recent years, it has been a prominent issue in the study of expertise. An important aspect of education is transmission of knowledge. This chapter discusses three views of expertise that have something important to say about the philosophical issues. It first briefly reviews the issue of defining and identifying expertise and the philosophical debate around knowing‐how and knowing‐that. After presenting the key assumptions made by the three views on expertise, the chapter compares them along six philosophical dimensions: rationality, knowledge, intuition, introspection, deliberation and artificial intelligence. It focuses on performance‐based expertise (p‐expertise), drawing examples mostly from chess, where an objective and widely‐used measure of skill is available. Traditionally, philosophers have adopted a position known as intellectualism: explicit knowledge (aka knowing‐that) is the primary form of knowledge, and tacit knowledge (knowing‐how) derives from it.
Newell argued that progress in psychology was slow because research focused on experiments trying to answer binary questions, such as serial versus parallel processing. In addition, not enough attention was paid to the strategies used by participants, and there was a lack of theories implemented as computer models offering sufficient precision for being tested rigorously. He proposed a three-headed research program: to develop computational models able to carry out the task they aimed to explain; to study one complex task in detail, such as chess; and to build computational models that can account for multiple tasks. This article assesses the extent to which the papers in this issue advance Newell's program. While half of the papers devote much attention to strategies, several papers still average across them, a cardinal sin according to Newell. The three courses of action he proposed were not popular in these papers: Only two papers used computational models, with no model being both able to carry out the task and to account for human data; there was no systematic analysis of a specific video game; and no paper proposed a computational model accounting for human data in several tasks. It is concluded that, while they use sophisticated methods of analysis and discuss interesting results, overall these papers contribute only little to Newell's program of research. In this respect, they reflect the current state of psychology and cognitive science. This is a shame, as Newell's ideas might help address the current crisis of lack of replication and fraud in psychology.
We discuss the relation of the Theory of Event Coding (TEC) to a computational model of expert perception, CHREST, based on the chunking theory. TEC's status as a verbal theory leaves several questions unanswerable, such as the precise nature of internal representations used, or the degree of learning required to obtain a particular level of competence: CHREST may help answer such questions.