The human cognitive architecture consists of a set of largely independent modules associated with different brain regions. This book discusses in detail how these various modules can combine to produce behaviours as varied as driving a car and solving an algebraic equation.
Multi-voxel pattern recognition techniques combined with hidden Markov models can be used to discover the mental states that people go through in performing a task. The combined method identifies both the mental states and how their durations vary with experimental conditions. We apply this method to a task in which participants solve novel mathematical problems. We identify four states in the solution of these problems: Encoding, Planning, Solving, and Responding. The method allows us to interpret what participants are doing on individual problem-solving trials. The duration of the planning state varies on a trial-to-trial basis with the novelty of the problem. The duration of the solving state similarly varies with the amount of computation needed to produce a solution once a plan is devised. The duration of the responding state varies with the complexity of the answer produced. In addition, we identified a number of effects that ran counter to a prior model of the task. Thus, we were able to decompose the overall problem-solving time into estimates of its components in a way that serves to guide theory.
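For concreteness, a minimal sketch of the state-discovery idea follows: fit a Gaussian hidden Markov model to trial-by-trial activity patterns and read state durations off the decoded state sequence. It uses the hmmlearn package and random placeholder data; the four-state labels, preprocessing, and all settings are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: recover latent task states from activity patterns with a
# Gaussian HMM. Real multi-voxel pattern features are stubbed with noise.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
n_scans, n_features = 200, 5                      # scans in a trial, pattern features
X = rng.normal(size=(n_scans, n_features))        # placeholder for real pattern data

# Four hypothesized states: encoding, planning, solving, responding.
model = GaussianHMM(n_components=4, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(X)

states = model.predict(X)                         # most likely state per scan
# Duration of each state on this trial = number of scans assigned to it
# (multiply by the scan length to convert counts to seconds).
durations = {state: int(np.sum(states == state)) for state in range(4)}
print(durations)
```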
Newell proposed that cognitive theories be developed in an effort to satisfy multiple criteria and to avoid theoretical myopia. He provided two overlapping lists of 13 criteria that the human cognitive architecture would have to satisfy in order to be functional. We have distilled these into 12 criteria: flexible behavior, real-time performance, adaptive behavior, vast knowledge base, dynamic behavior, knowledge integration, natural language, consciousness, learning, development, evolution, and brain realization. There would be greater theoretical progress if we evaluated theories by a broad set of criteria such as these and attended to the weaknesses such evaluations revealed. To illustrate how theories can be evaluated, we apply these criteria to both classical connectionism and the ACT-R theory. The strengths of classical connectionism on this test derive from its intense effort in addressing empirical phenomena in such domains as language and cognitive development. Its weaknesses derive from its failure to acknowledge a symbolic level to thought. In contrast, ACT-R includes both symbolic and subsymbolic components. The strengths of the ACT-R theory derive from its tight integration of the symbolic component with the subsymbolic component. Its weaknesses largely derive from its failure, as yet, to adequately engage in intensive analyses of issues related to certain criteria on Newell's list. Key Words: cognitive architecture; connectionism; hybrid systems; language; learning; symbolic systems.
Cognitive architectures are theories of cognition that try to capture the essential representations and mechanisms underlying cognition. Research on cognitive architectures has gradually moved from a focus on the functional capabilities of architectures to the ability to model the details of human behavior and, more recently, brain activity. Although there are many different architectures, they share many identical or similar mechanisms, permitting possible future convergence. In judging the quality of a particular cognitive model, it is pertinent to judge not just its fit to the experimental data but also its simplicity and its ability to make predictions.
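As a rough illustration of weighing fit against simplicity, the following sketch scores two hypothetical models of the same data with the Bayesian Information Criterion, which penalizes free parameters. The data, predictions, and parameter counts are invented for the example and are not drawn from any particular architecture.

```python
# Illustrative only: compare two made-up models by combining goodness of fit
# with a simplicity penalty (BIC).
import numpy as np

def bic(observed, predicted, n_free_params):
    """Bayesian Information Criterion computed from the residual sum of squares."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    n = len(observed)
    rss = np.sum((observed - predicted) ** 2)
    return n * np.log(rss / n) + n_free_params * np.log(n)

# Hypothetical mean latencies (seconds) and two models' predictions.
data          = np.array([1.20, 1.50, 1.90, 2.40, 3.00])
simple_model  = np.array([1.10, 1.60, 2.00, 2.50, 2.90])   # 2 free parameters
complex_model = np.array([1.19, 1.52, 1.88, 2.41, 2.99])   # 5 free parameters

# Lower BIC is better: a closer fit is rewarded, but each free parameter
# adds a penalty of ln(n).
print(bic(data, simple_model, 2), bic(data, complex_model, 5))
```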
We present an account of processing capacity in the ACT-R theory. At the symbolic level, the number of chunks in the current goal provides a measure of relational complexity. At the subsymbolic level, limits on spreading activation, measured by the attentional parameter W, provide a theory of processing capacity that has been applied to data on performance, learning, and individual differences.
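A short sketch of how W enters the standard ACT-R activation equation may help: the source activation W is divided among the n elements of the current goal, so a chunk's activation is A_i = B_i + sum_j (W/n) * S_ji. The base-level activations and association strengths below are invented for illustration.

```python
def activation(base_level, assoc_strengths, W=1.0):
    """Chunk activation A_i = B_i + sum_j (W / n) * S_ji over the n goal elements."""
    n = len(assoc_strengths)
    return base_level + sum((W / n) * s for s in assoc_strengths)

# With more elements in the goal (greater relational complexity), each source
# receives less of the fixed attentional resource W, so a relevant chunk
# gets a smaller boost from spreading activation.
print(activation(base_level=0.3, assoc_strengths=[1.5, 2.0]))              # 2 goal chunks
print(activation(base_level=0.3, assoc_strengths=[1.5, 2.0, 0.0, 0.0]))    # 4 goal chunks
```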
Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem property such as the computational difficulty of the rules biased the search process and so affected learning. Experiment 2 examined the impact of examples as instructional tools and found that their effectiveness was determined by whether they uniquely pointed to the correct rule. Experiment 3 compared verbal directions with examples and found that both could guide search. The final experiment tried to improve learning by using more explicit verbal directions or by adding scaffolding to the example. While both manipulations improved learning, learning still took the form of a search through a hypothesis space of possible rules. We describe a model that embodies two assumptions: the instruction can bias which rules participants hypothesize rather than being directly encoded into a rule, and participants do not retain memory for past wrong hypotheses and so are likely to retry them. These assumptions are realized in a Markov model that fits all the data by estimating two sets of probabilities. First, the learning condition induces a set of Start probabilities of trying various rules on the first hypothesis. Second, should this first hypothesis prove wrong, the learning condition induces a second set of Choice probabilities of considering various rules on subsequent attempts. These findings broaden our understanding of effective instruction and provide implications for instructional design.
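To make the two-probability account concrete, here is a minimal simulation of such a Markov search process: a Start distribution selects the first hypothesized rule, and a Choice distribution selects each subsequent one after negative feedback, with no memory for rejected rules. The rule labels and probabilities are made up for illustration and are not the values estimated from the experiments.

```python
# Minimal simulation (not the fitted model) of hypothesis search with
# Start and Choice probabilities and no memory for rejected rules.
import random

START  = {"correct": 0.2, "wrong_A": 0.5, "wrong_B": 0.3}     # made-up Start probabilities
CHOICE = {"correct": 0.4, "wrong_A": 0.3, "wrong_B": 0.3}     # made-up Choice probabilities

def sample(dist):
    """Draw one rule label from a probability distribution over rules."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def attempts_to_solution():
    """Number of hypotheses tried before hitting the correct rule."""
    rule, attempts = sample(START), 1
    while rule != "correct":          # wrong rules can be retried (no memory)
        rule, attempts = sample(CHOICE), attempts + 1
    return attempts

runs = [attempts_to_solution() for _ in range(10_000)]
print(sum(runs) / len(runs))          # mean number of hypotheses under these settings
```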