Tested the 2-process theory of detection, search, and attention presented by the current authors in a series of experiments. The studies demonstrate the qualitative difference between 2 modes of information processing: automatic detection and controlled search; trace the course of the learning of automatic detection, of categories, and of automatic-attention responses; and show the dependence of automatic detection on attending responses and demonstrate how such responses interrupt controlled processing and interfere with the focusing of attention. The learning of categories is shown to improve controlled search performance. A general framework for human information processing is proposed. The framework emphasizes the roles of automatic and controlled processing. The theory is compared to and contrasted with extant models of search and attention.
Previous research shows that people can use the co-occurrence of words and objects in ambiguous situations (i.e., containing multiple words and objects) to learn word meanings during a brief passive training period (Yu & Smith, 2007). However, learners in the world are not completely passive but can affect how their environment is structured by moving their heads, eyes, and even objects. These actions can indicate attention to a language teacher, who may then be more likely to name the attended objects. Using a novel active learning paradigm in which learners choose which four objects they would like to see named on each successive trial, this study asks whether active learning is superior to passive learning in a cross-situational word learning context. Finding that learners perform better in active learning, we investigate the strategies and discover that most learners use immediate repetition to disambiguate pairings. Unexpectedly, we find that learners who repeat only one pair per trial—an easy way to infer this pair—perform worse than those who repeat multiple pairs per trial. Using a working memory extension to an associative model of word learning with uncertainty and familiarity biases, we investigate individual differences that correlate with these assorted strategies.
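The abstract does not spell out the associative model's equations, so the following Python sketch is only a rough illustration of how familiarity and uncertainty biases can jointly drive cross-situational learning: a fixed budget of associative strength is split among the word-object pairs presented on a trial, weighted by prior pair strength (familiarity) scaled by the entropy of each word's existing associations (uncertainty). The function name, the parameters `chi` and `lam`, and the exact weighting scheme are assumptions for illustration, and the working-memory extension is omitted.

```python
import numpy as np

def csl_trial(assoc, words, objects, chi=0.1, lam=1.0):
    """One trial of a toy cross-situational learner: a fixed budget of
    associative strength (chi) is divided among the presented word-object
    pairs in proportion to prior strength (familiarity bias), scaled by
    the entropy of each word's associations (uncertainty bias).
    Illustrative assumptions only, not the published model's fit."""
    def entropy(p):
        p = p / p.sum()
        return -(p * np.log(p + 1e-12)).sum()

    # Attention weights: stronger and more uncertain pairings attract more learning.
    weights = np.zeros((len(words), len(objects)))
    for i, w in enumerate(words):
        for j, o in enumerate(objects):
            weights[i, j] = assoc[w, o] * np.exp(lam * entropy(assoc[w]))
    weights /= weights.sum()

    # Distribute the trial's associative budget across the presented pairs.
    for i, w in enumerate(words):
        for j, o in enumerate(objects):
            assoc[w, o] += chi * weights[i, j]
    return assoc

# Usage: 6 candidate words x 6 candidate objects, small uniform baseline,
# two trials of four word-object pairs each (as in the paradigm described above).
assoc = np.full((6, 6), 0.01)
for words, objects in [([0, 1, 2, 3], [0, 1, 2, 3]), ([0, 4, 5, 1], [0, 4, 5, 1])]:
    assoc = csl_trial(assoc, words, objects)
print(assoc.round(3))
```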
Models of recognition memory have traditionally struggled with the puzzle of criterion setting, a problem that is particularly acute in cases in which items for study and test are of widely varying types, with differing degrees of baseline familiarity and experience (e.g., words vs. random dot patterns). We present a dynamic model of the recognition process that addresses the criterion setting problem and produces joint predictions for choice and reaction time. In this model, recognition decisions are based not on the absolute value of familiarity, but on how familiarity changes over time as features are sampled from the test item. Decisions are the outcome of a race between two parallel accumulators: one that accumulates positive changes in familiarity (leading to an "old" decision) and another that accumulates negative changes (leading to a "new" decision). Simulations with this model make realistic predictions for recognition performance and latency regardless of the baseline familiarity of study and test items.
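To make the race mechanism concrete, here is a minimal Python simulation under assumed parameters: momentary familiarity changes as features are sampled, positive changes feed an "old" accumulator and negative changes a "new" accumulator, and the first accumulator to reach threshold determines the response, with the sample count standing in for latency. The drift, noise, and threshold values are illustrative assumptions, not fitted parameters of the published model.

```python
import random

def race_recognition(studied, drift=0.2, noise=1.0, threshold=5.0, max_samples=1000):
    """Toy race between two accumulators driven by *changes* in familiarity,
    not its absolute value. Parameter values are illustrative assumptions."""
    old_acc = new_acc = 0.0
    mu = drift if studied else -drift   # studied items tend to gain familiarity
    for t in range(1, max_samples + 1):
        delta = random.gauss(mu, noise) # change in familiarity from one feature sample
        if delta > 0:
            old_acc += delta            # evidence for an "old" decision
        else:
            new_acc -= delta            # evidence for a "new" decision
        if old_acc >= threshold:
            return "old", t             # choice and latency (in samples)
        if new_acc >= threshold:
            return "new", t
    return "new", max_samples           # deadline reached: default to "new"

# Quick check: studied items should mostly elicit "old" responses.
responses = [race_recognition(studied=True)[0] for _ in range(1000)]
print("hit rate:", responses.count("old") / len(responses))
```

Because both accumulators respond to changes rather than raw familiarity, the same thresholds apply whether the test items are high-baseline (words) or low-baseline (random dot patterns), which is how the model sidesteps criterion setting.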
This commentary gives a personal perspective on modeling and modeling developments in cognitive science, starting in the 1950s, but focusing on the author's personal views of modeling since training in the late 1960s, and particularly on advances since the official founding of the Cognitive Science Society. The range and variety of modeling approaches in use today are remarkable, and for many, bewildering. Yet to come to anything approaching adequate insights into the infinitely complex fields of mind, brain, and intelligent systems, an extremely wide array of modeling approaches is vital and necessary.
When constrained by limited resources, how do we choose axioms of rationality? The target article relies on Bayesian reasoning that encounters serious tractability problems. We propose another axiomatic foundation: quantum probability theory, which provides less complex and more comprehensive descriptions. More generally, defining rationality in terms of axiomatic systems misses a key issue: rationality must be defined by humans facing vague information.
We argue that an approach that treats short-term memory as activated long-term memory is not inherently in conflict with information recycling in a limited-capacity or working-memory store, or with long-term storage based on the processing in such a store. Language differences aside, real model differences can only be assessed when the contrasting models are formulated precisely.
Colman shows that normative theories of rational decision-making fail to produce rational decisions in simple interactive games. I suggest that well-formed theories are possible in local settings, keeping in mind that a good part of each game is the generation of a rational approach appropriate for that game. The key is rationality defined in terms of the game, not individual decisions.