Most models of response time (RT) in elementary cognitive tasks implicitly assume that the speed-accuracy trade-off is continuous: When payoffs or instructions gradually increase the level of speed stress, people are assumed to gradually sacrifice response accuracy in exchange for gradual increases in response speed. This trade-off presumably operates over the entire range from accurate but slow responding to fast but chance-level responding (i.e., guessing). In this article, we challenge the assumption of continuity and propose a phase transition model for RTs and accuracy. Analogous to the fast guess model (Ollman, 1966), our model postulates two modes of processing: a guess mode and a stimulus-controlled mode. From catastrophe theory, we derive two important predictions that allow us to test our model against the fast guess model and against the popular class of sequential sampling models. The first prediction—hysteresis in the transitions between guessing and stimulus-controlled behavior—was confirmed in an experiment that gradually changed the reward for speed versus accuracy. The second prediction—bimodal RT distributions—was confirmed in an experiment that required participants to respond in a way that is intermediate between guessing and accurate responding.
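The hysteresis prediction follows from the cusp catastrophe, whose standard potential is V(x) = x⁴/4 − b·x²/2 − a·x. A minimal sketch of how hysteresis arises, assuming a standard cusp with hypothetical parameter values (x as the behavioral state, a as the speed-versus-accuracy payoff, b as a fixed splitting factor):

```python
def cusp_equilibrium(x0, a, b, lr=0.01, steps=4000):
    """Follow a local minimum of the cusp potential
    V(x) = x**4/4 - b*x**2/2 - a*x by gradient descent from x0."""
    x = x0
    for _ in range(steps):
        x -= lr * (x**3 - b * x - a)  # dV/dx
    return x

def sweep(a_values, b=2.0):
    """Sweep the control parameter a, restarting each descent from
    the previous equilibrium: the state is history-dependent."""
    states, x = [], 0.0
    for a in a_values:
        x = cusp_equilibrium(x, a, b)
        states.append(x)
    return states

a_up = [-1.5 + 0.05 * i for i in range(61)]  # gradually reward speed less, accuracy more
up = sweep(a_up)          # increasing sweep stays on one branch past a = 0
down = sweep(a_up[::-1])  # decreasing sweep stays on the other branch
```

At intermediate payoffs (e.g., a = 0) the two sweeps occupy different stable branches of the cusp surface, so the mode of responding depends on the direction of the payoff change — the hysteresis tested in the first experiment.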
After more than 15 years of study, the 1/f noise or complex-systems approach to cognitive science has delivered promises of progress, colorful verbiage, and statistical analyses of phenomena whose relevance for cognition remains unclear. What the complex-systems approach has arguably failed to deliver are concrete insights about how people perceive, think, decide, and act. Without formal models that implement the proposed abstract concepts, the complex-systems approach to cognitive science runs the danger of becoming a philosophical exercise in futility. The complex-systems approach can be informative and innovative, but only if it is implemented as a formal model that allows concrete prediction, falsification, and comparison against more traditional approaches.
Jones & Love (J&L) suggest that Bayesian approaches to the explanation of human behavior should be constrained by mechanistic theories. We argue that their proposal misconstrues the relation between process models, such as the Bayesian model, and mechanisms. While mechanistic theories can address specific issues that arise from the study of processes, they cannot be expected to provide constraints in general.
For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates—sometimes error rates increase with the number of choice alternatives, and sometimes they are constant. We provide evidence from two experiments that error rates are mostly independent of the number of choice alternatives, unless context effects induce participants to trade speed for accuracy across conditions. Error rate data have previously been used to discriminate between competing theoretical accounts of Hick's Law, and our results question the validity of those conclusions. We show that a previously dismissed optimal observer model might provide a parsimonious account of both response time and error rate data. The model suggests that people approximate Bayesian inference in multi-alternative choice, except for some perceptual limitations.
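The log-linear regularity referred to here is commonly written RT = a + b·log₂(N + 1), with N the number of equiprobable alternatives. A minimal sketch, with illustrative (not fitted) intercept and slope values:

```python
import math

def hick_rt(n, a=0.2, b=0.15):
    """Predicted mean response time (seconds) for a choice among n
    equiprobable alternatives under Hick's Law: RT = a + b*log2(n + 1).
    The intercept a and slope b are hypothetical values for illustration."""
    return a + b * math.log2(n + 1)

# Each doubling of (n + 1) adds a constant b seconds to the predicted RT.
predictions = {n: round(hick_rt(n), 3) for n in (1, 3, 7, 15)}
```

Under these assumed parameters the predicted RT rises by the same increment (b seconds) each time the number of alternatives doubles, which is the log-linear signature the experiments probe.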
The probabilistic approach to human reasoning is exemplified by the information gain model for the Wason card selection task. Although the model is elegant and original, several key aspects of the model warrant further discussion, particularly those concerning the scope of the task and the choice process of individuals.