Most models of response time (RT) in elementary cognitive tasks implicitly assume that the speed-accuracy trade-off is continuous: When payoffs or instructions gradually increase the level of speed stress, people are assumed to gradually sacrifice response accuracy in exchange for gradual increases in response speed. This trade-off presumably operates over the entire range from accurate but slow responding to fast but chance-level responding (i.e., guessing). In this article, we challenge the assumption of continuity and propose a phase transition model for RTs and accuracy. Analogous to the fast guess model (Ollman, 1966), our model postulates two modes of processing: a guess mode and a stimulus-controlled mode. From catastrophe theory, we derive two important predictions that allow us to test our model against the fast guess model and against the popular class of sequential sampling models. The first prediction—hysteresis in the transitions between guessing and stimulus-controlled behavior—was confirmed in an experiment that gradually changed the reward for speed versus accuracy. The second prediction—bimodal RT distributions—was confirmed in an experiment that required participants to respond in a way that is intermediate between guessing and accurate responding.
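The bimodality prediction can be illustrated with a minimal simulation sketch (all parameter values below are hypothetical, chosen only for illustration): mixing fast guesses with slower stimulus-controlled responses yields an RT distribution with two separated peaks rather than a single intermediate mode.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
p_guess = 0.5  # probability of entering the guess mode (hypothetical)

# Guess mode: fast responses at chance accuracy (illustrative lognormal RTs, ~0.20 s)
guess_rt = rng.lognormal(mean=np.log(0.20), sigma=0.15, size=n)
# Stimulus-controlled mode: slower, accurate responses (~0.60 s)
controlled_rt = rng.lognormal(mean=np.log(0.60), sigma=0.15, size=n)

is_guess = rng.random(n) < p_guess
rt = np.where(is_guess, guess_rt, controlled_rt)

# A histogram of the mixture shows two modes, near 0.20 s and 0.60 s,
# with a pronounced dip between them
hist, edges = np.histogram(rt, bins=50, range=(0.0, 1.2))
```

Under a continuous trade-off, by contrast, intermediate speed stress would shift a single mode along the RT axis instead of reallocating mass between two fixed modes.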
The field of psychology, including cognitive science, is vexed by a crisis of confidence. Although the causes and solutions are varied, we focus here on a common logical problem in inference. The default mode of inference is significance testing, which has a free lunch property where researchers need not make detailed assumptions about the alternative to test the null hypothesis. We present the argument that there is no free lunch; that is, valid testing requires that researchers test the null against a well-specified alternative. We show how this requirement follows from the basic tenets of conventional and Bayesian probability. Moreover, we show in both the conventional and Bayesian framework that not specifying the alternative may lead to rejections of the null hypothesis with scant evidence. We review both frequentist and Bayesian approaches to specifying alternatives, and we show how such specifications improve inference. The field of cognitive science will benefit because consideration of reasonable alternatives will undoubtedly sharpen the intellectual underpinnings of research.
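The point that a "significant" result can carry scant evidence is easy to demonstrate numerically. The sketch below (a standard textbook construction, not the authors' own analysis; the normal prior on the effect is an assumed specification of the alternative) fixes the test statistic at z = 2.0, which yields roughly p = .046 regardless of sample size, and computes the Bayes factor for the null against that alternative as n grows:

```python
import math

def norm_pdf(x, sd):
    """Density of a zero-mean normal with standard deviation sd."""
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf01(z, n, prior_sd=1.0):
    """Bayes factor for H0: mu = 0 versus H1: mu ~ N(0, prior_sd^2),
    given a sample mean xbar = z / sqrt(n) with known unit variance.
    prior_sd is an assumed choice specifying the alternative."""
    xbar = z / math.sqrt(n)
    m0 = norm_pdf(xbar, math.sqrt(1.0 / n))                  # marginal under H0
    m1 = norm_pdf(xbar, math.sqrt(prior_sd**2 + 1.0 / n))    # marginal under H1
    return m0 / m1

z = 2.0  # two-sided p ~= .046 at every sample size
for n in (10, 1_000, 100_000):
    print(n, bf01(z, n))
```

The same p-value that licenses "reject the null" yields, at large n, a Bayes factor that increasingly favors the null over this alternative (the Jeffreys-Lindley paradox), illustrating why inference sharpens once the alternative is made explicit.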
After more than 15 years of study, the 1/f noise or complex-systems approach to cognitive science has delivered promises of progress, colorful verbiage, and statistical analyses of phenomena whose relevance for cognition remains unclear. What the complex-systems approach has arguably failed to deliver are concrete insights about how people perceive, think, decide, and act. Without formal models that implement the proposed abstract concepts, the complex-systems approach to cognitive science runs the danger of becoming a philosophical exercise in futility. The complex-systems approach can be informative and innovative, but only if it is implemented as a formal model that allows concrete prediction, falsification, and comparison against more traditional approaches.
Jones & Love (J&L) suggest that Bayesian approaches to the explanation of human behavior should be constrained by mechanistic theories. We argue that their proposal misconstrues the relation between process models, such as the Bayesian model, and mechanisms. While mechanistic theories can answer specific issues that arise from the study of processes, one cannot expect them to provide constraints in general.
For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates—sometimes error rates increase with the number of choice alternatives, and sometimes they are constant. We provide evidence from two experiments that error rates are mostly independent of the number of choice alternatives, unless context effects induce participants to trade speed for accuracy across conditions. Error rate data have previously been used to discriminate between competing theoretical accounts of Hick's Law, and our results question the validity of those conclusions. We show that a previously dismissed optimal observer model might provide a parsimonious account of both response time and error rate data. The model suggests that people approximate Bayesian inference in multi-alternative choice, except for some perceptual limitations.
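The log-linear relation in Hick's Law can be sketched concretely. In its common form, mean RT grows with the information content of the choice, log2(n + 1) bits for n equiprobable alternatives; the intercept and slope below are hypothetical values for illustration only:

```python
import math

def hick_rt(n_alternatives, a=0.2, b=0.15):
    """Hick's Law in its common form: mean RT is log-linear in the
    number of equiprobable alternatives. a (intercept, s) and b
    (slope, s per bit) are hypothetical illustrative values."""
    return a + b * math.log2(n_alternatives + 1)

# Each doubling of the alternatives adds roughly one bit,
# hence a roughly constant increment in RT
for n in (2, 4, 8):
    print(n, round(hick_rt(n), 3))
```

The empirical question raised in the abstract concerns what happens to error rates, not RTs, as n_alternatives grows; the law itself constrains only the response-time side.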
The probabilistic approach to human reasoning is exemplified by the information gain model for the Wason card selection task. Although the model is elegant and original, several key aspects of the model warrant further discussion, particularly those concerning the scope of the task and the choice process of individuals.
Throughout the biological and biomedical sciences, prescriptive 'minimum information' (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such checklists are usually developed independently by groups working within particular biologically- or technologically-delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the individual evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and determining where they overlap is far from straightforward. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult, inhibiting their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations): a web-based communal resource for such checklists, designed to act as a 'one-stop shop' for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.