The stop-signal paradigm is frequently used to study response inhibition. In this paradigm, participants perform a two-choice response time task in which the primary task is occasionally interrupted by a stop signal that prompts participants to withhold their response. The primary goal is to estimate the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Recently, Matzke, Dolan, Logan, Brown, and Wagenmakers (in press) developed a Bayesian parametric approach that allows for the estimation of the entire distribution of SSRTs. The Bayesian parametric approach assumes that SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to estimate the parameters of the SSRT distribution. Here we present an efficient and user-friendly software implementation of the Bayesian parametric approach, BEESTS, which can be applied to individual as well as hierarchical stop-signal data. BEESTS comes with an easy-to-use graphical user interface and provides users with summary statistics of the posterior distribution of the parameters, as well as various diagnostic tools to assess the quality of the parameter estimates. The software is open source and runs on Windows and OS X operating systems. In sum, BEESTS allows experimental and clinical psychologists to estimate entire distributions of SSRTs and hence facilitates more rigorous analysis of stop-signal data.
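The ex-Gaussian assumption above can be illustrated with a minimal simulation sketch: an ex-Gaussian variate is the sum of a normal component (mean mu, standard deviation sigma) and an exponential component (mean tau), so its mean is mu + tau and its variance is sigma^2 + tau^2. The parameter values below are hypothetical, chosen only for illustration; they are not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ex-Gaussian parameters (in ms), for illustration only.
mu, sigma, tau = 200.0, 30.0, 50.0

# An ex-Gaussian draw is a normal draw plus an independent exponential draw.
n = 100_000
ssrt = rng.normal(mu, sigma, size=n) + rng.exponential(tau, size=n)

# Sample moments should be close to the theoretical mean (mu + tau = 250)
# and variance (sigma**2 + tau**2 = 3400).
print(round(ssrt.mean(), 1))
print(round(ssrt.var(), 1))
```

A Bayesian treatment would place priors on mu, sigma, and tau and sample their posterior with MCMC, which is what BEESTS automates; the simulation above only shows the distributional assumption itself.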
Decision-making deficits in clinical populations are often assessed with the Iowa gambling task (IGT). Performance on this task is driven by latent psychological processes, the assessment of which requires an analysis using cognitive models. Two popular examples of such models are the Expectancy Valence (EV) and Prospect Valence Learning (PVL) models. These models have recently been subjected to sophisticated procedures of model checking, spawning a hybrid version of the EV and PVL models: the PVL-Delta model. In order to test the validity of the PVL-Delta model we present a parameter space partitioning (PSP) study and a test of selective influence. The PSP study allows one to assess the choice patterns that the PVL-Delta model generates across its entire parameter space. The PSP study revealed that the model accounts for empirical choice patterns featuring a preference for the good decks or the decks with infrequent losses; however, the model fails to account for empirical choice patterns featuring a preference for the bad decks. The test of selective influence investigates the effectiveness of experimental manipulations designed to target only a single model parameter. This test showed that the manipulations were successful for all but one parameter. To conclude, despite a few shortcomings, the PVL-Delta model seems to be a better IGT model than the popular EV and PVL models.
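As commonly described in the literature, the PVL-Delta model combines a prospect-theory utility function, a delta learning rule, and a softmax choice rule. The sketch below shows those three components in simulation; the parameter values and the simplified deck payoffs are hypothetical placeholders, not the IGT payoff schedule or fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PVL-Delta parameters: utility shape A, loss aversion w,
# learning rate a, and choice consistency c (assumed for illustration).
A, w, a, c = 0.5, 1.5, 0.3, 1.0

def utility(x):
    """Prospect utility: concave for gains, loss-averse for losses."""
    return x ** A if x >= 0 else -w * (abs(x) ** A)

ev = np.zeros(4)       # expected valence of the four decks
theta = 3 ** c - 1     # sensitivity of choices to expected valence

for trial in range(100):
    # Softmax choice rule over the four decks.
    p = np.exp(theta * ev) / np.exp(theta * ev).sum()
    deck = rng.choice(4, p=p)
    # Placeholder payoffs: "bad" decks 0-1 mix large gains with larger
    # losses, "good" decks 2-3 mix smaller gains and losses.
    payoff = rng.choice([100, -250]) if deck < 2 else rng.choice([50, -50])
    # Delta rule: move the chosen deck's valence toward the obtained utility.
    ev[deck] += a * (utility(payoff) - ev[deck])

print(np.round(ev, 2))
```

A PSP study of this model would, roughly, sample (A, w, a, c) across their ranges, simulate choices as above, and classify the resulting deck-preference patterns.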
For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates: sometimes error rates increase with the number of choice alternatives, and sometimes they are constant. We provide evidence from two experiments that error rates are mostly independent of the number of choice alternatives, unless context effects induce participants to trade speed for accuracy across conditions. Error rate data have previously been used to discriminate between competing theoretical accounts of Hick's Law, and our results question the validity of those conclusions. We show that a previously dismissed optimal observer model might provide a parsimonious account of both response time and error rate data. The model suggests that people approximate Bayesian inference in multi-alternative choice, except for some perceptual limitations.
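The log-linear relation named above can be written down directly. One common formulation of Hick's Law is RT = a + b * log2(N + 1), where N is the number of alternatives; the intercept a and slope b below are hypothetical values for illustration.

```python
import math

# Hypothetical intercept and slope (in seconds), for illustration only.
a, b = 0.2, 0.15

def predicted_rt(n_alternatives):
    # Hick's Law: mean RT grows with the log of the number of alternatives.
    # The +1 in one common formulation counts the option of not responding.
    return a + b * math.log2(n_alternatives + 1)

for n in (2, 4, 8):
    print(n, round(predicted_rt(n), 3))
```

Note the signature feature: doubling the number of alternatives adds a roughly constant increment to predicted RT, rather than doubling it.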
After more than 15 years of study, the 1/f noise or complex-systems approach to cognitive science has delivered promises of progress, colorful verbiage, and statistical analyses of phenomena whose relevance for cognition remains unclear. What the complex-systems approach has arguably failed to deliver are concrete insights about how people perceive, think, decide, and act. Without formal models that implement the proposed abstract concepts, the complex-systems approach to cognitive science runs the danger of becoming a philosophical exercise in futility. The complex-systems approach can be informative and innovative, but only if it is implemented as a formal model that allows concrete prediction, falsification, and comparison against more traditional approaches.
Most models of response time (RT) in elementary cognitive tasks implicitly assume that the speed-accuracy trade-off is continuous: When payoffs or instructions gradually increase the level of speed stress, people are assumed to gradually sacrifice response accuracy in exchange for gradual increases in response speed. This trade-off presumably operates over the entire range from accurate but slow responding to fast but chance-level responding (i.e., guessing). In this article, we challenge the assumption of continuity and propose a phase transition model for RTs and accuracy. Analogous to the fast guess model (Ollman, 1966), our model postulates two modes of processing: a guess mode and a stimulus-controlled mode. From catastrophe theory, we derive two important predictions that allow us to test our model against the fast guess model and against the popular class of sequential sampling models. The first prediction, hysteresis in the transitions between guessing and stimulus-controlled behavior, was confirmed in an experiment that gradually changed the reward for speed versus accuracy. The second prediction, bimodal RT distributions, was confirmed in an experiment that required participants to respond in a way that is intermediate between guessing and accurate responding.
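The hysteresis prediction can be illustrated with a generic cusp-catastrophe sketch (this is an illustrative toy, not the authors' fitted model). Behavior x settles into a local minimum of the cusp potential V(x) = x^4/4 - b*x^2/2 - a*x; here a loosely plays the role of the speed-accuracy payoff and b the splitting factor, with the two minima standing in for the guess mode and the stimulus-controlled mode. Sweeping a up and then back down, the state jumps between modes at different payoff values in the two directions.

```python
import numpy as np

b = 2.0  # splitting factor (hypothetical value); bistable when |a| is small

def settle(x, a, steps=2000, lr=0.01):
    """Follow the nearest local minimum of V by gradient descent on V."""
    for _ in range(steps):
        x -= lr * (x ** 3 - b * x - a)  # V'(x) = x^3 - b*x - a
    return x

sweep = np.linspace(-2.0, 2.0, 81)
x, up, down = -1.5, [], []
for a in sweep:           # increase the control parameter
    x = settle(x, a)
    up.append(x)
for a in sweep[::-1]:     # then decrease it again
    x = settle(x, a)
    down.append(x)
down = down[::-1]

# Hysteresis: in the bistable region the state depends on sweep direction,
# so the upward and downward trajectories disagree for the same a.
mismatch = max(abs(u - d) for u, d in zip(up, down))
print(round(mismatch, 2))
```

Outside the bistable region the two sweeps coincide; inside it they track different branches, which is the qualitative signature tested in the payoff-sweep experiment.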
The probabilistic approach to human reasoning is exemplified by the information gain model for the Wason card selection task. Although the model is elegant and original, several key aspects of the model warrant further discussion, particularly those concerning the scope of the task and the choice process of individuals.