Cognitive Science 36 (2):333-358 (2012)
Reinforcement learning approaches to cognitive modeling represent task acquisition as learning to choose the sequence of steps that accomplishes the task while maximizing a reward. However, an apparently unrecognized problem for modelers is choosing when, what, and how much to reward; that is, when (the moment: end of trial, end of subtask, or some other interval of task performance), what (the objective function: e.g., performance time or performance accuracy), and how much (the magnitude: binary, categorical, or continuous values). In this article, we explore the problem space of these three parameters in the context of a task whose completion entails some combination of 36 state–action pairs, where all intermediate states (i.e., after the initial state and prior to the end state) represent progressive but partial completion of the task. Different choices produce profoundly different learning paths and outcomes, with the strongest effect for moment. Unfortunately, there is little discussion in the literature of the effect of such choices. This absence is disappointing, as the choice of when, what, and how much needs to be made by a modeler for every learning model.
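The when/what/how-much parameterization can be made concrete with a toy sketch. This is my own illustration under stated assumptions, not the article's 36-state model: tabular Q-learning on a short linear task, where the hypothetical `make_reward` helper builds a reward function from a (moment, magnitude) choice, with "what" fixed to task completion.

```python
import random

N_STEPS = 6                      # toy task length (far smaller than the article's task)
ACTIONS = ["advance", "stray"]   # "advance" progresses the task; "stray" does not

def make_reward(moment, magnitude):
    """Build a reward function from one (when, how much) choice.

    moment:    'end'  -> pay only when the whole trial completes
               'step' -> pay at each completed subtask
    magnitude: 'binary'     -> flat 1.0
               'continuous' -> graded by proportion of task completed
    """
    def reward(state, next_state):
        progressed = next_state > state
        done = next_state == N_STEPS
        if not progressed:
            return 0.0
        if moment == "end" and not done:
            return 0.0
        return 1.0 if magnitude == "binary" else next_state / N_STEPS
    return reward

def q_learn(reward, episodes=2000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STEPS) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s < N_STEPS:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q[(s, b)])
            s2 = s + 1 if a == "advance" else s
            r = reward(s, s2)
            # No bootstrapping from the terminal state.
            target = r if s2 == N_STEPS else r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q
```

Comparing learned Q-tables or learning curves across the four (moment, magnitude) combinations reproduces, in miniature, the kind of comparison the article performs; with the "end" moment, for example, credit must propagate backward from the final state, so early states learn more slowly than under "step" rewards.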
Keywords: Adaptive behavior; Reinforcement learning; Strategy selection; Expected utility; Skill acquisition and learning; Choice; Expected value; Cognitive architecture
Added to index: 2012-01-19