Novelty and Inductive Generalization in Human Reinforcement Learning

Samuel J. Gershman & Yael Niv
Topics in Cognitive Science 7 (3):391-415 (2015)

Abstract

In reinforcement learning (RL), a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of RL in humans and animals. According to our view, the search for the best option is guided by abstract knowledge about the relationships between different options in an environment, resulting in greater search efficiency compared with traditional RL algorithms previously applied to human cognition. In two behavioral experiments, we test several predictions of our model, providing evidence that humans learn and exploit structured inductive knowledge to make predictions about novel options. In light of this model, we suggest a new interpretation of dopaminergic responses to novelty.
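To make the abstract's central idea concrete, here is a minimal, hypothetical Python sketch (not the authors' actual model): familiar options are learned with a standard delta-rule/TD(0) update, and a never-before-tried option is initialized from the group-level mean of the learned values rather than from a default of zero. The option names, reward statistics, and learning rate are all invented for illustration.

```python
import numpy as np

# Illustrative sketch only. A crude point-estimate stand-in for hierarchical
# Bayesian generalization: abstract (group-level) knowledge gathered from
# familiar options sets the initial value expectation for a novel option.

rng = np.random.default_rng(seed=0)

true_means = {"A": 1.0, "B": 0.8, "C": 0.9}  # hypothetical familiar options
alpha = 0.1                                   # learning rate

# Learn values of familiar options from noisy rewards via prediction errors.
V = {name: 0.0 for name in true_means}
for _ in range(500):
    name = rng.choice(list(true_means))       # sample an option to try
    reward = rng.normal(true_means[name], 0.5)
    V[name] += alpha * (reward - V[name])     # delta-rule (TD) update

# Inductive generalization: a novel option "D" inherits the group-level
# mean of experienced option values instead of a default value of 0.
V["D"] = float(np.mean(list(V.values())))
print({name: round(value, 2) for name, value in V.items()})
```

A full hierarchical Bayesian treatment would also represent uncertainty at each level and update the group-level distribution itself; the point estimate above is only meant to show how experience with some options can shape expectations about a novel one.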

Links

PhilArchive

Similar books and articles

The complexity of learning SUBSEQ(A). Stephen Fenner, William Gasarch & Brian Postow - 2009 - Journal of Symbolic Logic 74 (3):939-975.
