Beyond simple rule extraction: The extraction of planning knowledge from reinforcement learners

Abstract
This paper discusses learning in hybrid models that goes beyond simple rule extraction from backpropagation networks. Although simple rule extraction has received a great deal of research attention, further developing hybrid learning models that combine symbolic and subsymbolic knowledge and that learn autonomously requires studying the autonomous learning of both kinds of knowledge in integrated architectures. This paper describes knowledge extraction from neural reinforcement learning. It covers two approaches to extracting plan knowledge: the extraction of explicit, symbolic rules from neural reinforcement learning, and the extraction of complete plans. This work points toward a general framework for achieving the subsymbolic-to-symbolic transition in an integrated autonomous learning framework.
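To illustrate the general flavor of rule extraction from a reinforcement learner, the following minimal Python sketch shows one simple way symbolic condition-action rules could be read off a learned Q-table: keep a rule "IF state THEN action" only when that action's value clearly dominates the alternatives. This is an assumed, simplified illustration, not the paper's actual algorithm; the environment interface (env_step), the margin threshold, and the action set are all hypothetical.

    # Minimal illustrative sketch (assumed, not the paper's method):
    # tabular Q-learning followed by thresholded rule extraction.
    from collections import defaultdict
    import random

    ACTIONS = ["up", "down", "left", "right"]

    def q_learning(env_step, start_states, episodes=500,
                   alpha=0.1, gamma=0.9, eps=0.1):
        """Plain tabular Q-learning.
        env_step(state, action) -> (next_state, reward, done) is a
        hypothetical environment interface assumed for this sketch."""
        Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        for _ in range(episodes):
            s = random.choice(start_states)
            done = False
            while not done:
                # epsilon-greedy action selection
                a = (random.choice(ACTIONS) if random.random() < eps
                     else max(Q[s], key=Q[s].get))
                s2, r, done = env_step(s, a)
                best_next = 0.0 if done else max(Q[s2].values())
                Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
                s = s2
        return Q

    def extract_rules(Q, margin=0.2):
        """Turn a learned Q-table into symbolic condition-action rules:
        emit (state, action) only when the best action's value exceeds
        the runner-up by at least `margin`."""
        rules = []
        for state, action_values in Q.items():
            ranked = sorted(action_values.items(), key=lambda kv: -kv[1])
            (best_a, best_q), (_, second_q) = ranked[0], ranked[1]
            if best_q - second_q >= margin:
                rules.append((state, best_a))
        return rules

The margin threshold in this sketch captures, in a crude way, the idea of promoting only reliably successful subsymbolic knowledge to the explicit, symbolic rule level; the paper's own extraction procedures for rules and complete plans are more elaborate.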