In contrast to existing reinforcement learning algorithms that generate only reactive policies and existing probabilistic planning algorithms that require a substantial amount of a priori knowledge in order to plan, we devise a two-stage, bottom-up learning-to-plan process: first, reinforcement learning/dynamic programming is applied, without the use of a priori domain-specific knowledge, to acquire a reactive policy; then, explicit plans are extracted from the learned reactive policy. Plan extraction is based on a beam search algorithm that performs temporal projection in a restricted fashion, guided by the value functions resulting from reinforcement learning/dynamic programming.
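To make the two-stage process concrete, here is a minimal, hypothetical Python sketch (the toy chain-world MDP, all names, and the scoring rule are assumptions for illustration, not taken from the paper): tabular Q-learning first acquires a reactive policy and value function without domain-specific knowledge, and a beam search then extracts an explicit plan, using the learned values to guide a restricted temporal projection.

```python
# Hypothetical sketch of the two-stage learning-to-plan process:
# Stage 1 learns a reactive policy/value function; Stage 2 extracts a plan.
import random

N_STATES, ACTIONS, GOAL = 6, [0, 1], 5   # toy chain world: 0 = left, 1 = right

def step(s, a):
    """Toy stochastic transition: the intended move succeeds 80% of the time."""
    move = (1 if a == 1 else -1) if random.random() < 0.8 else 0
    s2 = max(0, min(N_STATES - 1, s + move))
    return s2, (1.0 if s2 == GOAL else 0.0)

# Stage 1: reinforcement learning (tabular Q-learning) with no a priori knowledge.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
for _ in range(5000):
    s = random.randrange(N_STATES)
    for _ in range(20):
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(ACTIONS, key=lambda x: Q[s][x])
        s2, r = step(s, a)
        Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
        if s == GOAL:
            break

V = [max(q) for q in Q]   # learned value function, used to guide plan extraction

# Stage 2: beam search extracting an explicit plan via restricted temporal
# projection (here simplified: project the likeliest next state for each
# action and score candidate plans by the learned value of that state).
def extract_plan(start, beam_width=3, horizon=6):
    beam = [((), start)]                   # (action sequence, projected state)
    for _ in range(horizon):
        candidates = [(plan + (a,), max(0, min(N_STATES - 1, s + (1 if a == 1 else -1))))
                      for plan, s in beam for a in ACTIONS]
        beam = sorted(candidates, key=lambda c: V[c[1]], reverse=True)[:beam_width]
        if any(s == GOAL for _, s in beam):
            return next(p for p, s in beam if s == GOAL)
    return beam[0][0]

print(extract_plan(0))   # e.g. (1, 1, 1, 1, 1): move right until the goal
```

The value function does double duty here: it defines the reactive policy in Stage 1 and prunes the projection in Stage 2, so the beam never expands action sequences that the learner already rates as unpromising.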
Similar books and articles
Ron Sun, Edward Merrill & Todd Peterson (2001). From Implicit Skills to Explicit Knowledge: A Bottom-Up Model of Skill Learning. Cognitive Science 25 (2):203-244.
Ron Sun, Supplementing Neural Reinforcement Learning with Symbolic Methods: Possibilities and Challenges.
Ron Sun, Todd Peterson & Edward Merrill, Bottom-Up Skill Learning in Reactive Sequential Decision Tasks.
Ron Sun, Beyond Simple Rule Extraction: The Extraction of Planning Knowledge From Reinforcement Learners.