This paper addresses weighting and partitioning in complex reinforcement learning tasks, with the aim of facilitating learning. It presents some ideas regarding the weighting of multiple agents and extends them to partitioning an input/state space into multiple regions with differential weighting in those regions, exploiting the differential characteristics of regions and of agents to reduce the learning complexity of agents (and their function approximators) and thus to facilitate learning overall. It analyzes, in reinforcement learning tasks, different ways of partitioning a task and using agents selectively based on the partitioning. Based on this analysis, some heuristic methods are described and experimentally tested. We find that some off-line heuristic methods performed best, significantly better than single-agent models.
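The idea of partitioning a state space into regions and weighting multiple agents differently in each region can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual method: the hard region boundaries, the tabular Q-agents, and the fixed per-region weight table are all assumptions made for the example.

```python
class TabularQAgent:
    """A simple tabular Q-learning agent (illustrative only)."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        # Standard Q-learning update toward the bootstrapped target.
        td_target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (td_target - self.q[s][a])

def region_of(state, boundaries):
    """Map a state index to a region via hard partition boundaries."""
    for i, b in enumerate(boundaries):
        if state < b:
            return i
    return len(boundaries)

def combined_q(agents, weights, region, state):
    """Combine the agents' Q-values for `state`, weighting each agent
    by its weight for the current region (weights[agent][region])."""
    n_actions = len(agents[0].q[state])
    return [sum(w[region] * ag.q[state][a]
                for ag, w in zip(agents, weights))
            for a in range(n_actions)]
```

With weights of 1.0 for one agent and 0.0 for the others in a given region, this reduces to selecting agents exclusively per region; intermediate weights blend the agents' value estimates, which is the spectrum of schemes the paper analyzes.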
Similar books and articles
Karl Tuyls, Ann Nowe, Tom Lenaerts & Bernard Manderick (2004). An Evolutionary Game Theoretic Perspective on Learning in Multi-Agent Systems. Synthese 139 (2):297 - 330.
Ron Sun, Supplementing Neural Reinforcement Learning with Symbolic Methods Possibilities and Challenges.
Ron Sun, Todd Peterson & Edward Merrill, Bottom-Up Skill Learning in Reactive Sequential Decision Tasks.
Roland Mühlenbernd (2011). Learning with Neighbours. Synthese 183 (S1):87-109.
Pierre Barbaroux & Gilles Enée (2005). Spontaneous Coordination and Evolutionary Learning Processes in an Agent-Based Model. Mind and Society 4 (2):179-195.