Q(s_t, a_t) := (1 - α) Q(s_t, a_t) + α (r_{t+1} + γ max_a' Q(s_{t+1}, a'))
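The update above is the standard tabular Q-learning rule. A minimal sketch of that rule, assuming a dictionary-backed Q-table and illustrative values for the learning rate α and discount factor γ (neither is specified in the text):

```python
# Minimal sketch of the tabular Q-learning update shown above.
# The action names, alpha=0.1, and gamma=0.9 are illustrative
# assumptions, not values taken from the paper.
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Apply Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * max_a' Q(s',a'))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)
    return Q[(s, a)]

# Usage: one update from an all-zero table.
Q = defaultdict(float)
actions = ["cooperate", "defect"]
q_update(Q, "s0", "cooperate", 1.0, "s1", actions)  # -> 0.1
```

With an all-zero table the update reduces to alpha * r = 0.1, which makes the interpolation between the old estimate and the new target easy to verify by hand.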

Abstract
Straightforward reinforcement learning in multi-agent co-learning settings often yields poor outcomes. Meta-learning processes beyond straightforward reinforcement learning may be necessary to achieve good (or optimal) outcomes. We describe algorithmic meta-learning processes, termed "manipulation", which provide a cognitively realistic and effective means of learning cooperation, and we discuss various "manipulation" routines that improve multi-agent co-learning. We hope to develop better adaptive means of multi-agent cooperation, without requiring a priori knowledge, and to advance multi-agent co-learning beyond existing theories and techniques.
Similar books and articles
John Cantwell (2007). A Model for Updates in a Multi-Agent Setting. Journal of Applied Non-Classical Logics 17 (2):183-196.
Added to index: 2012-09-05
