We investigate a simple stochastic model of social network formation by the process of reinforcement learning with discounting of the past. In the limit, for any value of the discounting parameter, small, stable cliques are formed. However, the time it takes to reach the limiting state in which cliques have formed is very sensitive to the discounting parameter. Depending on this value, the limiting result may or may not be a good predictor for realistic observation times.
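The process described — each agent repeatedly choosing partners with probability proportional to accumulated reinforcement weights, while all past weights are discounted each round — can be sketched as a minimal simulation. The update rule, function names, and parameter values below are illustrative assumptions, not the paper's exact specification.

```python
import random


def simulate(n_agents=10, discount=0.1, steps=5000, seed=0):
    """Simulate discounted reinforcement learning of social ties.

    Each round, every agent picks a partner with probability
    proportional to its current weights; the chosen tie gains a
    unit of reinforcement, and all of the agent's weights first
    decay by the discounting parameter. (A sketch of the model
    class in the abstract, not the authors' exact dynamics.)
    """
    rng = random.Random(seed)
    # weights[i][j]: agent i's propensity to visit agent j (no self-ties)
    weights = [[0.0 if i == j else 1.0 for j in range(n_agents)]
               for i in range(n_agents)]
    for _ in range(steps):
        for i in range(n_agents):
            # choose a partner with probability proportional to weight
            j = rng.choices(range(n_agents), weights=weights[i], k=1)[0]
            # discount the past, then reinforce the chosen tie
            weights[i] = [(1.0 - discount) * w for w in weights[i]]
            weights[i][j] += 1.0
    return weights


def favourite_partners(weights):
    """Each agent's most-reinforced partner (the emerging clique structure)."""
    return [max(range(len(row)), key=row.__getitem__) for row in weights]
```

Because each agent's total weight obeys S ← (1 − x)S + 1, it converges toward 1/x; with discounting, early choices are forgotten and the weights concentrate on a few stable partners, which is the clique formation the abstract describes. How quickly this lock-in becomes visible depends strongly on the discount value.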
Similar books and articles
A. Tsoularis (2007). A Learning Strategy for Predator Preying on Edible and Inedible Prey. Acta Biotheoretica 55 (3):283-295.
Ron Sun, Supplementing Neural Reinforcement Learning with Symbolic Methods Possibilities and Challenges.
Ron Sun, Beyond Simple Rule Extraction: The Extraction of Planning Knowledge From Reinforcement Learners.
Michele Bernasconi & Matteo Galizzi (2010). Network Formation in Repeated Interactions: Experimental Evidence on Dynamic Behaviour. Mind and Society 9 (2):193-228.
Mikhail N. Zhadin (2000). LTP and Reinforcement: Possible Role of the Monoaminergic Systems. Behavioral and Brain Sciences 23 (2):287-288.
Added to index: 2009-12-05