A Proposal of an Action Planning Method for Autonomous Mobile Robots Using Reinforcement Learning (強化学習を用いた自律移動型ロボットの行動計画法の提案)

Transactions of the Japanese Society for Artificial Intelligence 16:501-509 (2001)

Abstract

In a previous paper, we proposed a solution to the navigation problem of a mobile robot. In our approach, we formulated the following two problems at each time step as discrete optimization problems: 1) estimation of the robot's position and direction, and 2) action decision. While our simulation results showed the effectiveness of this approach, the values of the weights in the objective functions were set heuristically. This paper presents a theoretically grounded method, based on reinforcement learning, for adjusting the weight parameters of the objective function, which incorporates pieces of heuristic knowledge about the action decision. In our reinforcement learning formulation, the expected reward given to a robot's trajectory is defined as the value function to be maximized. The robot's trajectories are generated stochastically because we use a probabilistic policy to determine the robot's actions, which allows a search for the globally optimal trajectory. However, this decision process is not a Markov decision process, because the objective function includes the action taken at the previous time step. Thus Q-learning, a conventional reinforcement learning method, cannot be applied to this problem. Instead, we applied Williams's episodic REINFORCE approach to the action decision and derived a learning rule for the weight parameters of the objective function. Moreover, to reduce computation time, we applied stochastic hill climbing to the maximization of the value function. The derived learning rule was verified experimentally.
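To make the ingredients of the abstract concrete, here is a minimal Python sketch of an episodic REINFORCE update for the weights of a linear objective function, with a softmax (Gibbs) policy standing in for the paper's probabilistic action-decision rule. The toy environment, the feature design, and all parameter values below are illustrative assumptions, not taken from the paper; the paper's actual objective function, robot model, and stochastic hill-climbing step are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyNav:
    """Tiny 1-D navigation toy (purely illustrative, not the paper's robot).

    The robot sits at an integer position and must reach a goal. Each
    candidate action is scored by a feature vector of heuristic terms,
    mirroring the idea of an objective function that mixes pieces of
    heuristic knowledge with tunable weights.
    """
    GOAL = 5
    ACTIONS = (-1, 0, +1)   # move left, stay, move right

    def reset(self):
        self.pos = 0
        self.prev_a = 1      # index of "stay"

    def candidate_features(self):
        """One feature row per candidate action: [progress toward goal,
        smoothness w.r.t. the previous action]. The smoothness term depends
        on the previous action, so the decision is non-Markovian in the
        robot's pose alone, as in the paper."""
        feats = []
        for i, a in enumerate(self.ACTIONS):
            progress = -abs((self.pos + a) - self.GOAL)
            smooth = -abs(i - self.prev_a)
            feats.append([progress, smooth])
        return np.array(feats, dtype=float)

    def step(self, i):
        self.pos += self.ACTIONS[i]
        self.prev_a = i
        done = self.pos == self.GOAL
        return (1.0 if done else 0.0), done


def softmax(x, temp=1.0):
    z = (x - x.max()) / temp
    e = np.exp(z)
    return e / e.sum()


def run_episode(w, env, horizon=30, temp=1.0):
    """Sample one trajectory with the stochastic policy, accumulating the
    score function  sum_t d/dw log pi(a_t | history)  that REINFORCE needs."""
    env.reset()
    grad_log = np.zeros_like(w)
    R = 0.0
    for _ in range(horizon):
        feats = env.candidate_features()
        probs = softmax(feats @ w, temp)        # objective = weighted heuristics
        i = rng.choice(len(probs), p=probs)
        grad_log += (feats[i] - probs @ feats) / temp
        r, done = env.step(i)
        R += r
        if done:
            break
    return R, grad_log


# Episodic REINFORCE:  w <- w + lr * (R - baseline) * sum_t grad log pi
w = np.array([0.1, 0.1])
baseline = 0.0
for ep in range(200):
    R, glog = run_episode(w, ToyNav())
    w += 0.05 * (R - baseline) * glog
    baseline += 0.1 * (R - baseline)            # running-average baseline
print("learned weights:", w)
```

Because the update uses only the episode's total reward and the accumulated score function, it does not require the Markov property that Q-learning relies on, which is why an episodic policy-gradient method fits the setting described in the abstract.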
