This paper addresses the logical foundations of goal-regression planning in autonomous rational agents. It focuses mainly on three problems. The first is that goals and subgoals will often be conjunctions, and to apply goal-regression planning to a conjunction we usually have to plan separately for the conjuncts and then combine the resulting subplans. A logical problem arises from the fact that the subplans may destructively interfere with each other. This problem has been partially solved in the AI literature (e.g., in SNLP and UCPOP), but the solutions proposed there work only when a restrictive assumption is satisfied. This assumption pertains to the computability of threats. It is argued that this assumption may fail for an autonomous rational agent operating in a complex environment. Relaxing this assumption leads to a theory of defeasible planning. The theory is formulated precisely and an implementation in the OSCAR architecture is discussed.
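The core operation the abstract describes can be illustrated with a minimal STRIPS-style sketch. The names (`Action`, `regress`, `threatens`) and the blocks-world literals are illustrative assumptions, not drawn from the paper or from OSCAR itself: goal regression through an action replaces the action's effects with its preconditions, and a "threat" in the SNLP/UCPOP sense is a step that deletes a literal another subplan is protecting.

```python
# Hypothetical sketch of goal regression over conjunctive goals.
# Action names and literals are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconds: frozenset   # literals that must hold before the action
    adds: frozenset       # literals the action makes true
    deletes: frozenset    # literals the action makes false

def regress(goal: frozenset, action: Action):
    """Regress a conjunction of goal literals through an action.

    The resulting subgoal is (goal - adds) | preconds, provided the
    action deletes none of the goal literals; if it does, regression
    fails (returns None), since the action would undo part of the goal.
    """
    if goal & action.deletes:
        return None
    return (goal - action.adds) | action.preconds

def threatens(action: Action, protected: frozenset) -> bool:
    """A step threatens a protected literal if it deletes it --
    the destructive interference between subplans that causal-link
    planners such as SNLP and UCPOP are designed to detect."""
    return bool(action.deletes & protected)

# Example: stacking A on B consumes clear(B) and holding(A).
stack = Action("stack(A,B)",
               preconds=frozenset({"clear(B)", "holding(A)"}),
               adds=frozenset({"on(A,B)"}),
               deletes=frozenset({"clear(B)", "holding(A)"}))
```

Note that `threatens` here is decidable by set intersection; the abstract's point is precisely that for an autonomous agent in a complex environment, whether a step threatens a protected condition may not be computable in this way, motivating a defeasible treatment instead.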
Citations of this work
Keith Stenning & Michiel van Lambalgen (2005). Semantic Interpretation as Computation in Nonmonotonic Logic: The Real Meaning of the Suppression Task. Cognitive Science 29 (6):919-960.
Similar books and articles
John L. Pollock (1999). Rational Cognition in OSCAR. Agent Theories.
Michael E. Bratman (1997). Responsibility and Planning. Journal of Ethics 1 (1):27-43.
Added to index: 2009-01-28
Total downloads: 6 (#214,156 of 1,102,035)
Recent downloads (6 months): 1 (#306,606 of 1,102,035)