The “grand problem” of AI has always been to build artificial agents of human-level intelligence, capable of operating in environments of real-world complexity. OSCAR is a cognitive architecture for such agents, implemented in LISP. OSCAR is based on my extensive work in philosophy concerning both epistemology and rational decision making. This paper provides a detailed overview of OSCAR. The main conclusion is that such agents must be capable of operating against a background of pervasive ignorance, because the real world is too complex for them to know more than a small fraction of what is true. This is handled by giving the agent the power to reason defeasibly. The OSCAR system of defeasible reasoning is sketched. It is argued that if epistemic cognition must be defeasible, planning must also be done defeasibly, and the best way to do that is to reason defeasibly about plans. A sketch is given of how this might work.
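The core idea the abstract describes, reasoning defeasibly under pervasive ignorance, can be illustrated with a minimal sketch: a prima facie rule licenses its conclusion from its premises unless a defeater for that inference is also believed. This is a hypothetical toy in Python, not OSCAR's actual LISP implementation; the `Rule` class, the example propositions, and the simple forward-chaining loop are all my own illustrative assumptions.

```python
# Minimal sketch of defeasible inference (hypothetical; not OSCAR's LISP code).
# A prima facie rule supports its conclusion unless a defeater is believed.

from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    premises: frozenset    # propositions that must be believed
    conclusion: str        # proposition the rule defeasibly supports
    defeaters: frozenset   # propositions that defeat this inference


def derive(facts, rules):
    """Forward-chain to a fixed point, withholding defeated conclusions."""
    beliefs = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if (r.premises <= beliefs
                    and not (r.defeaters & beliefs)
                    and r.conclusion not in beliefs):
                beliefs.add(r.conclusion)
                changed = True
    return beliefs


# Example (illustrative): "looks red" defeasibly supports "is red",
# defeated by learning the object is under red illumination.
rules = [Rule(frozenset({"looks_red"}), "is_red",
              frozenset({"red_light_illumination"}))]

print(derive({"looks_red"}, rules))                            # 'is_red' is derived
print(derive({"looks_red", "red_light_illumination"}, rules))  # 'is_red' is withheld
```

The sketch captures only the simplest case, where defeaters are given up front; a full defeasible reasoner like the one the paper sketches must also retract conclusions when defeaters are discovered later.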
Similar books and articles
Leopold Stubenberg (1992). What is It Like to Be Oscar? Synthese 90 (1):1-26.
John L. Pollock (1999). Rational Cognition in Oscar. Agent Theories.
Added to index: 2009-01-28