The “grand problem” of AI has always been to build artificial agents with human-like intelligence. That is the stuff of science fiction, but it is also the ultimate aspiration of AI. In retrospect, we can appreciate how difficult a problem this is, and since its inception AI has focused instead on smaller, more manageable problems, in the hope that progress there will have useful implications for the grand problem. Now there is a resurgence of interest in tackling the grand problem head-on. Perhaps AI has made enough progress on the little problems that we can fruitfully address the big problem. The objective is to build agents of human-level intelligence capable of operating in environments of real-world complexity. I will refer to these as GIAs ("generally intelligent agents"). OSCAR is a cognitive architecture for GIAs, implemented in LISP. OSCAR draws heavily on my work in philosophy concerning both epistemology (Pollock 1974, 1986, 1990, 1995, 1998, 2008b, 2008; Pollock and Cruz 1999; Pollock and Oved 2005) and rational decision making (Pollock 2005, 2006, 2006a).
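The abstract does not describe OSCAR's internals, but to make the phrase "cognitive architecture for GIAs, implemented in LISP" concrete, here is a minimal, purely illustrative Common Lisp sketch of a sense-reason-act cycle such as an agent architecture might provide. The structure and all names (AGENT, PERCEIVE, UPDATE-BELIEFS, CHOOSE-ACTION, AGENT-STEP) are hypothetical and are not taken from OSCAR's actual code.

  ;; Illustrative sketch only; these names are hypothetical, not OSCAR's API.
  (defstruct agent
    (beliefs nil)   ; what the agent currently believes
    (goals nil))    ; what the agent is trying to achieve

  (defun perceive (environment)
    "Stub: gather percepts from ENVIRONMENT."
    (declare (ignore environment))
    nil)

  (defun update-beliefs (agent percepts)
    "Stub: revise AGENT's beliefs in light of PERCEPTS."
    (setf (agent-beliefs agent) (append percepts (agent-beliefs agent))))

  (defun choose-action (agent)
    "Stub: pick an action given AGENT's beliefs and goals."
    (declare (ignore agent))
    :wait)

  (defun agent-step (agent environment)
    "One cycle of the sense-reason-act loop."
    (update-beliefs agent (perceive environment))
    (choose-action agent))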
Similar books and articles
Emad Abdel Rahim Dahiyat (2010). Intelligent Agents and Liability: Is It a Doctrinal Problem or Merely a Problem of Explanation? [REVIEW] Artificial Intelligence and Law 18 (1):103-121.
Leopold Stubenberg (1992). What is It Like to Be Oscar? Synthese 90 (1):1-26.
John L. Pollock (1999). Rational Cognition in Oscar. Agent Theories.