Agent Theories (1999)
Stuart Russell describes rational agents as "those that do the right thing". The problem of designing a rational agent then becomes the problem of figuring out what the right thing is. There are two approaches to the latter problem, depending on the kind of agent we want to build. On the one hand, anthropomorphic agents are those that can help human beings rather directly in their intellectual endeavors, which consist of decision making and data processing. An agent that can help humans in these enterprises must make decisions and draw conclusions that are rational by human standards of rationality. Anthropomorphic agents can be contrasted with goal-oriented agents: those that can carry out certain narrowly defined tasks in the world. Here the objective is to get the job done, and it makes little difference how the agent achieves its design goal.
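The contrast above can be made concrete with a minimal sketch (not from the paper; the action sets and the `human_judgment` scoring function are hypothetical illustrations): a goal-oriented agent is evaluated only on whether it reaches its goal, while an anthropomorphic agent is evaluated against human standards of rationality.

```python
from typing import Callable, List

def goal_oriented_agent(state: int, goal: int) -> int:
    """Pick the action that moves the state closest to the goal.

    How the goal is reached is irrelevant; only goal attainment matters.
    """
    actions = [-1, 0, 1]  # hypothetical action set
    return min(actions, key=lambda a: abs((state + a) - goal))

def anthropomorphic_agent(options: List[int],
                          human_judgment: Callable[[int], float]) -> int:
    """Choose the option a human evaluator would rate as most rational.

    The standard of success is the human judgment function itself.
    """
    return max(options, key=human_judgment)

# The goal-oriented agent simply steps toward its goal:
print(goal_oriented_agent(state=3, goal=7))  # -> 1

# The anthropomorphic agent defers to a (hypothetical) human preference:
print(anthropomorphic_agent([10, 20, 30], lambda x: -abs(x - 20)))  # -> 20
```

The design choice mirrors the abstract: the first function could be replaced by any policy that reaches the goal, whereas the second is constrained to reproduce the human evaluator's verdict.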
Citations of this work
Matt Williams & Jon Williamson (2006). Combining Argumentation and Bayesian Nets for Breast Cancer Prognosis. Journal of Logic, Language and Information 15 (1-2):155-178.
Similar books and articles
John L. Pollock (2001). Evaluative Cognition. Noûs 35 (3):325–364.
Isaac Levi (2008). Why Rational Agents Should Not Be Liberal Maximizers. Canadian Journal of Philosophy 38 (S1):1-17.
Added to index: 2009-01-28