Agent Theories (1999)
Stuart Russell describes rational agents as "those that do the right thing". The problem of designing a rational agent then becomes the problem of figuring out what the right thing is. There are two approaches to the latter problem, depending upon the kind of agent we want to build. On the one hand, anthropomorphic agents are those that can help human beings rather directly in their intellectual endeavors. These endeavors consist of decision making and data processing. An agent that can help humans in these enterprises must make decisions and draw conclusions that are rational by human standards of rationality. Anthropomorphic agents can be contrasted with goal-oriented agents: those that can carry out certain narrowly-defined tasks in the world. Here the objective is to get the job done, and it makes little difference how the agent achieves its design goal.
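The contrast drawn in the abstract can be illustrated with a minimal sketch. The class names, the `human_norms` predicates, and the percept-to-action interface below are all illustrative assumptions, not anything specified in the paper: a goal-oriented agent is evaluated only on whether the task gets done, while an anthropomorphic agent's individual decisions must also pass human standards of rationality.

```python
from abc import ABC, abstractmethod


class Agent(ABC):
    """A rational agent maps percepts to actions, aiming to 'do the right thing'."""

    @abstractmethod
    def act(self, percept):
        ...


class GoalOrientedAgent(Agent):
    """Judged solely by whether its narrowly-defined task gets done."""

    def __init__(self, policy):
        self.policy = policy  # any means to the end is acceptable

    def act(self, percept):
        return self.policy(percept)


class AnthropomorphicAgent(Agent):
    """Judged by human standards: each decision must itself be rational."""

    def __init__(self, policy, human_norms):
        self.policy = policy
        self.human_norms = human_norms  # predicates a human reasoner would accept

    def act(self, percept):
        action = self.policy(percept)
        # Reject an action that violates human norms of rationality,
        # even if it would achieve the design goal.
        if not all(norm(percept, action) for norm in self.human_norms):
            raise ValueError("action violates human standards of rationality")
        return action
```

On this sketch, the two agent kinds share an interface but differ in their success criterion: the first checks outcomes, the second checks the reasoning behind each action.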
Similar books and articles
John L. Pollock (2001). Evaluative Cognition. Noûs 35 (3):325–364.
Isaac Levi (2008). Why Rational Agents Should Not Be Liberal Maximizers. Canadian Journal of Philosophy 38 (S1):1-17.