A user is a system capable of creating and pursuing its own goals. Is it possible to design and implement an artificial user? Traditional artificial systems focus on how to achieve a given goal: most learning algorithms search for an optimal solution to a problem, given a set of optimization criteria and a goal (or set of goals). Real agents and real users, however, must develop new goals in order to cope with their environment; they must discover “what” they want to achieve, not only “how” to achieve it. The development of entirely new goals on the basis of interaction with the environment is here defined as the “what” problem. In this paper we try to define this problem and propose an architecture capable of addressing it. This architecture is put forward as the foundation for an artificial user.
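The contrast the abstract draws between the “how” problem (planning toward a fixed goal) and the “what” problem (generating new goals from interaction) can be illustrated with a minimal sketch. This code is not from the paper; the functions, the depth-limited planner, and the novelty heuristic for goal generation are all illustrative assumptions.

```python
# "How": given a fixed goal, search for a plan that achieves it.
def solve(goal, actions, state, apply_action, max_depth=5):
    """Depth-limited search for an action sequence reaching `goal`
    (a predicate on states). Returns a list of actions, or None."""
    if goal(state):
        return []
    if max_depth == 0:
        return None
    for a in actions:
        plan = solve(goal, actions, apply_action(state, a),
                     apply_action, max_depth - 1)
        if plan is not None:
            return [a] + plan
    return None

# "What": create a new goal out of the agent's interaction history.
# Here a simple novelty heuristic (assumed for illustration): target
# the state the agent has visited least often.
def generate_goal(visit_counts):
    """Return a goal predicate targeting the least-visited state."""
    target = min(visit_counts, key=visit_counts.get)
    return lambda state: state == target

# Toy environment: states are integers, actions are +1 / -1 steps.
visit_counts = {0: 5, 1: 2, 2: 7}
goal = generate_goal(visit_counts)          # the "what" step
plan = solve(goal, [+1, -1], 0, lambda s, a: s + a)  # the "how" step
print(plan)  # a one-step plan from state 0 to the novel state 1
```

The point of the sketch is that `solve` presupposes a goal, whereas `generate_goal` manufactures one from the agent's own history, which is the step the paper's proposed architecture is meant to supply.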
Similar books and articles
Colin Allen, Iva Smit & Wendell Wallach (2005). Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches. [REVIEW] Ethics and Information Technology 7 (3):149-155.
Peter Lanz & David Mcfarland (1995). On Representation, Goals and Cognition. International Studies in the Philosophy of Science 9 (2):121 – 133.
Pavel Prudkov (2010). A View on Human Goal-Directed Activity and the Construction of Artificial Intelligence. Minds and Machines 20 (3):363-383.
Mark Bedau (to appear). In Luciano Floridi (ed.), Blackwell Guide to the Philosophy of Computing and Information.
Z. Chen (1994). Knowledge Discovery and System-User Partnership: On a Production “Adversarial Partnership” Approach. [REVIEW] AI and Society 8 (4):341-356.
Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf (2008). The Ethics of Designing Artificial Agents. Ethics and Information Technology 10 (2-3):115-121.
Dawn Jutla (2010). Layering Privacy on Operating Systems, Social Networks, and Other Platforms by Design. Identity in the Information Society 3 (2):319-341.
Mariarosaria Taddeo & Luciano Floridi (2008). A Praxical Solution of the Symbol Grounding Problem. Minds and Machines 17 (4):369-389.
Joan E. Sieber (2005). Misconceptions and Realities About Teaching Online. Science and Engineering Ethics 11 (3):329-340.
Robert W. Lurz & Carla Krachun (2011). How Could We Know Whether Nonhuman Primates Understand Others' Internal Goals and Intentions? Solving Povinelli's Problem. Review of Philosophy and Psychology 2 (3):449-481.