In Robert E. Cummins & John L. Pollock (eds.), Philosophy and AI. Cambridge, MA: MIT Press, pp. 215-259 (1991)
Cognitive agents, whether human or computer, that engage in natural-language discourse and that have beliefs about the beliefs of other cognitive agents must be able to represent objects the way they believe them to be and the way they believe others believe them to be. They must be able to represent other cognitive agents both as objects of beliefs and as agents of beliefs. They must be able to represent their own beliefs, and they must be able to represent beliefs as objects of beliefs. These requirements raise questions about the number of tokens of the belief representation language needed to represent believers and propositions in their normal roles and in their roles as objects of beliefs. In this paper, we explicate the relations among nodes, mental tokens, concepts, actual objects, concepts in the belief spaces of an agent and the agent's model of other agents, concepts of other cognitive agents, and propositions. We extend, deepen, and clarify our theory of intensional knowledge representation for natural-language processing, as presented in previous papers and in light of objections raised by others. The essential claim is that tokens in a knowledge-representation system represent only intensions and not extensions. We are pursuing this investigation by building CASSIE, a computer model of a cognitive agent and, to the extent she works, a cognitive agent herself. CASSIE's mind is implemented in the SNePS knowledge-representation and reasoning system.
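The abstract's central claim — that tokens in the knowledge-representation system represent only intensions, never extensions, and that an agent's model of another agent's concepts uses further tokens — can be illustrated with a small sketch. This is a hypothetical toy in Python, not the SNePS implementation; all class and method names are assumptions made for illustration:

```python
# Toy sketch of intensional tokens and nested belief spaces.
# Two concepts believed co-referential remain distinct tokens
# (intensions), and an agent's model of another agent's concept
# is a separate token again.

import itertools

class Concept:
    """An intensional token, identified by how something is conceived,
    not by the external object (if any) it picks out."""
    _ids = itertools.count()

    def __init__(self, description):
        self.id = next(Concept._ids)
        self.description = description

    def __repr__(self):
        return f"Concept#{self.id}({self.description!r})"

class Agent:
    def __init__(self, name):
        self.name = name
        self.concepts = {}    # description -> this agent's own tokens
        self.models = {}      # other agent's name -> model of that agent
        self.coreference = [] # pairs of concepts believed co-referential

    def concept_of(self, description):
        # Each distinct way of conceiving something gets its own token.
        return self.concepts.setdefault(description, Concept(description))

    def model_of(self, other_name):
        # The agent's model of another believer is itself an agent-like
        # structure, holding the modeler's tokens for the other's concepts.
        return self.models.setdefault(other_name, Agent(f"{self.name}:{other_name}"))

    def believe_coreferential(self, c1, c2):
        # Believing two concepts pick out one object does NOT merge them.
        self.coreference.append((c1, c2))

cassie = Agent("CASSIE")
ms = cassie.concept_of("the Morning Star")
es = cassie.concept_of("the Evening Star")
cassie.believe_coreferential(ms, es)
assert ms is not es            # intensionally distinct tokens survive

john = cassie.model_of("John")
johns_ms = john.concept_of("the Morning Star")
assert johns_ms is not ms      # CASSIE's token for John's concept is
                               # not her own concept of the Morning Star
```

The design choice mirrors the paper's point about token counts: representing a believer both as an object of belief and as an agent of beliefs requires separate tokens in the modeler's belief space and in the nested model.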
Keywords: Artificial Intelligence, Knowledge, Minds, Model
Citations of this work
William J. Rapaport (2006). How Helen Keller Used Syntactic Semantics to Escape From a Chinese Room. Minds and Machines 16 (4):381-436.
Similar books and articles
William J. Rapaport (1991). Predication, Fiction, and Artificial Intelligence. Topoi 10 (1):79-111.
Renata Wassermann (1999). Resource Bounded Belief Revision. Erkenntnis 50 (2-3):429-446.
M. H. Lee & N. J. Lacey (2003). The Influence of Epistemology on the Design of Artificial Agents. Minds and Machines 13 (3):367-395.
Igor Douven & Alexander Riegler (2009). Extending the Hegselmann–Krause Model III: From Single Beliefs to Complex Belief States. Episteme 6 (2):145-163.
Ronald Giere (2010). An Agent-Based Conception of Models and Scientific Representation. Synthese 172 (2):269-281.
Nicola Lacey & M. Lee (2003). The Epistemological Foundations of Artificial Agents. Minds and Machines 13 (3):339-365.
Stuart C. Shapiro & William J. Rapaport (1992). The SNePS Family. Computers and Mathematics with Applications 23:243-275.
Hans van Ditmarsch & Willem Labuschagne (2007). My Beliefs About Your Beliefs: A Case Study in Theory of Mind and Epistemic Logic. Synthese 155 (2):191-209.
Robert F. Hadley (1991). A Sense-Based, Process Model of Belief. Minds and Machines 1 (3):279-320.