This essay re-examines Meinong's "Über Gegenstandstheorie" and undertakes a clarification and revision of it that is faithful to Meinong, overcomes the various objections to his theory, and is capable of offering solutions to various problems in philosophy of mind and philosophy of language. I then turn to a discussion of a historically and technically interesting Russell-style paradox (now known as "Clark's Paradox") that arises in the modified theory. I also examine the alternative Meinong-inspired theories of Hector-Neri Castañeda and Terence Parsons.
This essay considers what it means to understand natural language and whether a computer running an artificial-intelligence program designed to understand natural language does in fact do so. It is argued that a certain kind of semantics is needed to understand natural language, that this kind of semantics is mere symbol manipulation (i.e., syntax), and that, hence, it is available to AI systems. Recent arguments by Searle and Dretske to the effect that computers cannot understand natural language are discussed, and a prototype natural-language-understanding system is presented as an illustration.
In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer’s arguments to the contrary.
There are many branches of philosophy called “the philosophy of X,” where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.
John Searle once said: "The Chinese room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)." I say: "Yes". Stuart C. Shapiro has said: "Does that make any sense? Yes: Everything makes sense. The question is: What sense does it make?" This essay explores what sense it makes to say that syntax by itself is sufficient for semantics.
This essay continues my investigation of `syntactic semantics': the theory that, pace Searle's Chinese-Room Argument, syntax does suffice for semantics (in particular, for the semantics needed for a computational cognitive theory of natural-language understanding). Here, I argue that syntactic semantics (which is internal and first-person) is what has been called a conceptual-role semantics: The meaning of any expression is the role that it plays in the complete system of expressions. Such a `narrow', conceptual-role semantics is the appropriate sort of semantics to account (from an `internal', or first-person perspective) for how a cognitive agent understands language. Some have argued for the primacy of external, or `wide', semantics, while others have argued for a two-factor analysis. But, although two factors can be specified (one internal and first-person, the other only specifiable in an external, third-person way), only the internal, first-person one is needed for understanding how someone understands. A truth-conditional semantics can still be provided, but only from a third-person perspective.
Turner argues that computer programs must have purposes, that implementation is not a kind of semantics, and that computers might need to understand what they do. I respectfully disagree: Computer programs need not have purposes, implementation is a kind of semantic interpretation, and neither human computers nor computing machines need to understand what they do.
Review of Joseph Y. Halpern (ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference (Los Altos, CA: Morgan Kaufmann, 1986).
Cognitive agents, whether human or computer, that engage in natural-language discourse and that have beliefs about the beliefs of other cognitive agents must be able to represent objects the way they believe them to be and the way they believe others believe them to be. They must be able to represent other cognitive agents both as objects of beliefs and as agents of beliefs. They must be able to represent their own beliefs, and they must be able to represent beliefs as objects of beliefs. These requirements raise questions about the number of tokens of the belief representation language needed to represent believers and propositions in their normal roles and in their roles as objects of beliefs. In this paper, we explicate the relations among nodes, mental tokens, concepts, actual objects, concepts in the belief spaces of an agent and the agent's model of other agents, concepts of other cognitive agents, and propositions. We extend, deepen, and clarify our theory of intensional knowledge representation for natural-language processing, as presented in previous papers and in light of objections raised by others. The essential claim is that tokens in a knowledge-representation system represent only intensions and not extensions. We are pursuing this investigation by building CASSIE, a computer model of a cognitive agent and, to the extent she works, a cognitive agent herself. CASSIE's mind is implemented in the SNePS knowledge-representation and reasoning system.
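By way of illustration, here is a minimal Python sketch (my own hypothetical code; SNePS itself is implemented quite differently) of the core claim that tokens represent intensions, not extensions: each belief space mints its own token for a label, so an agent's concept of Lucy and its model of another agent's concept of Lucy remain distinct, and no unintended coreference is entailed.

```python
# Minimal sketch of intensional tokens and nested belief spaces.
# Illustrative only; not the SNePS implementation.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Token:
    """An intensional token: it represents a concept, not the object itself."""
    label: str   # e.g., "Lucy"
    space: str   # whose belief space this token lives in

@dataclass
class BeliefSpace:
    agent: str
    beliefs: list = field(default_factory=list)   # propositions held true here
    tokens: dict = field(default_factory=dict)    # label -> Token

    def token_for(self, label):
        # Each belief space gets its OWN token for a given label:
        # two agents' concepts of "Lucy" are distinct intensions,
        # even if they happen to denote the same actual object.
        if label not in self.tokens:
            self.tokens[label] = Token(label, self.agent)
        return self.tokens[label]

narrator = BeliefSpace("Cassie")
john_space = BeliefSpace("John")   # Cassie's model of John's beliefs

lucy_for_cassie = narrator.token_for("Lucy")
lucy_for_john = john_space.token_for("Lucy")

# Cassie believes Lucy is rich; Cassie believes John believes Lucy is sweet.
narrator.beliefs.append(("Rich", lucy_for_cassie))
narrator.beliefs.append(("Believes", "John", ("Sweet", lucy_for_john)))

# The two tokens are distinct, so no unintended coreference is entailed:
assert lucy_for_cassie != lucy_for_john
```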
SNePS, the Semantic Network Processing System [45, 54], has been designed to be a system for representing the beliefs of a natural-language-using intelligent system (a "cognitive agent"). It has always been the intention that a SNePS-based "knowledge base" would ultimately be built, not by a programmer or knowledge engineer entering representations of knowledge in some formal language or data-entry system, but by a human informing it using a natural language (NL) (generally supposed to be English), or by the system reading books or articles that had been prepared for human readers. Because of this motivation, the criteria for the development of SNePS have included: it should be able to represent anything and everything expressible in NL; it should be able to represent generic, as well as specific, information; it should be able to use the generic and the specific information to reason and infer information implied by what it has been told; it cannot count on any particular order among the pieces of information it is given; it must continue to act reasonably even if the information it is given includes circular definitions, recursive rules, and inconsistent information.
A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that “everything has a name” was the key to her success, enabling her to “partition” her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming.
A critical survey of some attempts to define ‘computer’, beginning with some informal ones, then critically evaluating those of three philosophers, and concluding with an examination of whether the brain and the universe are computers.
A critique of several recent objections to John Searle's Chinese-Room Argument against the possibility of "strong AI" is presented. The objections are found to miss the point, and a stronger argument against Searle is presented, based on a distinction between "syntactic" and "semantic" understanding.
The proper treatment of computationalism, as the thesis that cognition is computable, is presented and defended. Some arguments of James H. Fetzer against computationalism are examined and found wanting, and his positive theory of minds as semiotic systems is shown to be consistent with computationalism. An objection is raised to an argument of Selmer Bringsjord against one strand of computationalism, namely, that Turing-Test-passing artifacts are persons; it is argued that, whether or not this objection holds, such artifacts will inevitably be persons.
I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax (a study of relations among symbols, including meanings), and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: The base case of semantic understanding (understanding a domain in terms of itself) is syntactic understanding. (3) An internal (or `narrow'), first-person point of view makes an external (or `wide'), third-person point of view otiose for purposes of understanding cognition.
“Contextual” vocabulary acquisition is the active, deliberate acquisition of a meaning for a word in a text by reasoning from textual clues and prior knowledge, including language knowledge and hypotheses developed from prior encounters with the word, but without external sources of help such as dictionaries or people. But what is “context”? Is it just the surrounding text? Does it include the reader’s background knowledge? I argue that the appropriate context for contextual vocabulary acquisition is the reader’s “internalization” of the text “integrated” into the reader’s “prior” knowledge via belief revision.
Philosophy has been characterized (e.g., by Benson Mates) as a field whose problems are unsolvable. This has often been taken to mean that there can be no progress in philosophy as there is in mathematics or science. The nature of problems and solutions is considered, and it is argued that solutions are always parts of theories, hence that acceptance of a solution requires commitment to a theory (as suggested by William Perry's scheme of cognitive development). Progress can be had in philosophy in the same way as in mathematics and science by knowing what commitments are needed for solutions. Similar views of Rescher and Castañeda are discussed.
This essay examines the role of non-existent objects in "epistemological ontology" — the study of the entities that make thinking possible. An earlier revision of Meinong's Theory of Objects is reviewed, Meinong's notions of Quasisein and Außersein are discussed, and a theory of Meinongian objects as "combinatorially possible" entities is presented.
We present a computational analysis of de re, de dicto, and de se belief and knowledge reports. Our analysis solves a problem first observed by Hector-Neri Castañeda, namely, that the simple rule `(A knows that P) implies P' apparently does not hold if P contains a quasi-indexical. We present a single rule, in the context of a knowledge-representation and reasoning system, that holds for all P, including those containing quasi-indexicals. In so doing, we explore the difference between reasoning in a public communication language and in a knowledge-representation language, we demonstrate the importance of representing proper names explicitly, and we provide support for the necessity of considering sentences in the context of extended discourse (for example, written narrative) in order to fully capture certain features of their semantics.
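To see the problem concretely: from "John knows that he* is rich" (where 'he*' marks the quasi-indexical, John's reference to himself), the embedded proposition cannot be detached as it stands, since 'he*' is ill-formed outside the knowledge context. The following toy Python sketch (a simplified stand-in of my own devising, not the paper's actual rule or representation) shows the shape of a detachment rule that maps the quasi-indexical to the narrator's term for the knower:

```python
# Toy sketch of exporting knowledge reports containing quasi-indexicals.
# Illustrative only: this simplified representation and rule are stand-ins,
# not the actual single rule from the paper.

def detach(report):
    """From ('Knows', agent, P), assert P in the narrator's belief space.

    If P contains the quasi-indexical 'he*' (the knower's reference to
    himself), the detached proposition must use the narrator's term for
    the agent, since 'he*' cannot occur outside a belief context.
    """
    relation, agent, p = report
    assert relation == "Knows"
    return tuple(agent if term == "he*" else term for term in p)

# "John knows that he* is rich."
report = ("Knows", "John", ("Rich", "he*"))

# Naive detachment would yield ("Rich", "he*"), which is ill-formed
# outside the knowledge context. The corrected detachment yields:
print(detach(report))   # -> ('Rich', 'John')
```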
Ford’s “Helen Keller Was Never in a Chinese Room” claims that my argument in “How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room” fails because Searle and I use the terms ‘syntax’ and ‘semantics’ differently, hence are at cross purposes. Ford has misunderstood me; this reply clarifies my theory.
Syntactic semantics is a holistic, conceptual-role-semantic theory of how computers can think. But Fodor and Lepore have mounted a sustained attack on holistic semantic theories. However, their major problem with holism (that, if holism is true, then no two people can understand each other) can be fixed by means of negotiating meanings. Syntactic semantics and Fodor and Lepore's objections to holism are outlined; the nature of communication, miscommunication, and negotiation is discussed; Bruner's ideas about the negotiation of meaning are explored; and some observations on a problem for knowledge representation in AI raised by Winston are presented.
Deliberate contextual vocabulary acquisition (CVA) is a reader’s ability to figure out a (not the) meaning for an unknown word from its “context”, without external sources of help such as dictionaries or people. The appropriate context for such CVA is the “belief-revised integration” of the reader’s prior knowledge with the reader’s “internalization” of the text. We discuss unwarranted assumptions behind some classic objections to CVA, and present and defend a computational theory of CVA that we have adapted to a new classroom curriculum designed to help students use CVA to improve their reading comprehension.
This project continues our interdisciplinary research into computational and cognitive aspects of narrative comprehension. Our ultimate goal is the development of a computational theory of how humans understand narrative texts. The theory will be informed by joint research from the viewpoints of linguistics, cognitive psychology, the study of language acquisition, literary theory, geography, philosophy, and artificial intelligence. The linguists, literary theorists, and geographers in our group are developing theories of narrative language and spatial understanding that are being tested by the cognitive psychologists and language researchers in our group, and a computational model of a reader of narrative text is being developed by the AI researchers, based in part on these theories and results and in part on research on knowledge representation and reasoning. This proposal describes the knowledge-representation and natural-language-processing issues involved in the computational implementation of the theory; discusses a contrast between communicative and narrative uses of language and of the relation of the narrative text to the story world it describes; investigates linguistic, literary, and hermeneutic dimensions of our research; presents a computational investigation of subjective sentences and reference in narrative; studies children’s acquisition of the ability to take third-person perspective in their own storytelling; describes the psychological validation of various linguistic devices; and examines how readers develop an understanding of the geographical space of a story. This report is a longer version of a project description submitted to NSF. This document, produced in May 2007, is a LaTeX version of Technical Report 89-07 (Buffalo: SUNY Buffalo Department of Computer Science, August 1989), with slightly...
This paper describes the SNePS knowledge-representation and reasoning system. SNePS is an intensional, propositional, semantic-network processing system used for research in AI. We look at how predication is represented in such a system when it is used for cognitive modeling and natural-language understanding and generation. In particular, we discuss issues in the representation of fictional entities and the representation of propositions from fiction, using SNePS. We briefly survey four philosophical ontological theories of fiction and sketch an epistemological theory of fiction using a story operator and rules for allowing propositions to migrate into and out of story spaces.
Contextual vocabulary acquisition (CVA) is the deliberate acquisition of a meaning for a word in a text by reasoning from context, where “context” includes: (1) the reader’s “internalization” of the surrounding text, i.e., the reader’s “mental model” of the word’s “textual context” (hereafter, “co-text” [3]) integrated with (2) the reader’s prior knowledge (PK), but it excludes (3) external sources such as dictionaries or people. CVA is what you do when you come across an unfamiliar word in your reading, realize that you don’t know what it means, decide that you need to know what it means in order to understand the passage, but there is no one around to ask, and it is not in the dictionary (or you are too lazy to look it up). In such a case, you can try to figure out its meaning “from context”, i.e., from clues in the co-text together with your prior knowledge. Our computational theory of CVA—implemented in the SNePS knowledge-representation and reasoning system [28]—begins with a stored knowledge base containing SNePS representations of relevant PK, inputs SNePS representations of a passage containing an unfamiliar word, and draws inferences from these two (integrated) information sources. When asked to define the word, definition algorithms deductively search the resulting network for information of the sort that might be found in a dictionary definition, outputting a definition frame whose slots are the kinds of features that a definition might contain (e.g., class membership, properties, actions, spatio-temporal information, etc.) and whose slot-fillers contain information gleaned from the network [6–8,20,23,24]. We are investigating ways to make our system more robust, to embed it in a natural-language-processing system, and to incorporate morphological information. Our research group, including reading educators, is also applying our methods to the development...
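As a rough sketch of the definition-algorithm idea (hypothetical code; the actual algorithms deductively search a SNePS network rather than filtering triples, and the facts shown are illustrative values), the search can be pictured as collecting slot-fillers for a definition frame from whatever the integrated knowledge base contains about the unknown word:

```python
# Rough sketch of building a "definition frame" for an unknown word
# from an integrated knowledge base. Hypothetical: the actual system
# deductively searches a SNePS network rather than filtering facts.

def define(word, kb):
    """Collect dictionary-like information about `word` from `kb`.

    kb is a set of (relation, subject, value) triples derived from the
    reader's prior knowledge integrated with the internalized text.
    """
    frame = {"class": set(), "properties": set(), "actions": set()}
    slot_of = {"isa": "class", "property": "properties", "does": "actions"}
    for relation, subject, value in kb:
        if subject == word and relation in slot_of:
            frame[slot_of[relation]].add(value)
    return frame

# Facts about the unfamiliar word 'brachet', gleaned from a narrative
# passage plus prior knowledge (illustrative values only):
kb = {
    ("isa", "brachet", "animal"),
    ("property", "brachet", "white"),
    ("does", "brachet", "hunting"),
}
print(define("brachet", kb))
# {'class': {'animal'}, 'properties': {'white'}, 'actions': {'hunting'}}
```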
We discuss a research project that develops and applies algorithms for computational contextual vocabulary acquisition (CVA): learning the meaning of unknown words from context. We try to unify a disparate literature on the topic of CVA from psychology, first- and second-language acquisition, and reading science, in order to help develop these algorithms. We use the knowledge gained from the computational CVA system to build an educational curriculum for enhancing students’ abilities to use CVA strategies in their reading of science texts at the middle-school and college undergraduate levels. The knowledge gained from case studies of students using our CVA techniques feeds back into further development of our computational theory. Keywords: artificial intelligence, knowledge representation, reading, reasoning, science education, vocabulary acquisition.
A fundamental assumption of Alexius Meinong's 1904 Theory of Objects is the act-content-object analysis of psychological experiences. I suggest that Meinong's theory need not be based on this analysis, but that an adverbial theory might suffice. I then defend the adverbial alternative against an objection raised by Roderick Chisholm, and conclude by presenting an apparently more serious objection based on a paradox discovered by Romane Clark.
A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
A brief introduction to Meinong, his theory of objects, and modern interpretations of it. Sections include: The Theory of Objects, Castañeda's Theory of Guises, Parsons's Theory of Nonexistent Objects, Rapaport's Theory of Meinongian Objects, Routley's Theory of Items.
Héctor-Neri Castañeda-Calderón (December 13, 1924–September 7, 1991) was born in San Vicente Zacapa, Guatemala. He attended the Normal School for Boys in Guatemala City, later called the Military Normal School for Boys, from which he was expelled for refusing to fight a bully; the dramatic story, worthy of being filmed, is told in the “De Re” section of his autobiography, “Self-Profile” (1986). He then attended a normal school in Costa Rica, followed by studies in philosophy at the University of San Carlos, Guatemala. He won a scholarship to the University of Minnesota, where he received his B.A. (1950), M.A. (1952), and Ph.D. (1954), all in philosophy. His dissertation, “The Logical Structure of Moral Reasoning”, was written under the direction of Wilfrid Sellars. He returned to teach in Guatemala, and then received a scholarship to study at Oxford University (1955–1956), after which he took a sabbatical-replacement position in philosophy at Duke University (1956). His first full-time academic appointment was at Wayne State University (1957–1969), where he founded the philosophy journal Noûs (1967, a counter-offer made to him by Wayne State to encourage him to stay there rather than to take the chairmanship of philosophy at the University of Pennsylvania). In 1969, he moved (along with several of his Wayne colleagues) to Indiana University, where he eventually became the Mahlon Powell Professor of Philosophy and, later, its first Dean of Latino Affairs (1978–1981). He remained at Indiana until his death. He was also a visiting professor of philosophy at the University of Texas at Austin (1962–1963) and a fellow at the Center for Advanced Study in the Behavioral Sciences (1981–1982). He received grants and fellowships from the Guggenheim Foundation (1967–1968), the T. Andrew Mellon Foundation, the National Endowment for the Humanities, and the National Science Foundation. He was elected President of the American Philosophical Association Central Division (1979–1980), named to the American Academy of Arts and Sciences (1990), and received the Presidential Medal of Honor from the Government of Guatemala (1991). Castañeda’s philosophical interests spanned virtually the entire spectrum of philosophy, and his theories form a highly interconnected whole.
This essay describes computational semantic networks for a philosophical audience and surveys several approaches to semantic-network semantics. In particular, propositional semantic networks are discussed; it is argued that only a fully intensional, Meinongian semantics is appropriate for them; and several Meinongian systems are presented.
This essay presents and defends a triage theory of grading: An item to be graded should get full credit if and only if it is clearly or substantially correct, minimal credit if and only if it is clearly or substantially incorrect, and partial credit if and only if it is neither of the above; no other (intermediate) grades should be given. Details on how to implement this are provided, and further issues in the philosophy of grading (reasons for and against grading, grading on a curve, and the subjectivity of grading) are discussed.
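The triage rule is simple enough to state as a procedure. Here is a minimal Python sketch (my own illustration; the numeric grade values are hypothetical):

```python
# Minimal sketch of triage grading: only three grades are ever given.
# The grade values (full/partial/minimal credit) are hypothetical.

def triage_grade(clearly_correct: bool, clearly_incorrect: bool) -> int:
    """Return full, minimal, or partial credit; no intermediate grades."""
    FULL, PARTIAL, MINIMAL = 10, 5, 1
    if clearly_correct:
        return FULL          # clearly or substantially correct
    if clearly_incorrect:
        return MINIMAL       # clearly or substantially incorrect
    return PARTIAL           # neither of the above

print(triage_grade(True, False))    # 10: substantially correct answer
print(triage_grade(False, True))    # 1:  substantially incorrect answer
print(triage_grade(False, False))   # 5:  partially correct answer
```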
This essay presents a philosophical and computational theory of the representation of de re, de dicto, nested, and quasi-indexical belief reports expressed in natural language. The propositional Semantic Network Processing System (SNePS) is used for representing and reasoning about these reports. In particular, quasi-indicators (indexical expressions occurring in intentional contexts and representing uses of indicators by another speaker) pose problems for natural-language representation and reasoning systems, because, unlike pure indicators, they cannot be replaced by coreferential NPs without changing the meaning of the embedding sentence. Therefore, the referent of the quasi-indicator must be represented in such a way that no invalid coreferential claims are entailed. The importance of quasi-indicators is discussed, and it is shown that all four of the above categories of belief reports can be handled by a single representational technique using belief spaces containing intensional entities. Inference rules and belief-revision techniques for the system are also examined.
Hauser argues that his pocket calculator (Cal) has certain arithmetical abilities: it seems Cal calculates. That calculating is thinking seems equally untendentious. Yet these two claims together provide premises for a seemingly valid syllogism whose conclusion (Cal thinks) most would deny. He considers several ways to avoid this conclusion, and finds them mostly wanting. Either we ourselves can't be said to think or calculate if our calculation-like performances are judged by the standards proposed to rule out Cal; or the standards (e.g., autonomy and self-consciousness) make it impossible to verify whether anything or anyone (save oneself) meets them. While appeals to the intentionality of thought or the unity of minds provide more credible lines of resistance, available accounts of intentionality and mental unity are insufficiently clear and warranted to provide very substantial arguments against Cal's title to be called a thinking thing. Indeed, considerations favoring granting that title are more formidable than is generally appreciated. Rapaport's comments suggest that, on a strong view of thinking, mere calculating is not thinking (and pocket calculators don't think), but on a weak, but unexciting, sense of thinking, pocket calculators do think. He closes with some observations on the implications of this conclusion.
This is a draft of the written version of comments on a paper by David Cole, presented orally at the American Philosophical Association Central Division meeting in New Orleans, 27 April 1990. Following the written comments are 2 appendices: One contains a letter to Cole updating these comments. The other is the handout from the oral presentation.
I argue that George Nakhnikian's analysis of the logic of cogito propositions (roughly, Descartes's 'cogito' and 'sum') is incomplete. The incompleteness is rectified by showing that disjunctions of cogito propositions with contingent, non-cogito propositions satisfy conditions of incorrigibility, self-certifyingness, and pragmatic consistency; hence, they belong to the class of propositions with whose help a complete characterization of cogito propositions is made possible.
Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters'...
Alexius Meinong developed a notion of defective objects in order to account for various logical and psychological paradoxes. The notion is of historical interest, since it presages recent work on the logical paradoxes by Herzberger and Kripke. But it fails to do the job it was designed for. However, a technique implicit in Meinong's investigation is more successful and can be adapted to resolve a similar paradox discovered by Romane Clark in a revised version of Meinong's Theory of Objects due to Rapaport. One family of paradoxes remains, but it is argued that they are unavoidable and relatively harmless.
Everyone has a different "learning style". (A good introduction to the topic of learning styles is Claxton & Murrell 1987. For more on different learning styles, see the Keirsey Temperament and Character Web Site, William Perry's Scheme of Intellectual and Ethical Development, Holland 1966, Kolb 1984, and Sternberg 1999. For an interesting discussion of some limitations of learning styles from the perspective of teaching styles, see Glenn 2009/2010.) For some online tools targeted at different learning styles, see "100 Helpful Web Tools for Every Kind of Learner".
Terence Parsons's informal theory of intentional objects, their properties, and modes of predication does not adequately reflect ordinary ways of speaking and thinking. Meinongian theories recognizing two modes of predication are defended against Parsons's theory of two kinds of properties. Against Parsons's theory of fictional objects, I argue that no existing entities appear in works of fiction. A formal version of Parsons's theory is presented, and a curious consequence about modes of predication is indicated.