In 1934 a most singular event occurred. Two papers were published on a topic that had (apparently) never before been written about, the authors had never been in contact with one another, and they had (apparently) no common intellectual background that would otherwise account for their mutual interest in this topic. These two papers formed the basis for a movement in logic which is by now by far the most common way of teaching elementary logic, and indeed is perhaps all that is known in any detail about logic by a number of philosophers (especially in North America). This manner of proceeding in logic is called ‘natural deduction’. And in its own way the instigation of this style of logical proof is as important to the history of logic as the discovery of resolution by Robinson in 1965, or the discovery of the logistical method by Frege in 1879, or even the discovery of the syllogistic by Aristotle in the fourth century BC.
(although the FOF, unlike the CNF, is still a theorem). The correct version of Problem 62 is (following the format of (Pelletier, 1986)):

Natural FOF: (∀x)[(Pa ∧ (Px ⊃ Pf(x))) ⊃ Pf(f(x))]
Negated Conclusion CNF: Pa; Px ∨ Pf(f(x)) ∨ ¬Pa
I was asked to develop a course “Philosophy and Cognitive Science” to be taught for the first time in Spring 1995 in the Philosophy Department at the University of Alberta. Since my cognitive science-related interests are focussed more towards philosophy mixed with artificial intelligence (AI) and linguistics than towards (say) neuroscience or anthropology, I decided to slant the course in that direction. The departmental intent was that this should be an upper-level course, but with no specific prerequisite courses. This meant that while there was a “three previous courses in philosophy” prerequisite, the students could not be expected to have taken any particular course, such as (say) a philosophy of mind course or a logic course. Further, I had in mind that the course would be part of an initiative to create an undergraduate program in Cognitive Science at the University of Alberta, and so I encouraged students from linguistics, computer science, and psychology who had taken courses in their department like “Language and Mind”, “Introduction to AI”, and “Introduction to Cognitive Science” to take this course even without the philosophy background. This resulted in the class consisting of about a quarter each philosophy, psychology and computer science majors, with the rest being drawn from linguistics, education, and anthropology. And although this distribution of students arguably lessened the philosophical sophistication of both the lectures and the class discussion, I (and the students, I believe) found the perspectives brought to the topic by these other students to be fascinating and quite provocative.
We report empirical results on factors that influence how people reason with default rules of the form "Most x's have property P", in scenarios that specify information about exceptions to these rules and in scenarios that specify default-rule inheritance. These factors include (a) whether the individual, to which the default rule might apply, is similar to a known exception, when that similarity may explain why the exception did not follow the default, and (b) whether the problem involves classes of naturally occurring kinds or classes of artifacts. We consider how these findings might be integrated into formal approaches to default reasoning and also consider the relation of this sort of qualitative default reasoning to statistical reasoning.
Different researchers use "the philosophy of automated theorem proving" to cover different concepts, indeed, different levels of concepts. Some would count such issues as how to efficiently index databases as part of the philosophy of automated theorem proving. Others wonder about whether formulas should be represented as strings or as trees or as lists, and call this part of the philosophy of automated theorem proving. Yet others concern themselves with what kind of search should be embodied in any automated theorem prover, or to what degree any automated theorem prover should resemble Prolog. Still others debate whether natural deduction or semantic tableaux or resolution is "better", and call this a part of the philosophy of automated theorem proving. Some people wonder whether automated theorem proving should be "human oriented" or "machine oriented": sometimes arguing about whether the internal proof methods should be "human-like" or not, sometimes arguing about whether the generated proof should be output in a form understandable by people, and sometimes arguing about the desirability of human intervention in the process of constructing a proof. There are also those who ask such questions as whether we should even be concerned with completeness or with soundness of a system, or perhaps we should instead look at very efficient (but incomplete) subsystems or look at methods of generating models which might nevertheless validate invalid arguments. And all of these have been viewed as issues in the philosophy of automated theorem proving. Here, I would like to step back from such implementation issues and ask: "What do we really think we are doing when we write an automated theorem prover?"
My reflections are perhaps idiosyncratic, but I do think that they put the different researchers' efforts into a broader perspective, and give us some kind of handle on which directions we ourselves might wish to pursue when constructing (or extending) an automated theorem proving system. A logic is defined to be (i) a vocabulary and formation rules (which tell us what strings of symbols are well-formed formulas in the logic), and (ii) a definition of 'proof' in that system (which tells us the conditions under which an arrangement of formulas in the system constitutes a proof). Historically speaking, definitions of 'proof' have been given in various different manners: the most common have been Hilbert-style (axiomatic), Gentzen-style (consecution, or sequent), Fitch-style (natural deduction), and Beth-style (tableaux).
In this paper we report preliminary results on how people revise or update a previously held set of beliefs. When intelligent agents learn new things which conflict with their current belief set, they must revise their belief set. When the new information does not conflict, they merely must update their belief set. Various AI theories have been proposed to achieve these processes. There are two general dimensions along which these theories differ: whether they are syntactic-based or model-based, and what constitutes a minimal change of beliefs. This study investigates how people update and revise semantically equivalent but syntactically distinct belief sets, both in symbolic-logic problems and in quasi-real-world problems. Results indicate that syntactic form affects belief revision choices. In addition, for the symbolic problems, subjects update and revise semantically-equivalent belief sets identically, whereas for the quasi-real-world problems they both update and revise differently. Further, contrary to earlier studies, subjects are sometimes reluctant to accept that a sentence changes from false to true, but they are willing to accept that it would change from true to false.
A certain direction in cognitive science has been to try to “ground” public language statements in some species of mental representation. A central tenet of this trend is that communication – that is, public language – succeeds (when it does) because the elements of this public language are in some way correlated with mental items of both the speaker and the audience so that the mental state evoked in the audience by the use of that piece of public language is the one that the speaker wanted to evoke. The “meaning”, therefore, of an utterance – and of the parts of an utterance, such as individual sentences and their parts, the individual words, etc. – is, in this view, some mental item. Successful communication requires that there be widespread agreement amongst speakers of the same public language as to the mental entities that are correlated with any particular public words. Such a view of meaning is variously called “internalist” or “cognitive” or “subjectivist” or “solipsistic” or (sometimes) “representationalist” (these terms having, however, further connotations which set them apart from one another in other ways), and can be found in a wide variety of writers who do not agree on many other things. It is opposed to views that take the meaning of an utterance to be an item of “reality,” however defined. In different writers this latter view is called “externalist” or “objectivist” or “realist” or (sometimes) “representationalist,” always with the idea that there is something other (or at least, more) than the mental state of speakers and hearers that determines meaning. The literature is rife with arguments between internalists vs. externalists, subjectivists vs. objectivists, cognitivists vs. realists, on such topics as “truth” and “synonymy” and “twin earth” and “arthritis” (to mention only a few).
Vagueness: an expression is vague if and only if it is possible that it give rise to a “borderline case.” A borderline case is a situation in which the application of a particular expression to a (name of) a particular object does not generate an expression with a definite TRUTH-VALUE. That is, the piece of language in question neither applies to the object nor fails to apply. Although such a formulation leaves it open what the pieces of language might be (whole sentences, individual words, NAMES or SINGULAR TERMS, PREDICATES or GENERAL TERMS), most discussions have focussed on vague general terms and have considered other types of terms to be non-vague. (Exceptions to this have called attention to the possibility of vague objects, thereby making the designation relation for singular terms be vague). The formulation also leaves open the possible causes for the expression not to have a definite truth value. If this indeterminacy is due to there being insufficient information available to determine applicability or non-applicability of the term (that is, we’re convinced the term either does or doesn’t apply, but we just don’t have enough information to determine which), then this is sometimes called “epistemic vagueness.” It is somewhat misleading to call this vagueness, for unlike true vagueness, this epistemic vagueness disappears if more information is brought into the situation. (‘There are 1.89×10⁶ stars in the sky’ is epistemically vague but is not vague in the generally accepted sense of the term). ‘Vagueness’ may also be used to characterize non-linguistic items such as CONCEPTS, MEMORIES, and OBJECTS ... as well as such semi-linguistic items as STATEMENTS and PROPOSITIONS. Many of the issues involved in discussing the topic of vagueness impinge upon other philosophical topics, such as the existence of TRUTH-VALUE GAPS (declarative sentences which are neither TRUE nor FALSE) and the plausibility of MANY-VALUED LOGIC.
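The truth-value gaps and many-valued logics mentioned at the end can be made concrete. Below is a minimal sketch of the strong Kleene three-valued scheme, one standard way of letting a borderline case (“gap”) propagate through the connectives; the function names and the use of `None` for a gap are illustrative choices, not part of this entry.

```python
# Strong Kleene three-valued connectives: True, False, and None for a gap.
def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:   # one false conjunct settles the matter
        return False
    if a is None or b is None:     # otherwise a gap propagates
        return None
    return True

def k_or(a, b):
    if a is True or b is True:     # one true disjunct settles the matter
        return True
    if a is None or b is None:
        return None
    return False

# 'x is bald' is borderline (None): conjoined with a falsehood the whole is
# still False, but conjoined with a truth the sentence inherits the gap.
assert k_and(None, False) is False
assert k_and(True, None) is None
assert k_not(None) is None
```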
In this paper we explore a class of belief update operators, in which the definition of the operator is compositional with respect to the sentence to be added. The goal is to provide an update operator that is intuitive, in that its definition is based on a recursive decomposition of the update sentence’s structure, and that may be reasonably implemented. In addressing update, we first provide a definition phrased in terms of the models of a knowledge base. While this operator satisfies a core group of the benchmark Katsuno Mendelzon update postulates, not all of the postulates are satisfied. Other Katsuno Mendelzon postulates can be obtained by suitably restricting the syntactic form of the sentence for update, as we show. In restricting the syntactic form of the sentence for update, we also obtain a hierarchy of update operators with Winslett’s standard semantics as the most basic interesting approach captured. We subsequently give an algorithm which captures this approach; in the general case the algorithm is exponential, but with some not unreasonable assumptions we obtain an algorithm that is linear in the size of the knowledge base. Hence the resulting approach has much better complexity characteristics than other operators in some situations. We also explore other compositional belief change operators: erasure is developed as a dual operator to update; we show that a forget operator is definable in terms of update; and we give a definition of the compositional revision operator. We obtain that compositional revision, under the most natural definition, yields the Satoh revision operator.
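Winslett’s standard (“possible models”) semantics, which the abstract takes as its base case, can be sketched in a few lines: update each model of the knowledge base independently, moving it to the closest models of the update sentence, with closeness measured by symmetric difference on the atoms. This is a generic illustration of that semantics under a tiny two-atom language, not the paper’s compositional operator; `ATOMS`, `models`, and `update` are names chosen here for the sketch, and the update sentence is assumed satisfiable.

```python
from itertools import combinations

ATOMS = ("p", "q")

def models(formula):
    """All valuations (frozensets of true atoms) satisfying `formula`,
    where `formula` is a predicate on a set of true atoms."""
    universe = [frozenset(s) for r in range(len(ATOMS) + 1)
                for s in combinations(ATOMS, r)]
    return {v for v in universe if formula(v)}

def update(kb, mu):
    """Winslett-style update: each model of kb is replaced by the models
    of mu at minimal symmetric difference (Hamming distance) from it."""
    result = set()
    for w in models(kb):
        dist = {v: len(w ^ v) for v in models(mu)}
        d_min = min(dist.values())
        result |= {v for v, d in dist.items() if d == d_min}
    return result

# KB: p and q.  Update with: not-p.  Only p is given up; q survives.
print(update(lambda v: "p" in v and "q" in v, lambda v: "p" not in v))
# {frozenset({'q'})}
```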
Roger Gibson has achieved as much as anyone else, indeed, more, in presenting and defending Quine’s philosophy. It is no surprise that the great man W.V. Quine himself said that in reading Gibson he gained a welcome perspective on his own work. His twin books The Philosophy of W.V. Quine and Enlightened Empiricism have no rivals. We are all indebted to Roger. The essay that follows is intended not only to honor him but also to continue a theme that runs throughout his (and Quine’s) work, namely, the seamless division between science and philosophy. The techniques we invoke are consonant with the naturalistic conception of language that is a central theme of Professor Gibson’s writings, namely, that language is “a social art to be studied empirically” (Enlightened Empiricism, p. 64).
Larry Horn is justifiably famous for his work on the semantics of the English conjunction or and both its relationship to the formal logic truth functions ∨ and ⊻ (“inclusive” and “exclusive” disjunction respectively) and its relationship to the ways people employ or in natural discourse. These interests have been present since his 1972 dissertation, where he argued for a “scalar implicature-based” account of many of these relationships as opposed to a presuppositional account. They have surfaced in his “Greek Grice” paper (Horn 1973) as well as in his Negation book (Horn 1989) and his recent “Border Wars” paper (Horn, forthcoming) where he defends the position that there are two types of implicatures at work here: Q-implicatures based on Grice’s first maxim of Quantity (“Say Enough”) and R-implicatures based on Grice’s second maxim of Quantity (“Don’t Say Too Much”). In a nutshell, the idea is that when a speaker employs a sentence with a disjunction, the meaning (that is, the semantic value) of the or is inclusive. With careful and judicious use of the Q- and R-implicatures, Larry’s theory allows the hearer (often) to infer that the speaker wanted to convey an exclusive disjunction.
Many different kinds of items have been called vague, and so-called for a variety of different reasons. Traditional wisdom distinguishes three views of why one might apply the epithet "vague" to an item; these views are distinguished by what they claim the vagueness is due to. One type of vagueness, The Good, locates vagueness in language, or in some representational system -- for example, it might say that certain predicates have a range of applicability. On one side of the range are those cases to which the predicate clearly applies and on the other side of the range are those cases where the negation of the predicate clearly applies. But there is no sharp cutoff place along the range where the one range turns into the other. Most examples of The Good are those terms which describe some continuum -- such as bald describes a continuum of the ratio of hairs per cm2 on the head. But not all work this way. Alston (1968) points to terms like religion invoking a number of criteria the joint applicability of which ensures that the activity in question is a religion and the failure of all to apply ensures that it is not a religion. But when only some middling number of the criteria are fulfilled, the term religion neither applies nor fails to apply. Some accounts of "family resemblance" and "open texture" might also fit this picture. Such a view is often called a "representational account of vagueness". Another conception of vagueness, The Bad, locates vagueness as a property of discourses, of memories, and of certain philosophers and their papers, etc. This sort of vagueness occurs when the information available does not allow one to tell, for example, that a certain sentence is true, but also does not allow one to determine that it is false. It occurs when the information available does not allow one to claim that a predicate applies to a name, but also does not allow one to claim that the negation of the predicate applies to that name.
1: Linguistic and Epistemological Background
1.1: Generic Reference vs. Generic Predication
1.2: Why are there any Generic Sentences at all?
1.3: Generics and Exceptions, Two Bad Attitudes
1.4: Exceptions and Generics, Some Other Attitudes
1.5: Generics and Intensionality
1.6: Goals of an Analysis of Generic Sentences
1.7: A Little Notation
1.8: Generics vs. Explicit Statements of Regularities
previous theories and the relevance of those criticisms to the new accounts. Additionally, we have included a new section at the end, which gives some directions to literature outside of formal semantics in which the notion of mass has been employed. We looked at work on mass expressions in psycholinguistics and computational linguistics here, and we discussed some research in the history of philosophy and in metaphysics that makes use of the notion of mass.
The Society for Exact Philosophy was founded in 1970 at a meeting held at McGill University in Montreal on 4-5 November, which was organised by Mario Bunge. Funding for the meeting was provided by the International Union of History and Philosophy of Science.
In an interesting experimental study, Bonini et al. (1999) present partial support for truth-gap theories of vagueness. We say this despite their claim to find theoretical and empirical reasons to dismiss gap theories and despite the fact that they favor an alternative, epistemic account, which they call ‘vagueness as ignorance’. We present yet more experimental evidence that supports gap theories, and argue for a semantic/pragmatic alternative that unifies the gappy supervaluationary approach together with its glutty relative, the subvaluationary approach.
A generic statement is a type of generalization that is made by asserting that a "kind" has a certain property. For example we might hear that marshmallows are sweet. Here, we are talking about the "kind" marshmallow and assert that individual instances of this kind have the property of being sweet. Almost all of our common sense knowledge about the everyday world is put in terms of generic statements. What can make these generic sentences be true even when there are exceptions? A mass term is one that does not "divide its reference;" the word water is a mass term; the word dog is a count term. In a certain vicinity, one can count and identify how many dogs there are, but it doesn't make sense to do that for water--there just is water present. The philosophical literature is rife with examples concerning how a thing can be composed of a mass, such as a statue being composed of clay. Both generic statements and mass terms have led philosophers, linguists, semanticists, and logicians to search for theories to accommodate these phenomena and relationships. The contributors to this interdisciplinary volume study the nature and use of generics and mass terms. Noted researchers in the psychology of language use material from the investigation of human performance and child-language learning to broaden the range of options open for formal semanticists in the construction of their theories, and to give credence to some of their earlier postulations--for instance, concerning different types of predications that are available for true generics and for the role of object recognition in the development of count vs. mass terms. Relevant data are also described by investigating the ways children learn these sorts of linguistic items: children can learn how to use generic statements correctly at an early age, and children are adept at individuating objects and distinguishing them from the stuff of which they are made also at an early age.
In ‘On Denoting’ and to some extent in ‘Review of Meinong and Others, Untersuchungen zur Gegenstandstheorie und Psychologie’, published in the same issue of Mind (Russell, 1905a,b), Russell presents not only his famous elimination (or contextual definition) of definite descriptions, but also a series of considerations against understanding definite descriptions as singular terms. At the end of ‘On Denoting’, Russell believes he has shown that all the theories that do treat definite descriptions as singular terms fall logically short: Meinong’s, Mally’s, his own earlier (1903) theory, and Frege’s. (He also believes that at least some of them fall short on other grounds—epistemological and metaphysical—but we do not discuss these criticisms except in passing). Our aim in the present paper is to discuss whether his criticisms actually refute Frege’s theory. We first attempt to specify just what Frege’s theory is and present the evidence that has moved scholars to attribute one of three different theories to Frege in this area. We think that each of these theories has some claim to be Fregean, even though they are logically quite different from each other. This raises the issue of determining Frege’s attitude towards these three theories. We consider whether he changed his mind and came to replace one theory with another, or whether he perhaps thought that the different theories applied to different realms, for example, to natural language versus a language for formal logic and arithmetic. We do not come to any hard and fast conclusion here, but instead just note that all these theories treat definite descriptions as singular terms, and that Russell proceeds as if he has refuted them all.
After taking a brief look at the formal properties of the Fregean theories (particularly the logical status of various sentences containing non-proper definite descriptions) and comparing them to Russell’s theory in this regard, we turn to Russell’s actual criticisms in the above-mentioned articles to examine the extent to which the criticisms hold.
Psychologism in logic is the doctrine that the semantic content of logical terms is in some way a feature of human psychology. We consider the historically influential version of the doctrine, Psychological Individualism, and the many counter-arguments to it. We then propose and assess various modifications to the doctrine that might allow it to avoid the classical objections. We call these Psychological Descriptivism, Teleological Cognitive Architecture, and Ideal Cognizers. These characterizations give some order to the wide range of modern views that are seen as psychologistic because of one or another feature. Although these can avoid some of the classic objections to psychologism, some still hold.
In an earlier paper entitled Synonymous Logics, the authors attempted to show that there are two modal logics such that each is exactly translatable into the other, but they are not translationally equivalent. Unfortunately, there is an error in the proof of this result. The present paper provides a new example of two such logics, and a proof of the result claimed in the earlier paper.
Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion “by default” on the grounds that we have no information which would make us doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and ordinary situations in which we find ourselves. Formally speaking, ‘nonmonotonic reasoning’ refers to argumentation in which one uses certain information to reach a conclusion, but where it is possible that adding some further information to those very same premises could make one want to retract the original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning. Generally speaking, default statements are said to be true about the class of objects they describe, despite the acknowledged existence of “exceptional instances” of the class. In the absence of explicit information that an object is one of the exceptions we are enjoined to apply the default statement to the object. But further information may later tell us that the object is in fact one of the exceptions. So this is one of the points where nonmonotonicity resides in default reasoning.
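The nonmonotonicity described above can be shown with the stock birds-fly example: a default rule licenses a conclusion unless the individual is a known exception, so adding information can retract a conclusion drawn earlier from the very same premises. A minimal sketch (illustrative only; the individual and predicate names are invented for the example):

```python
def concludes_flies(individual, known_exceptions):
    """Apply the default 'birds typically fly' unless the individual
    is explicitly known to be an exception."""
    return individual not in known_exceptions

# With no exception information, we conclude by default that Tweety flies.
assert concludes_flies("tweety", known_exceptions=set()) is True

# Learning that Tweety is an exception (a penguin, say) retracts the
# conclusion even though all original premises remain: nonmonotonicity.
assert concludes_flies("tweety", known_exceptions={"tweety"}) is False
```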
Some utterances of sentences such as ‘Every student failed the midterm exam’ and ‘There is no beer’ are widely held to be true in a conversation despite the facts that not every student in the world failed the midterm exam and that there is, in fact, some beer somewhere. For instance, the speaker might be talking about some particular course, or about his refrigerator. Stanley and Szabó (in Mind and Language v. 15, 2000) consider many different approaches to how contextual information might give meaning to these ‘restricted quantifier domains’, and find all of them but one wanting. The present paper argues that their considerations against one of these other theories, considerations that turn on notions of compositionality, are incorrect.
This paper discusses the general problem of translation functions between logics, given in axiomatic form, and in particular, the problem of determining when two such logics are "synonymous" or "translationally equivalent." We discuss a proposed formal definition of translational equivalence, show why it is reasonable, and also discuss its relation to earlier definitions in the literature. We also give a simple criterion for showing that two modal logics are not translationally equivalent, and apply this to well-known examples. Some philosophical morals are drawn concerning the possibility of having two logical systems that are "empirically distinct" but are both translationally equivalent to a common logic.
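The kind of translation function at issue can be illustrated with a classic example from the literature: the Gödel translation embedding intuitionistic propositional logic into the modal logic S4. The sketch below (formulas as nested tuples; the representation is an assumption of this illustration, and the translation shown is the classic one, not this paper's definition) shows the characteristic recursive, connective-by-connective shape of such functions.

```python
def godel(f):
    """Recursively translate an intuitionistic formula into S4."""
    if isinstance(f, str):                 # atom p  becomes  []p
        return ("box", f)
    op = f[0]
    if op in ("and", "or"):                # conjunction/disjunction: componentwise
        return (op, godel(f[1]), godel(f[2]))
    if op == "imp":                        # A -> B  becomes  [](A* -> B*)
        return ("box", ("imp", godel(f[1]), godel(f[2])))
    if op == "not":                        # ~A  becomes  []~A*
        return ("box", ("not", godel(f[1])))
    raise ValueError(f"unknown connective: {op}")

print(godel(("imp", "p", "q")))
# ('box', ('imp', ('box', 'p'), ('box', 'q')))
```

Questions of synonymy then concern whether two logics admit such translations in both directions that compose to the identity (up to provable equivalence).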
In this essay I will consider two theses that are associated with Frege, and will investigate the extent to which Frege really believed them. Much of what I have to say will come as no surprise to scholars of the historical Frege. But Frege is not only a historical figure; he also occupies a site on the philosophical landscape that has allowed his doctrines to seep into the subconscious water table. And scholars in a wide variety of different scholarly establishments then sip from these doctrines. I believe that some Frege-interested philosophers at various of these establishments might find my conclusions surprising. Some of these philosophical establishments have arisen from an educational milieu in which Frege is associated with some specific doctrine at the expense of not even being aware of other milieux where other specific doctrines are given sole prominence. The two theses which I will discuss illustrate this point. Each of them is called Frege's Principle, but by philosophers from different milieux. By calling them milieux I do not want to convey the idea that they are each located at some specific socio-politico-geographico-temporal location. Rather, it is a matter of their each being located at different places on the intellectual landscape. For this reason one might (and I sometimes will) call them (interpretative) traditions.
The central concern of Knowledge in a Social World is to restore the notion of Truth to the rightful place of glory that it had before the onslaught of those pragmatic, cultural-studying, social constructing, critical legalistic and feministic postmodernists (PoMo’s, for short). As G sees it, these PoMo’s have never put forward any “real” arguments for their veriphobia; and, well, how could they, since their position is committed to the “denial of Truth” and hence committed to denying that there is any such thing as a valid or a sound argument. Instead, according to G, they have artfully inculcated various types of considerations so that a defender of Truth now must swim against the entire flow of academe to even stand a chance of making his or her voice heard. G finds six overlapping currents in this wash of anti-veritas ideas, listing them in his §1.3 and devoting a further section to each of them in turn (§1.4–§1.9). These currents spring up in various later parts of the book for further discussion, but even in these places G refers the reader back to the material in Chapter 1 as refuting them. (Insofar as a current that is not based on any consideration of Truth, validity, or soundness is susceptible of refutation.) I think it is fair to say that G does not really intend that his comments about the anti-veritas ideas will be taken as definitive by the veriphobes, but rather that he believes the strength of his positive account in the rest of the book (which accords Truth a central role) will sway all neutral readers into a belief of the usefulness of Truth in an account of social knowledge. Still, I think many readers…even veriphiles…will find the considerations that G brings against the six currents to be disappointingly short. I want to consider some of the currents in the stream of PoMo ideas. As I said, G’s six currents overlap at various points, and each of the currents has sub-flows. This means that we might put....
In (1991), Meinwald initiated a major change of direction in the study of Plato’s Parmenides and the Third Man Argument. On her conception of the Parmenides, Plato’s language systematically distinguishes two types or kinds of predication, namely, predications of the kind ‘x is F pros ta alla’ and ‘x is F pros heauto’. Intuitively speaking, the former is the common, everyday variety of predication, which holds when x is any object (perceptible object or Form) and F is a property which x exemplifies or instantiates in the traditional sense. The latter is a special mode of predication which holds when x is a Form and F is a property which is, in some sense, part of the nature of that Form. Meinwald (1991, p. 75, footnote 18) traces the discovery of this distinction in Plato’s work to Frede (1967), who marks the distinction between pros allo and kath’ hauto predications by placing subscripts on the copula ‘is’.
Philosophy of linguistics is the philosophy of science as applied to linguistics. This differentiates it sharply from the philosophy of language, traditionally concerned with matters of meaning and reference.
Natural deduction is the type of logic most familiar to current philosophers, and indeed is all that many modern philosophers know about logic. Yet natural deduction is a fairly recent innovation in logic, dating from Gentzen and Jaśkowski in 1934. This article traces the development of natural deduction from the view that these founders embraced to the widespread acceptance of the method in the 1960s. I focus especially on the different choices made by writers of elementary textbooks, the standard conduits of the method to a generation of philosophers, with an eye to determining what the ‘essential characteristics’ of natural deduction are.
We investigate the notion of relevance as it pertains to ‘commonsense’, subjunctive conditionals. Relevance is taken here as a relation between a property (such as having a broken wing) and a conditional (such as birds typically fly). Specifically, we explore a notion of ‘causative’ relevance, distinct from ‘evidential’ relevance found, for example, in probabilistic approaches. A series of postulates characterising a minimal, parsimonious concept of relevance is developed. Along the way we argue that no purely logical account of relevance (even at the metalevel) is possible. Finally, and with minimal restrictions, an explicit definition that agrees with the postulates is given.
Although resolution-based inference is perhaps the industry standard in automated theorem proving, there have always been systems that employed a different format. For example, the Logic Theorist of 1957 produced proofs by using an axiomatic system, and the proofs it generated would be considered legitimate axiomatic proofs; Wang’s systems of the late 1950’s employed a Gentzen-sequent proof strategy; Beth’s systems written about the same time employed his semantic tableaux method; and Prawitz’s systems of again about the same time are often said to employ a natural deduction format. [See Newell, et al (1957), Beth (1958), Wang (1960), and Prawitz et al (1960)]. Like sequent proof systems and tableaux proof systems, natural deduction systems retain...
The Principle of Semantic Compositionality (sometimes called Frege's Principle) is the principle that the meaning of a (syntactically complex) whole is a function only of the meanings of its (syntactic) parts together with the manner in which these parts were combined. This principle has been extremely influential throughout the history of formal semantics; it has had a tremendous impact upon modern linguistics ever since Montague Grammars became known; and it has more recently shown up as a guiding principle for a certain direction in cognitive science. Despite the fact that The Principle is vague or underspecified at a number of points — such as what meaning is, what counts as a part, what counts as a syntactic complex, what counts as combination — this has not stopped some people from viewing The Principle as obviously true, true almost by definition. And it has not stopped other people from viewing The Principle as false, almost pernicious in its effect. And some of these latter theorists think that it is an empirically false principle while others think of it as a methodologically wrong-headed way to proceed.
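The Principle can be shown in miniature with a toy semantics in which the "meaning" of a complex expression is computed only from the meanings of its parts plus the mode of combination. The lexicon, tree representation, and arithmetic domain below are invented purely for illustration.

```python
# Toy compositional semantics: lexical meanings plus function application.
LEXICON = {"two": 2, "three": 3, "plus": lambda x, y: x + y}

def meaning(tree):
    """`tree` is either a word (string) or a triple (functor, arg1, arg2).
    The meaning of the whole depends only on the meanings of the parts
    and how they are combined: the Principle in action."""
    if isinstance(tree, str):
        return LEXICON[tree]
    op, left, right = tree
    return LEXICON[op](meaning(left), meaning(right))

assert meaning(("plus", "two", "three")) == 5
```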
THINKER is an automated natural deduction first-order theorem proving program. This paper reports on how it was adapted so as to prove theorems in modal logic. The method employed is an indirect semantic method, obtained by considering the semantic conditions involved in being a valid argument in these modal logics. The method is extended from propositional modal logic to predicate modal logic, and issues concerning the domain of quantification and existence in a world's domain are discussed. Finally, we look at the very interesting issues involved with adding identity to the theorem prover in the realm of modal predicate logic. Various alternatives are discussed.