In an interesting experimental study, Bonini et al. (1999) present partial support for truth-gap theories of vagueness. We say this despite their claim to find theoretical and empirical reasons to dismiss gap theories, and despite the fact that they favor an alternative, epistemic account, which they call ‘vagueness as ignorance’. We present further experimental evidence that supports gap theories, and argue for a semantic/pragmatic alternative that unifies the gappy supervaluational approach with its glutty relative, the subvaluational approach.
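The gap/glut contrast the abstract trades on can be sketched in a few lines. This is a minimal illustration, not anything from the paper: the set of admissible precisifications of ‘tall’ and the height thresholds are hypothetical.

```python
# Hypothetical classical "sharpenings" of the vague predicate 'tall'.
precisifications = [
    lambda h: h >= 178,
    lambda h: h >= 180,
    lambda h: h >= 182,
]

def supertrue(h):
    """Supervaluational (gap) truth: true on EVERY precisification."""
    return all(p(h) for p in precisifications)

def subtrue(h):
    """Subvaluational (glut) truth: true on SOME precisification."""
    return any(p(h) for p in precisifications)

# A borderline height is neither supertrue nor superfalse (a gap),
# while both 'tall' and 'not tall' come out subtrue (a glut).
print(supertrue(180), subtrue(180))   # False True
print(supertrue(185), subtrue(185))   # True True
```

The two notions agree on clear cases and come apart exactly on the borderline ones, which is the structural kinship the abstract exploits.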
In (1991), Meinwald initiated a major change of direction in the study of Plato’s Parmenides and the Third Man Argument. On her conception of the Parmenides, Plato’s language systematically distinguishes two types or kinds of predication, namely, predications of the kind ‘x is F pros ta alla’ and ‘x is F pros heauto’. Intuitively speaking, the former is the common, everyday variety of predication, which holds when x is any object (perceptible object or Form) and F is a property which x exemplifies or instantiates in the traditional sense. The latter is a special mode of predication which holds when x is a Form and F is a property which is, in some sense, part of the nature of that Form. Meinwald (1991, p. 75, footnote 18) traces the discovery of this distinction in Plato’s work to Frede (1967), who marks the distinction between pros allo and kath’ hauto predications by placing subscripts on the copula ‘is’.
This paper discusses the general problem of translation functions between logics, given in axiomatic form, and in particular, the problem of determining when two such logics are "synonymous" or "translationally equivalent." We discuss a proposed formal definition of translational equivalence, show why it is reasonable, and also discuss its relation to earlier definitions in the literature. We also give a simple criterion for showing that two modal logics are not translationally equivalent, and apply this to well-known examples. Some philosophical morals are drawn concerning the possibility of having two logical systems that are "empirically distinct" but are both translationally equivalent to a common logic.
Gentzen’s and Jaśkowski’s formulations of natural deduction are logically equivalent in the normal sense of those words. However, Gentzen’s formulation more straightforwardly lends itself both to a normalization theorem and to a theory of “meaning” for connectives. The present paper investigates cases where Jaśkowski’s formulation seems better suited. These cases range from the phenomenology and epistemology of proof construction to the ways to incorporate novel logical connectives into the language. We close with a demonstration of this latter aspect by considering a Sheffer function for intuitionistic logic.
previous theories and the relevance of those criticisms to the new accounts. Additionally, we have included a new section at the end, which gives some directions to literature outside of formal semantics in which the notion of mass has been employed. Here we look at work on mass expressions in psycholinguistics and computational linguistics, and we discuss some research in the history of philosophy and in metaphysics that makes use of the notion of mass.
Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion “by default”, on the grounds that we have no information which would make us doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and in the ordinary situations in which we find ourselves. Formally speaking, ‘nonmonotonic reasoning’ refers to argumentation in which one uses certain information to reach a conclusion, but where it is possible that adding some further information to those very same premises could make one want to retract the original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning. Generally speaking, default statements are said to be true of the class of objects they describe, despite the acknowledged existence of “exceptional instances” of the class. In the absence of explicit information that an object is one of the exceptions, we are enjoined to apply the default statement to the object. But further information may later tell us that the object is in fact one of the exceptions. This is one of the points where nonmonotonicity resides in default reasoning.
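The nonmonotonic pattern just described, where a conclusion drawn by default is retracted once exception information arrives, can be sketched minimally. The ‘birds typically fly’ rule and the penguin exception are the stock textbook illustration, not anything specific to the paper:

```python
def flies(facts):
    """Apply the default 'birds typically fly' unless an exception is known.

    facts: a set of strings recording what we currently know of the object.
    """
    if "bird" not in facts:
        return None                  # the default rule does not apply at all
    if "penguin" in facts:           # explicit information: a known exception
        return False
    return True                      # conclude 'flies' by default

print(flies({"bird"}))               # True  -- drawn by default
print(flies({"bird", "penguin"}))    # False -- the same premises plus more
                                     #          information retract the conclusion
```

Adding a premise shrinks the set of conclusions, which is exactly the failure of monotonicity the abstract points to.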
Natural deduction is the type of logic most familiar to current philosophers, and indeed is all that many modern philosophers know about logic. Yet natural deduction is a fairly recent innovation in logic, dating from Gentzen and Jaśkowski in 1934. This article traces the development of natural deduction from the view that these founders embraced to the widespread acceptance of the method in the 1960s. I focus especially on the different choices made by writers of elementary textbooks, the standard conduits of the method to a generation of philosophers, with an eye to determining what the “essential characteristics” of natural deduction are.
1: Linguistic and Epistemological Background
1.1: Generic Reference vs. Generic Predication
1.2: Why are there any Generic Sentences at all?
1.3: Generics and Exceptions, Two Bad Attitudes
1.4: Exceptions and Generics, Some Other Attitudes
1.5: Generics and Intensionality
1.6: Goals of an Analysis of Generic Sentences
1.7: A Little Notation
1.8: Generics vs. Explicit Statements of Regularities
The Principle of Semantic Compositionality (sometimes called Frege’s Principle) is the principle that the meaning of a (syntactically complex) whole is a function only of the meanings of its (syntactic) parts together with the manner in which these parts were combined. This principle has been extremely influential throughout the history of formal semantics; it has had a tremendous impact upon modern linguistics ever since Montague Grammars became known; and it has more recently shown up as a guiding principle for a certain direction in cognitive science. Despite the fact that The Principle is vague or underspecified at a number of points — such as what meaning is, what counts as a part, what counts as a syntactic complex, what counts as combination — this has not stopped some people from viewing The Principle as obviously true, true almost by definition. And it has not stopped other people from viewing The Principle as false, almost pernicious in its effect. And some of these latter theorists think that it is an empirically false principle while others think of it as a methodologically wrong-headed way to proceed.
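As a toy illustration of what The Principle demands (an illustration only, not anything from the article), here is a sketch in which every complex meaning is computed solely from the meanings of the parts plus their mode of combination. The lexicon and the tuple "meanings" are hypothetical stand-ins:

```python
# Lexicon: meanings of the atomic parts (hypothetical toy denotations).
lexicon = {
    "Ann": "ann",
    "Bob": "bob",
    # A transitive verb denotes a function from objects to VP meanings.
    "admires": lambda obj: lambda subj: (subj, "admires", obj),
}

def interpret(tree):
    """Meaning of a tree depends only on its parts' meanings and structure."""
    if isinstance(tree, str):
        return lexicon[tree]               # atomic part: look up its meaning
    func, arg = tree                       # the one mode of combination here:
    return interpret(func)(interpret(arg)) # function application

# "Ann admires Bob": the VP ('admires' + 'Bob') combines with the subject.
print(interpret((("admires", "Bob"), "Ann")))   # ('ann', 'admires', 'bob')
```

Nothing outside the lexicon and the combination rule enters the computation, which is the content of the compositionality claim in this toy setting.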
In this essay I will consider two theses that are associated with Frege, and will investigate the extent to which Frege really believed them. Much of what I have to say will come as no surprise to scholars of the historical Frege. But Frege is not only a historical figure; he also occupies a site on the philosophical landscape that has allowed his doctrines to seep into the subconscious water table. And scholars in a wide variety of different scholarly establishments then sip from these doctrines. I believe that some Frege-interested philosophers at various of these establishments might find my conclusions surprising. Some of these philosophical establishments have arisen from an educational milieu in which Frege is associated with some specific doctrine at the expense of not even being aware of other milieux where other specific doctrines are given sole prominence. The two theses which I will discuss illustrate this point. Each of them is called Frege’s Principle, but by philosophers from different milieux. By calling them milieux I do not want to convey the idea that they are each located at some specific socio-politico-geographico-temporal location. Rather, it is a matter of their each being located at different places on the intellectual landscape. For this reason one might (and I sometimes will) call them (interpretative) traditions.
Average‐NPs, such as the one in the title of this paper, have been claimed to be ‘linguistically identical’ to any other definite‐NPs but at the same time to be ‘semantically inconsistent’ with these other definite‐NPs. To some this is an ironclad proof of the irrelevance of semantics to linguistics. We argue that both of the initial claims are wrong: average‐NPs are not ‘linguistically identical’ to other definite‐NPs but instead show a number of interesting divergences, and we provide a plausible semantic account for them that is not ‘semantically inconsistent’ with the account afforded other definite‐NPs but in fact blends quite nicely with one standard account of the semantics for NPs.
Strawson described ‘descriptive metaphysics’, Bach described ‘natural language metaphysics’, Sapir and Whorf describe, well, Sapir-Whorfianism. And there are other views concerning the relation between correct semantic analysis of linguistic phenomena and the “reality” that is supposed to be thereby described. I think some considerations from the analyses of the mass-count distinction can shed some light on that very dark topic.
Philosophy of linguistics is the philosophy of science as applied to linguistics. This differentiates it sharply from the philosophy of language, traditionally concerned with matters of meaning and reference.
Fuzzy logics are systems of logic with infinitely many truth values. Such logics have been claimed to have an extremely wide range of applications in linguistics, computer technology, psychology, etc. In this note, we canvass the known results concerning infinitely many valued logics; make some suggestions for alterations of the known systems in order to accommodate what modern devotees of fuzzy logic claim to desire; and we prove some theorems to the effect that there can be no fuzzy logic which will do what its advocates want. Finally, we suggest ways to accommodate these desires in finitely many valued logics.
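For readers unfamiliar with the systems at issue, here is a minimal sketch of one standard choice of fuzzy connectives over [0,1] (the min/max/1-minus connectives; this is an illustrative assumption on our part, since the note surveys several such systems):

```python
# Standard (Zadeh-style) fuzzy connectives on truth values in [0, 1].
def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

# With intermediate truth values, some classical laws fail by degrees:
a = 0.5
print(disj(a, neg(a)))   # 0.5 -- excluded middle holds only to degree 0.5
print(conj(a, neg(a)))   # 0.5 -- non-contradiction likewise fails to reach 0
```

With only two values (0 and 1) these definitions collapse back into classical negation, conjunction, and disjunction, which is why borderline values are where the interesting behavior lives.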
We investigate the notion of relevance as it pertains to ‘commonsense’, subjunctive conditionals. Relevance is taken here as a relation between a property (such as having a broken wing) and a conditional (such as ‘birds typically fly’). Specifically, we explore a notion of ‘causative’ relevance, distinct from the ‘evidential’ relevance found, for example, in probabilistic approaches. A series of postulates characterising a minimal, parsimonious concept of relevance is developed. Along the way we argue that no purely logical account of relevance (even at the metalevel) is possible. Finally, and with minimal restrictions, an explicit definition that agrees with the postulates is given.
This volume showcases an interplay between leading philosophical and linguistic semanticists on the one side, and leading cognitive and developmental psychologists on the other side. The topic is a class of outstanding questions in the semantic and logical theories of generic statements and statements that employ mass terms, approached by looking to the cognitive abilities of speakers and of child language-learners.
In 1934 a most singular event occurred. Two papers were published on a topic that had (apparently) never before been written about; the authors had never been in contact with one another, and they had (apparently) no common intellectual background that would otherwise account for their mutual interest in this topic. These two papers formed the basis for a movement in logic which is by now the most common way of teaching elementary logic by far, and indeed is perhaps all that is known in any detail about logic by a number of philosophers (especially in North America). This manner of proceeding in logic is called ‘natural deduction’. And in its own way the instigation of this style of logical proof is as important to the history of logic as the discovery of resolution by Robinson in 1965, or the discovery of the logistical method by Frege in 1879, or even the discovery of the syllogistic by Aristotle in the fourth century BC.
In ‘On Denoting’ and to some extent in ‘Review of Meinong and Others, Untersuchungen zur Gegenstandstheorie und Psychologie’, published in the same issue of Mind (Russell, 1905a,b), Russell presents not only his famous elimination (or contextual definition) of definite descriptions, but also a series of considerations against understanding definite descriptions as singular terms. At the end of ‘On Denoting’, Russell believes he has shown that all the theories that do treat definite descriptions as singular terms fall logically short: Meinong’s, Mally’s, his own earlier (1903) theory, and Frege’s. (He also believes that at least some of them fall short on other grounds—epistemological and metaphysical—but we do not discuss these criticisms except in passing.) Our aim in the present paper is to discuss whether his criticisms actually refute Frege’s theory. We first attempt to specify just what Frege’s theory is and present the evidence that has moved scholars to attribute one of three different theories to Frege in this area. We think that each of these theories has some claim to be Fregean, even though they are logically quite different from each other. This raises the issue of determining Frege’s attitude towards these three theories. We consider whether he changed his mind and came to replace one theory with another, or whether he perhaps thought that the different theories applied to different realms, for example, to natural language versus a language for formal logic and arithmetic. We do not come to any hard and fast conclusion here, but instead just note that all these theories treat definite descriptions as singular terms, and that Russell proceeds as if he has refuted them all.
After taking a brief look at the formal properties of the Fregean theories (particularly the logical status of various sentences containing nonproper definite descriptions) and comparing them to Russell’s theory in this regard, we turn to Russell’s actual criticisms in the above-mentioned articles to examine the extent to which the criticisms hold.
Simple mass nouns are words like ‘water’, ‘furniture’ and ‘gold’. We can form complex mass noun phrases such as ‘dirty water’, ‘leaded gold’ and ‘green grass’. I do not propose to discuss the problems in giving a characterization of the words that are mass versus those that are not. For the purposes of this paper I shall make the following decrees: (a) nothing that is not a noun or noun phrase can be mass, (b) no abstract noun phrases are considered mass, (c) words like ‘thing’, ‘entity’ and ‘object’ are not mass, (d) I shall not consider such words as ‘stuff’, ‘substance’ or ‘matter’, (e) measures on mass nouns (like ‘gallon of gasoline’, ‘blade of grass’, etc.) are not considered, (f) plurals of count terms are not considered mass. Within these limitations, we can say generally that mass noun phrases are those phrases that ‘much’ can be prefixed to, but ‘many’ cannot be prefixed to, without anomaly. Semantically, such phrases usually have the property of collectiveness - they are true of any sum of things of which they are true; and of divisiveness - they are true of any part (down to a certain limit) of things of which they are true. All of this, however, is only ‘generally speaking’ - I shall mostly use only the simple examples given above and ignore the problems in giving a complete characterization of mass nouns. In the paper I want to discuss some problems involved in casting English sentences containing mass nouns into some artificial language; but in order to do this we should have some anchoring framework on which to justify or reject a given proposal. The problem of finding an adequate language can be viewed as a case of translation (from English to the artificial language), where the translation relation must meet certain requirements. I shall suggest five such requirements; others could be added.
There has long been a history of studies investigating how people (“ordinary people”) perform on tasks that involve deductive reasoning. The upshot of these studies is that people characteristically perform some deductive tasks well but others badly. For instance, studies show that people will typically perform MP (“modus ponens”: from ‘If A then B’ and ‘A’, infer ‘B’) and bi-conditional MP (from ‘A if and only if B’ and ‘A’, infer ‘B’) correctly when invited to make the inference, and additionally can discover of their own accord when such inferences are appropriate. On the other hand, the same studies show that people typically perform MT (“modus tollens”: from ‘If A then B’ and ‘not-B’, infer ‘not-A’) and bi-conditional MT badly. They not only do not recognize when it is appropriate to draw such inferences, but also they will balk at doing so even when they are told that they can. Related to these shortcomings seems to be the inability of people to understand that contrapositives are equivalent (that ‘If A then B’ is equivalent to ‘If not-B then not-A’). [Studies of people’s deductive inference-drawing abilities have a long history, involving many studies in the early 20th century concerning Aristotelian syllogisms. But the current spate of studies draws much of its impetus from Wason (Wason, 1968; see also Wason & Johnson-Laird, 1972).] The general conclusion seems to be that there are specific areas where “ordinary people” do not perform very logically. This conclusion will not come as a surprise to teachers of elementary logic, who have long thought that the majority of “ordinary people” are inherently illogical and need deep and forceful schooling in order to overcome this flaw.
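The classical facts the studies presuppose, that a conditional is equivalent to its contrapositive and that MT is a valid inference, can be verified by brute-force truth tables. This sketch is purely illustrative and is not part of the reported studies:

```python
from itertools import product

def implies(p, q):
    """Classical material conditional."""
    return (not p) or q

rows = list(product([True, False], repeat=2))   # all assignments to A, B

# Contraposition: 'If A then B' and 'If not-B then not-A' agree everywhere.
contraposition_ok = all(
    implies(a, b) == implies(not b, not a) for a, b in rows
)

# MT is valid: in every row where 'If A then B' and 'not-B' both hold,
# 'not-A' holds as well.
mt_valid = all(
    (not a) for a, b in rows if implies(a, b) and not b
)

print(contraposition_ok, mt_valid)   # True True
```

So the experimental finding is not that MT fails logically, but that people fail to deploy a perfectly valid rule.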
Although resolution-based inference is perhaps the industry standard in automated theorem proving, there have always been systems that employed a different format. For example, the Logic Theorist of 1957 produced proofs by using an axiomatic system, and the proofs it generated would be considered legitimate axiomatic proofs; Wang’s systems of the late 1950s employed a Gentzen-sequent proof strategy; Beth’s systems written about the same time employed his semantic tableaux method; and Prawitz’s systems of again about the same time are often said to employ a natural deduction format. [See Newell et al. (1957), Beth (1958), Wang (1960), and Prawitz et al. (1960).] Like sequent proof systems and tableaux proof systems, natural deduction systems retain...
Roger Gibson has achieved as much as anyone else, indeed, more, in presenting and defending Quine’s philosophy. It is no surprise that the great man W.V. Quine himself said that in reading Gibson he gained a welcome perspective on his own work. His twin books The Philosophy of W.V. Quine and Enlightened Empiricism have no rivals. We are all indebted to Roger. The essay that follows is intended not only to honor him but also to continue a theme that runs throughout his (and Quine’s) work, namely, the seamless division between science and philosophy. The techniques we invoke are consonant with the naturalistic conception of language that is one of the central themes of Professor Gibson’s writings, namely, that language is “a social art to be studied empirically” (Enlightened Empiricism, p. 64).
Different researchers use “the philosophy of automated theorem proving” to cover different concepts, indeed, different levels of concepts. Some would count such issues as how to efficiently index databases as part of the philosophy of automated theorem proving. Others wonder about whether formulas should be represented as strings or as trees or as lists, and call this part of the philosophy of automated theorem proving. Yet others concern themselves with what kind of search should be embodied in any automated theorem prover, or to what degree any automated theorem prover should resemble Prolog. Still others debate whether natural deduction or semantic tableaux or resolution is “better”, and call this a part of the philosophy of automated theorem proving. Some people wonder whether automated theorem proving should be “human oriented” or “machine oriented”—sometimes arguing about whether the internal proof methods should be “human-like” or not, sometimes arguing about whether the generated proof should be output in a form understandable by people, and sometimes arguing about the desirability of human intervention in the process of constructing a proof. There are also those who ask such questions as whether we should even be concerned with completeness or with soundness of a system, or whether perhaps we should instead look at very efficient (but incomplete) subsystems or look at methods of generating models which might nevertheless validate invalid arguments. And all of these have been viewed as issues in the philosophy of automated theorem proving. Here, I would like to step back from such implementation issues and ask: “What do we really think we are doing when we write an automated theorem prover?”
My reflections are perhaps idiosyncratic, but I do think that they put the different researchers’ efforts into a broader perspective, and give us some kind of handle on which directions we ourselves might wish to pursue when constructing (or extending) an automated theorem proving system. A logic is defined to be (i) a vocabulary and formation rules (which tell us what strings of symbols are well-formed formulas in the logic), and (ii) a definition of ‘proof’ in that system (which tells us the conditions under which an arrangement of formulas in the system constitutes a proof). Historically speaking, definitions of ‘proof’ have been given in various different manners: the most common have been Hilbert-style (axiomatic), Gentzen-style (consecution, or sequent), Fitch-style (natural deduction), and Beth-style (tableaux)...
Vagueness: an expression is vague if and only if it is possible that it give rise to a “borderline case.” A borderline case is a situation in which the application of a particular expression to a (name of a) particular object does not generate an expression with a definite TRUTH-VALUE. That is, the piece of language in question neither applies to the object nor fails to apply. Although such a formulation leaves it open what the pieces of language might be (whole sentences, individual words, NAMES or SINGULAR TERMS, PREDICATES or GENERAL TERMS), most discussions have focussed on vague general terms and have considered other types of terms to be non-vague. (Exceptions to this have called attention to the possibility of vague objects, thereby making the designation relation for singular terms be vague.) The formulation also leaves open the possible causes for the expression not to have a definite truth value. If this indeterminacy is due to there being insufficient information available to determine applicability or non-applicability of the term (that is, we’re convinced the term either does or doesn’t apply, but we just don’t have enough information to determine which), then this is sometimes called “epistemic vagueness.” It is somewhat misleading to call this vagueness, for unlike true vagueness, this epistemic vagueness disappears if more information is brought into the situation. (‘There are 1.89 × 10^6 stars in the sky’ is epistemically vague but is not vague in the generally accepted sense of the term.) ‘Vagueness’ may also be used to characterize non-linguistic items such as CONCEPTS, MEMORIES, and OBJECTS, as well as such semi-linguistic items as STATEMENTS and PROPOSITIONS. Many of the issues involved in discussing the topic of vagueness impinge upon other philosophical topics, such as the existence of TRUTH-VALUE GAPS (declarative sentences which are neither TRUE nor FALSE) and the plausibility of MANY-VALUED LOGIC.
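One standard way to model the TRUTH-VALUE GAPS mentioned at the end of the entry is with strong Kleene three-valued connectives. The following sketch is an illustration of that standard apparatus, not part of the entry itself; None stands in for ‘neither true nor false’:

```python
def k_not(a):
    """Strong Kleene negation: a gap stays a gap."""
    return None if a is None else (not a)

def k_and(a, b):
    """Strong Kleene conjunction: False dominates, gaps propagate."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

# A borderline predication yields a gap, and the gap propagates -- unless
# one conjunct already settles the value:
borderline = None
print(k_and(True, borderline))    # None  -- no definite truth value
print(k_and(False, borderline))   # False -- falsity is already settled
```

The asymmetry in the last two lines is the characteristic mark of the strong Kleene scheme: a gap infects a compound only when the classical values present do not already decide it.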
Semantic Compositionality is the principle that the meaning of a syntactically complex expression is a function only of the meanings of its syntactic components together with their syntactic mode of combination. Various scholars have argued against this Principle, including the present author in earlier works. One of these arguments was the Argument from Ambiguity, which will be of concern in the present article. Opposed to the considerations raised against the Principle are certain formal arguments that purport to show that there is no empirical content to the Principle. One of these formal arguments makes use of the notion of free algebras. The present article investigates the relationship between these two types of argument.
We report empirical results on factors that influence how people reason with default rules of the form “Most x’s have property P”, in scenarios that specify information about exceptions to these rules and in scenarios that specify default-rule inheritance. These factors include (a) whether the individual, to which the default rule might apply, is similar to a known exception, when that similarity may explain why the exception did not follow the default, and (b) whether the problem involves classes of naturally occurring kinds or classes of artifacts. We consider how these findings might be integrated into formal approaches to default reasoning and also consider the relation of this sort of qualitative default reasoning to statistical reasoning.
Larry Horn is justifiably famous for his work on the semantics of the English conjunction ‘or’: both its relationship to the formal logic truth functions ∨ and ⊕ (“inclusive” and “exclusive” disjunction respectively) and its relationship to the ways people employ ‘or’ in natural discourse. These interests have been present since his 1972 dissertation, where he argued for a “scalar implicature-based” account of many of these relationships as opposed to a presuppositional account. They have surfaced in his “Greek Grice” paper (Horn 1973) as well as in his Negation book (Horn 1989) and his recent “Border Wars” paper (Horn, forthcoming), where he defends the position that there are two types of implicatures at work here: Q-implicatures based on Grice’s first maxim of Quantity (“Say Enough”) and R-implicatures based on Grice’s second maxim of Quantity (“Don’t Say Too Much”). In a nutshell, the idea is that when a speaker employs a sentence with a disjunction, the meaning (that is, the semantic value) of the ‘or’ is inclusive. With careful and judicious use of the Q- and R-implicatures, Larry’s theory allows the hearer (often) to infer that the speaker wanted to convey an exclusive disjunction.
Many different kinds of items have been called vague, and so-called for a variety of different reasons. Traditional wisdom distinguishes three views of why one might apply the epithet "vague" to an item; these views are distinguished by what they claim the vagueness is due to. One type of vagueness, The Good, locates vagueness in language, or in some representational system -- for example, it might say that certain predicates have a range of applicability. On one side of the range are those cases to which the predicate clearly applies and on the other side of the range are those cases where the negation of the predicate clearly applies. But there is no sharp cutoff place along the range where the one range turns into the other. Most examples of The Good are those terms which describe some continuum -- such as ‘bald’ describes a continuum of the ratio of hairs per cm² on the head. But not all work this way. Alston (1968) points to terms like ‘religion’ invoking a number of criteria the joint applicability of which ensures that the activity in question is a religion and the failure of all to apply ensures that it is not a religion. But when only some middling number of the criteria are fulfilled, the term ‘religion’ neither applies nor fails to apply. Some accounts of "family resemblance" and "open texture" might also fit this picture. Such a view is often called a "representational account of vagueness". Another conception of vagueness, The Bad, locates vagueness as a property of discourses, of memories, and of certain philosophers and their papers, etc. This sort of vagueness occurs when the information available does not allow one to tell, for example, that a certain sentence is true, but also does not allow one to determine that it is false. It occurs when the information available does not allow one to claim that a predicate applies to a name, but also does not allow one to claim that the negation of the predicate applies to that name.
A certain direction in cognitive science has been to try to “ground” public language statements in some species of mental representation. A central tenet of this trend is that communication – that is, public language – succeeds (when it does) because the elements of this public language are in some way correlated with mental items of both the speaker and the audience so that the mental state evoked in the audience by the use of that piece of public language is the one that the speaker wanted to evoke. The “meaning”, therefore, of an utterance – and of the parts of an utterance, such as individual sentences and their parts, the individual words, etc. – is, in this view, some mental item. Successful communication requires that there be widespread agreement amongst speakers of the same public language as to the mental entities that are correlated with any particular public words. Such a view of meaning is variously called “internalist” or “cognitive” or “subjectivist” or “solipsistic” or (sometimes) “representationalist” (these terms having, however, further connotations which set them apart from one another in other ways), and can be found in a wide variety of writers who do not agree on many other things. It is opposed to views that take the meaning of an utterance to be an item of “reality,” however defined. In different writers this latter view is called “externalist” or “objectivist” or “realist” or (sometimes) “representationalist,” always with the idea that there is something other (or at least, more) than the mental state of speakers and hearers that determines meaning. The literature is rife with arguments between internalists vs. externalists, subjectivists vs. objectivists, cognitivists vs. realists, on such topics as “truth” and “synonymy” and “twin earth” and “arthritis” (to mention only a few).
In this paper we report preliminary results on how people revise or update a previously held set of beliefs. When intelligent agents learn new things which conflict with their current belief set, they must revise their belief set. When the new information does not conflict, they merely must update their belief set. Various AI theories have been proposed to achieve these processes. There are two general dimensions along which these theories differ: whether they are syntactic-based or model-based, and what constitutes a minimal change of beliefs. This study investigates how people update and revise semantically equivalent but syntactically distinct belief sets, both in symbolic-logic problems and in quasi-real-world problems. Results indicate that syntactic form affects belief revision choices. In addition, for the symbolic problems, subjects update and revise semantically-equivalent belief sets identically, whereas for the quasi-real-world problems they both update and revise differently. Further, contrary to earlier studies, subjects are sometimes reluctant to accept that a sentence changes from false to true, but they are willing to accept that it would change from true to false.
In an earlier paper entitled ‘Synonymous Logics’, the authors attempted to show that there are two modal logics such that each is exactly translatable into the other, but they are not translationally equivalent. Unfortunately, there is an error in the proof of this result. The present paper provides a new example of two such logics, and a proof of the result claimed in the earlier paper.