Haack, S. Is truth flat or bumpy?--Chihara, C. S. Ramsey's theory of types.--Loar, B. Ramsey's theory of belief and truth.--Skorupski, J. Ramsey on Belief.--Hookway, C. Inference, partial belief, and psychological laws.--Skyrms, B. Higher order degrees of belief.--Mellor, D. H. Consciousness and degrees of belief.--Blackburn, S. Opinions and chances.--Grandy, R. E. Ramsey, reliability, and knowledge.--Cohen, L. J. The problem of natural laws.--Giedymin, J. Hamilton's method in geometrical optics and Ramsey's view of theories.
The so-called Ramsey test is a semantic recipe for determining whether a conditional proposition is acceptable in a given state of belief. Informally, it can be formulated as follows: (RT) Accept a proposition of the form "if A, then C" in a state of belief K, if and only if the minimal change of K needed to accept A also requires accepting C. In Gärdenfors (1986) it was shown that the Ramsey test is, in the context of some other weak conditions, incompatible on pain of triviality with the following principle, which was there called the preservation criterion: (P) If a proposition B is accepted in a given state of belief K and the proposition A is consistent with the beliefs in K, then B is still accepted in the minimal change of K needed to accept A. (RT) provides a necessary and sufficient criterion for when a 'positive' conditional should be included in a belief state, but it says nothing about when the negation of a conditional sentence should be accepted. A very natural candidate for this purpose is the following negative Ramsey test: (NRT) Accept the negation of a proposition of the form "if A, then C" in a consistent state of belief K, if and only if the minimal change of K needed to accept A does not require accepting C. This note shows that (NRT) leads to triviality results even in the absence of additional conditions like (P).
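To fix intuitions, here is a minimal executable sketch of (RT) and (NRT) in Python. Everything in it is an illustrative assumption rather than the note's own construction: belief states are modelled as sets of possible worlds, and "minimal change" is the crudest two-level revision operator (keep the A-worlds already held possible if there are any, otherwise fall back to all A-worlds).

```python
from itertools import product

# Worlds are truth-value assignments to the atoms 'a' and 'c'.
WORLDS = [dict(zip(("a", "c"), bits)) for bits in product((True, False), repeat=2)]

def worlds_where(prop):
    """All worlds (by index) satisfying a propositional function."""
    return {i for i, w in enumerate(WORLDS) if prop(w)}

def revise(K, A):
    """Toy minimal change: retain the A-worlds in K if any exist,
    otherwise move to the full set of A-worlds."""
    overlap = K & A
    return overlap if overlap else A

def rt_accepts(K, A, C):
    """(RT): 'if A, then C' is acceptable in K iff revising K by A yields C."""
    return revise(K, A) <= C

def nrt_accepts_negation(K, A, C):
    """(NRT): the negation of 'if A, then C' is acceptable in a consistent K
    iff revising K by A does not yield C."""
    return bool(K) and not rt_accepts(K, A, C)

A = worlds_where(lambda w: w["a"])
C = worlds_where(lambda w: w["c"])
K = worlds_where(lambda w: not w["a"])   # believe not-A, agnostic about C

print(rt_accepts(K, A, C))             # False: K revised by A leaves C open
print(nrt_accepts_negation(K, A, C))   # True: so (NRT) accepts the negation
```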
In this paper, I analyse a finding by Riggs and colleagues that there is a close connection between people’s ability to reason with counterfactual conditionals and their capacity to attribute false beliefs to others. The result indicates that both processes may be governed by one cognitive mechanism, though false belief attribution seems to be slightly more cognitively demanding. Given that the common denominator for both processes is suggested to be a form of the Ramsey test, I investigate whether Stalnaker’s semantic theory of conditionals, which was inspired by the Ramsey test, may provide the basis for a psychologically plausible model of belief ascription. The analysis I propose will shed some new light on the developmental discrepancy between counterfactual reasoning and false belief ascription.
Epistemic conditionals have often been thought to satisfy the Ramsey test (RT): If A, then B is acceptable in a belief state G if and only if B should be accepted upon revising G with A. But as Peter Gärdenfors has shown, RT conflicts with the intuitively plausible condition of Preservation on belief revision. We investigate what happens if (a) RT is retained while Preservation is weakened, or (b) vice versa. We also generalize Gärdenfors' approach by treating belief revision as a relation rather than as a function. In our semantic approach, the same relation is used to model belief revision and to give truth-conditions for conditionals. The approach validates a weak version of the Ramsey Test (WRR) — essentially, a restriction of RT to maximally consistent belief states.
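Schematically, with ∗ for belief revision and K for a belief state (a reconstruction in standard notation; the paper's own formulations, with relational rather than functional revision, are more general):

```latex
\begin{align*}
\text{(RT)}\quad  & (A > B) \in K \iff B \in K \ast A\\
\text{(P)}\quad   & \neg A \notin K \implies K \subseteq K \ast A\\
\text{(WRR)}\quad & (A > B) \in K \iff B \in K \ast A
                    \quad \text{for maximally consistent } K
\end{align*}
```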
We present a semantic analysis of the Ramsey test, pointing out its deep underlying flaw: the tension between the “static” nature of AGM revision (which was originally tailored to the revision of purely ontic beliefs, and can be applied to higher-order beliefs only if given a “backwards-looking” interpretation) and the fact that, semantically speaking, any Ramsey conditional must be a modal operator (more precisely, a dynamic-epistemic one). Thus, a belief about a Ramsey conditional is in fact a higher-order belief, so the AGM revision postulates are not applicable to it, except in their “backwards-looking” interpretation. But that interpretation is consistent only with a restricted (weak) version of Ramsey’s test (inapplicable to already revised theories). The way out of the conundrum is twofold: either accept only the weak Ramsey test, or replace the AGM revision operator ∗ by a truly “dynamic” revision operator ⊗, which will not satisfy the AGM axioms but will do something better: it will “keep up with reality”, correctly describing revision with higher-order beliefs.
This article provides a discussion of the principle of transmission of evidential support across entailment from the perspective of belief revision theory in the AGM tradition. After outlining and briefly defending a small number of basic principles of belief change, including belief contraction analogues of the Darwiche-Pearl postulates for iterated revision, a proposal is made concerning the connection between evidential beliefs and belief change policies in rational agents. This proposal is found to be sufficient to establish the truth of a much-discussed intuition regarding transmission failure.
According to the Ramsey Test hypothesis, the conditional claim that if A then B is credible just in case it is credible that B, on the supposition that A. If true, the hypothesis helps explain the way in which we evaluate and use ordinary language conditionals. But impossibility results for the Ramsey Test hypothesis in its various forms suggest that it is untenable. In this paper, I argue that these results do not in fact have this implication, on the grounds that similar results can be proved without recourse to the Ramsey Test hypothesis. Instead they show that a number of well-entrenched principles of rational belief and belief revision do not apply to conditionals.
Proponents of the projection strategy take an epistemic rule for the evaluation of English conditionals, the Ramsey test, as a clue to the truth-conditional semantics of conditionals. They also construe English conditionals as stronger than the material conditional. Given plausible assumptions, however, the Ramsey test induces the semantics of the material conditional. The alleged link between the Ramsey test and truth conditions stronger than those of the material conditional can be saved by construing conditionals as ternary, rather than binary, propositional functions with a hidden contextual parameter. But such a ternary construal raises problems of its own.
In contemporary discussions of the Ramsey Test for conditionals, it is commonly held that (i) supposing the antecedent of a conditional is adopting a potential state of full belief, and (ii) Modus Ponens is a valid rule of inference. I argue on the basis of Thomason Conditionals (such as ‘If Sally is deceiving, I do not believe it’) and Moore’s Paradox that both claims are wrong. I then develop a double-indexed Update Semantics for conditionals which takes these two results into account while doing justice to the key intuitions underlying the Ramsey Test. The semantics is extended to cover some further phenomena, including the recent observation that epistemic modal operators give rise to something very like, but also very unlike, Moore’s Paradox.
Chalmers and Hájek argue that on an epistemic reading of Ramsey’s test for the rational acceptability of conditionals, it is faulty. They claim that applying the test to each of a certain pair of conditionals requires one to think that one is omniscient or infallible, unless one forms irrational Moore-paradoxical beliefs. I show that this claim is false. The epistemic Ramsey test is indeed faulty. Applying it requires one to think of anyone as all-believing and, if one is rational, to think of anyone as infallible-if-rational. But this is not because of Moore-paradoxical beliefs. Rather, it is because applying the test requires a certain supposition about conscious belief. It is important to understand the nature of this supposition.
Peter Gärdenfors proved a theorem purporting to show that it is impossible to adjoin to the AGM postulates for belief revision a principle of monotonicity for revisions. The principle of monotonicity in question is implied by the Ramsey test for conditionals. So Gärdenfors…
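The monotonicity principle at issue, in standard notation (a reconstruction of the usual statement, with ∗ for revision): if one belief set is included in another, so are their revisions. It follows from the Ramsey test in one line, since B ∈ K ∗ A gives (A > B) ∈ K ⊆ K′, and hence B ∈ K′ ∗ A.

```latex
\text{(Monotonicity)}\qquad K \subseteq K' \;\Longrightarrow\; K \ast A \subseteq K' \ast A
```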
There is an important class of conditionals whose assertibility conditions are not given by the Ramsey test but by an inductive extension of that test. Such inductive Ramsey conditionals fail to satisfy some of the core properties of plain conditionals. Associated principles of nonmonotonic inference should not be assumed to hold generally if interpretations in terms of induction or appeals to total evidence are not to be ruled out.
I analyse the relationship between the Ramsey Test (RT) for the acceptance of indicative conditionals and the so-called problem of decision-instability. In particular, I argue that the situations which allegedly bring about this problem are troublesome just in case the relevant conditionals are evaluated by non-suppositional versions of the test, e.g. causal or evidential ones. In contrast, a suppositional RT, by highlighting the metacognitive nature of the evaluation of indicative conditionals, allows an agent to run a simulation of such evaluation without yet committing her to the acceptance of such conditionals. I conclude that a suppositional interpretation of RT is superior to its non-suppositional counterparts, and conclude by briefly showing that a suppositional RT is compatible with a deliberational decision theory.
In ‘A Defence of the Ramsey Test’, Richard Bradley makes a case for not concluding from the famous impossibility results regarding the Ramsey Test — the thesis that a rational agent believes a conditional if he would believe the consequent upon learning the antecedent — that the thesis is false. He lays the blame instead on one of the other premisses in these results, namely the Preservation condition. In this paper, we explore how this condition can be weakened by strengthening the notion of consistency which appears in it. After considering the effects of such weakenings for Bradley's argument, we propose a refinement of the Preservation condition which does not fall prey to Bradley's argument nor to Gärdenfors's impossibility theorem. We briefly compare it to Bradley's suggested restriction of Preservation.
The purpose of this note is to formulate some weaker versions of the so-called Ramsey test that do not entail the following unacceptable consequence: if A and C are already accepted in K, then "if A, then C" is also accepted in K; and to show that these versions still lead to the same triviality result when combined with a preservation criterion.
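For reference, the unacceptable consequence drops out of the full Ramsey test in the presence of a preservation principle in one line (a schematic reconstruction, with ∗ for revision): if A and C are in K and A is consistent with K, preservation keeps C in K ∗ A, and the Ramsey test then puts the conditional into K.

```latex
A, C \in K \;\wedge\; \neg A \notin K
  \;\overset{\text{(P)}}{\Longrightarrow}\; C \in K \ast A
  \;\overset{\text{(RT)}}{\Longrightarrow}\; (A > C) \in K
```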
We introduce two new belief revision axioms: partial monotonicity and consequence correctness. We show that partial monotonicity is consistent with but independent of the full set of axioms for a Gärdenfors belief revision system. In contrast to the Gärdenfors inconsistency results for certain monotonicity principles, we use partial monotonicity to inform a consistent formalization of the Ramsey test within a belief revision system extended by a conditional operator. We take this to be a technical dissolution of the well-known Gärdenfors dilemma. In addition, we present the consequential correctness axiom as a new measure of minimal revision in terms of the deductive core of a proposition whose support we wish to excise. We survey several syntactic and semantic belief revision systems and evaluate them according to both the Gärdenfors axioms and our new axioms. Furthermore, our algebraic characterization of semantic revision systems provides a useful technical device for analysis and comparison, which we illustrate with several new proofs.
According to the Ramsey Test, conditionals reflect changes of beliefs: α > β is accepted in a belief state iff β is accepted in the minimal revision of it that is necessary to accommodate α. Since Gärdenfors’s seminal paper of 1986, a series of impossibility theorems (“triviality theorems”) has seemed to show that the Ramsey test is not a viable analysis of conditionals if it is combined with AGM-type belief revision models. I argue that it is possible to endorse the Ramsey test for conditionals while staying true to the spirit of AGM. A main focus lies on AGM’s condition of Preservation, according to which the original belief set should be fully retained after a revision by information that is consistent with it. I use concrete representations of belief states and (iterated) revisions of belief states as semantic models for (nested) conditionals. Among the four most natural qualitative models for iterated belief change, two are identified that indeed allow us to combine the Ramsey test with Preservation in the language containing only flat conditionals of the form α > β. It is shown, however, that Preservation for this simple language enforces a violation of Preservation for nested conditionals of the form α > (β > γ). In such languages, no two belief sets are ordered by strict subset inclusion. I argue that it has been wrong right from the start to expect that Preservation holds in languages containing nested conditionals.
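For nested conditionals the Ramsey test iterates the revision, which is where Preservation for the flat language bites back (a schematic reconstruction, with ∗ for revision):

```latex
\alpha > (\beta > \gamma) \in K
  \;\iff\; (\beta > \gamma) \in K \ast \alpha
  \;\iff\; \gamma \in (K \ast \alpha) \ast \beta
```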
The Ramsey Test for the rational acceptance of conditionals still incites much of the interest in conditional reasoning. For instance, the test has been considered as a good starting point for several formal semantics for conditionals. Furthermore, its ramifications have important implications for several disciplines, from logic and artificial intelligence to decision theory and psychology. This volume presents a small but fine sample of the state of the art of such a multifarious area of research.
Peter Gärdenfors has proved (Philosophical Review, 1986) that the Ramsey rule and the methodologically conservative Preservation principle are incompatible given innocuous-looking background assumptions about belief revision. Gärdenfors gives up the Ramsey rule; I argue for preserving the Ramsey rule and interpret Gärdenfors's theorem as showing that no rational belief-reviser can avoid reasoning nonmonotonically. I argue against the Preservation principle and show that counterexamples to it always involve nonmonotonic reasoning. I then construct a new formal model of belief revision that does accommodate nonmonotonic reasoning.
This book by one of the world's foremost philosophers in the fields of epistemology and logic offers an account of suppositional reasoning relevant to practical deliberation, explanation, prediction and hypothesis testing. Suppositions made 'for the sake of argument' sometimes conflict with our beliefs, and when they do, some beliefs are rejected and others retained. Thanks to such belief contravention, adding content to a supposition can undermine conclusions reached without it. Subversion can also arise because suppositional reasoning is ampliative. These two types of nonmonotonic logic are the focus of this book. A detailed comparison of nonmonotonicity appropriate to both belief contravening and ampliative suppositional reasoning reveals important differences that have been overlooked.
Statistical significance testing has its problems, but so do the alternatives that are proposed; and the alternatives may be both more cumbersome and less informative. Significance tests remain legitimate aspects of the rhetoric of scientific persuasion.
This paper starts by criticising some older accounts of conditionals based on the so-called `Ramsey Test', and ends by proposing their replacement, in part with a material account, in part with a probabilistic account using epsilon terms. The combined replacement is in fact closer to Ramsey's ideas. But there is also a resemblance between the latter and a more recent account of conditionals, which relates some of them to causality. The comparison provides a basis for assessment of the proposed replacement.
I formulate a counterfactual version of the notorious ‘Ramsey Test’. Even in a weak form, this makes counterfactuals subject to the very argument that Lewis used to persuade the majority of the philosophical community that indicative conditionals were in hot water. I outline two reactions: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
One of the main applications of the logic of theory change is to the epistemic analysis of conditionals via the so-called Ramsey test. In the first part of the present note this test is studied in the limiting case where the theory being revised is inconsistent, and it is shown that this case manifests an intrinsic incompatibility between the Ramsey test and the AGM postulate of success. The paper then analyses the use of the postulate of success, and a weakening of it, in generating axioms of conditional logic via the test, and it is shown that for certain purposes both success and weak success are quite superfluous. This suggests the proposal of abandoning both success and weak success entirely, thus permitting retention of the postulate of preservation discarded by Gärdenfors.
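One way to see the tension at the inconsistent limit (a schematic reconstruction; the note's own argument may differ in detail): the inconsistent theory K⊥ contains every conditional, so the Ramsey test forces the revision of K⊥ to remain inconsistent whatever the input, against the requirement that revising by a consistent sentence yield a consistent theory.

```latex
(A > C) \in K_\perp \text{ for every } C
  \;\overset{\text{(RT)}}{\Longrightarrow}\;
  C \in K_\perp \ast A \text{ for every } C
  \;\Longrightarrow\;
  K_\perp \ast A = K_\perp
```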
The paper examines the nature of the behavioral evidence underlying attributions of intelligence in the case of human beings, and how this might be extended to other kinds of cognitive system, in the spirit of the original Turing Test (TT). I consider Harnad's Total Turing Test (TTT), which involves successful performance of both linguistic and robotic behavior, and which is often thought to incorporate the very same range of empirical data that is available in the human case. However, I argue that the TTT is still too weak, because it only tests the capabilities of particular tokens within a preexisting context of intelligent behavior. What is needed is a test of the cognitive type, as manifested through a number of exemplary tokens, in order to confirm that the cognitive type is able to produce the context of intelligent behavior presupposed by tests such as the TT and TTT.
Jerry Fodor has defended the claim that psychological theories should appeal to narrow rather than wide intentional properties. One of his arguments relies upon the cross contexts test, a test that purports to determine whether two events have the same causally relevant properties. Critics have charged that this test is too weak, since it counts certain genuinely explanatory relational properties in science as being causally irrelevant. Further, it has been claimed, the test is insensitive to the fact that special scientific laws allow for exceptions which do not undermine those laws. This paper refines the cross contexts test to meet these objections while still allowing it to play its role in Fodor's argument.
Frank Ramsey (1931) wrote: If two people are arguing 'if p will q?' and both are in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q. We can say that they are fixing their degrees of belief in q given p. Let us take the first sentence the way it is often taken, as proposing the following test for the acceptability of an indicative conditional: ‘If p then q’ is acceptable to a subject S iff, were S to accept p and consider q, S would accept q. Now consider an indicative conditional of the form (1) If p, then I believe p. Suppose that you accept p and consider ‘I believe p’. To accept p while rejecting ‘I believe p’ is tantamount to accepting the Moore-paradoxical sentence ‘p and I do not believe p’, and so is irrational. To accept p while suspending judgment about ‘I believe p’ is irrational for similar reasons. So rationality requires that if you accept p and consider ‘I believe p’, you accept ‘I believe p’.
Because most chemical reactions, by definition, cannot avoid the breaking of bonds, weakly bonded species exist fleetingly in almost every chemical change. Historically, chemical quantum mechanics was aimed at explaining the nature of strong bonds. The theory involved a number of approximations to the full solution of the Schrödinger equation. The study of non-Kekulé molecules provides an opportunity to test whether modern quantum chemical computations are competent to deal with the nature of molecules with very weak bonds.
Frank Ramsey writes: If two people are arguing ‘if p will q?’ and both are in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q. We can say that they are fixing their degrees of belief in q given p. (1931) Chalmers and Hájek write: Let us take the first sentence [of Ramsey] the way it is often taken, as proposing the following test for the acceptability of an indicative conditional: ‘if p then q’ is acceptable to a subject S iff, were S to accept p and consider q, S would accept q.
In this paper I examine the counterfactual test for legislative intention as used in Riggs v. Palmer. The distinction between the speaker's meaning approach and the constructive interpretation approach to statutory interpretation, as made by Dworkin in Law's Empire, is explained. I argue that Dworkin underestimates the potential of the counterfactual test in making the speaker's meaning approach more plausible. I also argue that Dworkin's reasons for rejecting the counterfactual test, as proposed in Law's Empire, are either too weak or unsound. A deeper reason for rejecting the counterfactual test as a method for the speaker's meaning approach is proposed in this paper. The difference between the counterfactual test and other tests for legislative intention which seem also to make use of counterfactual conditionals is explained.
In 1950, Alan Turing proposed his eponymous test based on indistinguishability of verbal behavior as a replacement for the question "Can machines think?" Since then, two mutually contradictory but well-founded attitudes towards the Turing Test have arisen in the philosophical literature. On the one hand is the attitude that has become philosophical conventional wisdom, viz., that the Turing Test is hopelessly flawed as a sufficient condition for intelligence, while on the other hand is the overwhelming sense that were a machine to pass a real live full-fledged Turing Test, it would be a sign of nothing but our orneriness to deny it the attribution of intelligence. The arguments against the sufficiency of the Turing Test for determining intelligence rely on showing that some extra conditions are logically necessary for intelligence beyond the behavioral properties exhibited by an agent under a Turing Test. Therefore, it cannot follow logically from passing a Turing Test that the agent is intelligent. I argue that these extra conditions can be revealed by the Turing Test, so long as we allow a very slight weakening of the criterion from one of logical proof to one of statistical proof under weak realizability assumptions. The argument depends on the notion of interactive proof developed in theoretical computer science, along with some simple physical facts that constrain the information capacity of agents. Crucially, the weakening is so slight as to make no conceivable difference from a practical standpoint. Thus, the Gordian knot between the two opposing views of the sufficiency of the Turing Test can be cut.
Weak links, in the form of inadequacies in both reasoning and supporting evidence, exist at several critical steps in the derivation of an hierarchical concept of evolution from punctuated equilibria. Punctuation itself is predicated on a distorted reading of phyletic change as phyletic gradualism, and of allopatric speciation as the instantaneous formation of unchanging typological taxa. The concept of punctuation is further confounded by the indiscriminate employment of the same term to denote both a causal explanation for evolutionary change and an outcome of substantiated evolutionary processes. Even when the intended usage for the term is specified, each denotation of punctuation entails respective drawbacks. As a causal explanation, punctuation clearly belongs to the class of quantum theories with all their attendant impedimenta, including special saltatory non-adaptive mechanisms of evolutionary change. Redefinition of punctuation as a pattern of morphologic change reduces it to one possible outcome of known microevolutionary processes, thus obviating any need for an hierarchical explanation of macroevolution. While vacillation between usages has preserved the term in the literature, the end result of this obfuscation has been a circle of faulty reasoning in which the pattern of punctuation is invoked as its own proof. Widespread confusion concerning what constitutes an adequate test of punctuation is directly attributable to imprecision in both the original and revised formulations of the concept. The argument for species-level selection is based on the typological and philosophically flawed premise of species as individuals, and further requires the hypothesis of heritable emergent properties, for which empirical evidence is lacking.
In this paper it is argued that three of the most prominent theories of conditional acceptance face very serious problems. David Lewis' concept of imaging, the Ramsey test, and Jonathan Bennett's recent hybrid view all face vicious regresses, employ unanalyzed components, or depend upon an implausibly strong version of doxastic voluntarism.
Chow sets his version of statistical significance testing in an impoverished context of “theory corroboration” that explicitly excludes well-posed theories admitting of strong support by precise empirical evidence. He demonstrates no scientific usefulness for the problematic procedure he recommends instead. The important role played by significance testing in today's behavioral and brain sciences is wholly inconsistent with the rhetoric he would enforce.
Turing's test has been much misunderstood. Recently unpublished material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal.
The Turing Test (TT) is claimed by many to be a way to test for the presence, in computers, of such "deep" phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT (or at least restricted versions of this test) have devolved into shallow symbol manipulation designed to, by hook or by crook, trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have minds. And the problem is fundamental: the structure of the TT is such as to cultivate tricksters. A better test is one that insists on a certain restrictive epistemic relation between an artificial agent (or system) A, its output o, and the human architect H of A – a relation which, roughly speaking, obtains when H cannot account for how A produced o. We call this test the "Lovelace Test" in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds.
The standard interpretation of the imitation game is defended over the rival gender interpretation, though it is noted that Turing himself proposed several variations of his imitation game. The Turing test is then justified as an inductive test, not as an operational definition as commonly suggested. Turing's famous prediction about his test being passed at the 70% level is disconfirmed by the results of the Loebner 2000 contest and the absence of any serious Turing test competitors from AI on the horizon. But reports of the death of the Turing test and AI are premature. AI continues to flourish and the test continues to play an important philosophical role in AI. Intelligence attribution, methodological, and visionary arguments are given in defense of a continuing role for the Turing test. With regard to Turing's predictions, one is disconfirmed, one is confirmed, but another is still outstanding.
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index TTT to a particular animal and its synthetic correlate. We might therefore have TTTrat, TTTcat, TTTdog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax, a study of relations among symbols (including meanings), and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: the base case of semantic understanding, understanding a domain in terms of itself, is syntactic understanding. (3) An internal (or narrow), first-person point of view makes an external (or wide), third-person point of view otiose for purposes of understanding cognition.
The Turing Test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This paper is a review of the past 50 years of the Turing Test. Philosophical debates, practical developments and repercussions in related disciplines are all covered. We discuss Turing's ideas in detail and present the important comments that have been made on them. Within this context, behaviorism, consciousness, the 'other minds' problem, and similar topics in philosophy of mind are discussed. We also cover the sociological and psychological aspects of the Turing Test. Finally, we look at the current situation and analyze programs that have been developed with the aim of passing the Turing Test. We conclude that the Turing Test has been, and will continue to be, an influential and controversial topic.
Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
The main factor of intelligence is defined as the ability to comprehend, formalising this ability with the help of new constructs based on descriptional complexity. The result is a comprehension test, or C-test, which is exclusively defined in computational terms. Due to its absolute and non-anthropomorphic character, it is equally applicable to both humans and non-humans. Moreover, it correlates with classical psychometric tests, thus establishing the first firm connection between information-theoretical notions and traditional IQ tests. The Turing Test is compared with the C-test and the combination of the two is questioned. In consequence, the idea of using the Turing Test as a practical test of intelligence should be surpassed, and substituted by computational and factorial tests of different cognitive abilities, a much more useful approach for artificial intelligence progress and for many other intriguing questions that present themselves beyond the Turing Test.
The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather a considerable period of training in situ would be required. The argument that since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), that consequently a TM that could pass the TT could be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed.
The test Turing proposed for machine intelligence is usually understood to be a test of whether a computer can fool a human into thinking that the computer is a human. This standard interpretation is rejected in favor of a test based on the Imitation Game introduced by Turing at the beginning of "Computing Machinery and Intelligence".
Stuart M. Shieber’s name is well known to computational linguists for his research and to computer scientists more generally for his debate on the Loebner Turing Test competition, which appeared a decade earlier in Communications of the ACM (Shieber 1994a, 1994b; Loebner 1994). With this collection, I expect it to become equally well known to philosophers.
We investigate a variant of the variable convention proposed at Tractatus 5.53ff for the purpose of eliminating the identity sign from logical notation. The variant in question is what Hintikka has called the strongly exclusive interpretation of the variables, and turns out to be what Ramsey initially (and erroneously) took to be Wittgenstein's intended method. We provide a tableau calculus for this identity-free logic, together with soundness and completeness proofs, as well as a proof of mutual interpretability with first-order logic with identity.
The age at which children acquire the concept of belief is a subject of debate. Many scholars claim that children master beliefs when they are able to pass the false belief test, around their fourth year of life. However, recent experiments show that children implicitly attribute beliefs even earlier. The dispute does not only concern the empirical issue of discovering children’s early cognitive abilities. It also depends on the kind of capacities that we associate with the very concept. I claim that concept possession must be understood in terms of the gradual development of the abilities that underlie the concept in question. I also claim that the last step in possessing the concept of belief requires children to understand how beliefs and desires are used in everyday explanations of people’s actions. Thus, I suggest that understanding folk psychology as an explanatory theory is what children lack when they fail the false belief test.
The problem of computational complexity of semantics for some natural language constructions, considered in [M. Mostowski, D. Wojtyniak 2004], motivates an interest in the complexity of Ramsey quantifiers in finite models. In general a sentence with a Ramsey quantifier R, of the form Rx,y H(x,y), is interpreted as ∃A (A is big relative to the universe ∧ A² ⊆ H). In the paper cited, the problem of the complexity of the Hintikka sentence is reduced to the problem of the computational complexity of the Ramsey quantifier for which the phrase “A is big relative to the universe” is interpreted as containing at least one representative of each equivalence class, for some given equivalence relation. In this work we consider quantifiers R_f, for which “A is big relative to the universe” means “card(A) > f(n), where n is the size of the universe”. Following [Blass, Gurevich 1986] we call R mighty if Rx,y H(x,y) defines an NP-complete class of finite models. Similarly, we say that R_f is NP-hard if the corresponding class is NP-hard. We prove the following theorems.
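For intuition, the quantifier can be checked by brute force on a small finite model (an illustrative sketch; the model and the function f are assumptions, and the exhaustive search mirrors the NP character of the problem rather than avoiding it):

```python
from itertools import combinations

def ramsey_quantifier_holds(universe, H, f):
    """Brute-force test of R_f x,y H(x,y) on a finite model:
    is there a set A with card(A) > f(n) and A x A ⊆ H?
    Exponential in |universe|, matching the quantifier's NP flavour."""
    n = len(universe)
    # Since A x A ⊆ H is preserved by shrinking A, it suffices to
    # search for a witness of size exactly f(n) + 1.
    size = f(n) + 1
    for A in combinations(universe, size):
        if all((x, y) in H for x in A for y in A):
            return True
    return False

# Toy model: H is the (reflexive) edge relation containing the clique {0, 1, 2}.
universe = range(5)
H = {(x, y) for x in (0, 1, 2) for y in (0, 1, 2)}
print(ramsey_quantifier_holds(universe, H, lambda n: n // 2))  # True: |{0,1,2}| > 5 // 2
```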
This paper argues that the Turing test is based on a fixed and de-contextualized view of communicative competence. According to this view, a machine that passes the test will be able to communicate effectively in a variety of other situations. But the de-contextualized view ignores the relationship between language and social context, or, to put it another way, the extent to which speakers respond dynamically to variations in discourse function, formality level, social distance/solidarity among participants, and participants' relative degrees of power and status (Holmes, 1992). In the case of the Loebner Contest, a present day version of the Turing test, the social context of interaction can be interpreted in conflicting ways. For example, Loebner discourse is defined 1) as a friendly, casual conversation between two strangers of equal power, and 2) as a one-way transaction in which judges control the conversational floor in an attempt to expose contestants that are not human. This conflict in discourse function is irrelevant so long as the goal of the contest is to ensure that only thinking, human entities pass the test. But if the function of Loebner discourse is to encourage the production of software that can pass for human on the level of conversational ability, then the contest designers need to resolve this ambiguity in discourse function, and thus also come to terms with the kind of competence they are trying to measure.
Our paper presents a novel theory of weak crossover effects, based entirely on quantifier scope preferences and their consequences for variable binding. The structural notion of 'crossover' plays no role. We develop a theory of scope preferences which ascribes a central role to the AGR-P System.
The Turing Test is a verbal-behavioral operational criterion of artificial intelligence. If a machine can participate in question-and-answer conversation adequately enough to deceive an intelligent interlocutor, then it has intelligent information processing abilities. Robert M. French has argued that recent discoveries in cognitive science about subcognitive processes involving associational primings prove that the Turing Test cannot provide a satisfactory criterion of machine intelligence, that Turing's prediction concerning the feasibility of building machines to play the imitation game successfully is false, and that the test should be rejected as ethnocentric and incapable of measuring kinds and degrees of nonhuman intelligence. But French's criticism is flawed, because it requires Turing's sufficient conditional criterion of intelligence to serve as a necessary condition. Turing's Test is defended against these objections, and French's claim that the test ought to be rejected because machines cannot pass it is deemed unscientific, resting on the empirically unwarranted assumption that intelligent machines are possible.
This paper presents the results of an experiment on mutual versus common knowledge of advice in a two-player weak-link game with random matching. Our experimental subjects play in pairs for thirteen rounds. After a brief learning phase common to all treatments, we vary the knowledge levels associated with external advice given in the form of a suggestion to pick the strategy supporting the payoff-dominant equilibrium. Our results are somewhat surprising and can be summarized as follows: in all our treatments both the choice of the efficiency-inducing action and the percentage of efficient equilibrium play are higher with respect to the control treatment, revealing that even a condition as weak as mutual knowledge of level 1 is sufficient to significantly increase the salience of the efficient equilibrium with respect to the absence of advice. Furthermore, and contrary to our hypothesis, mutual knowledge of level 2 induces, under suitable conditions, successful coordination more frequently than common knowledge.
In a recent paper McCain (2012) argues that weak predictivism creates an important challenge for external world scepticism. McCain regards weak predictivism as uncontroversial and assumes the thesis within his argument. There is a sense in which the predictivist literature supports his conviction that weak predictivism is uncontroversial. This absence of controversy, however, is a product of significant plasticity within the thesis, which renders McCain’s argument worryingly vague. For McCain’s argument to work he either needs a stronger version of weak predictivism than has been defended within the literature, or must commit to a more precise formulation of the thesis and argue that weak predictivism, so understood, creates the challenge to scepticism that he hopes to achieve. The difficulty with the former is that weak predictivism is not uncontroversial in the respect that McCain’s argument would require. I consider the prospects of saving McCain’s argument by committing to a particular version of weak predictivism, but find them unpromising for several reasons.
This paper distinguishes different analytical approaches to the evaluation of the sustainability of large-scale land acquisitions—at both the conceptual and methodological levels. First, at the conceptual level, evaluation of the sustainability of land acquisitions depends on what definition of sustainability is adopted—strong or weak sustainability. Second, a lack of comparative empirical methods in many studies has limited the identification of causal factors affecting sustainability. An empirical investigation into the sustainability of land acquisitions in Tanzania that employs these existing concepts in a methodologically rigorous manner offers an opportunity to more clearly address ethical questions surrounding international land acquisitions. My findings indicate that it should not be assumed that sustainability necessarily hinges on issues of strong sustainability, particularly that all village lands represent critical natural capital. As a result of its unique history of Ujamaa villagization, Tanzanian villages often have ownership of significant tracts of unused land, which mitigates the risk of violating conditions of strong sustainability. Issues of weak sustainability appear to be more important to villagers—particularly the degree of man-made capital benefits derived from projects. While compensation rates for lands acquired were low and the process lacked transparency, low compensation rates are not sufficient grounds for rejecting land acquisitions as unsustainable. When projects deliver significant man-made capital benefits, low compensation rates were not a politically salient issue amongst villagers. Finally, results suggest that some prioritization of man-made capital over biodiversity can be ethically defensible when the decision-making process goes through legitimate village government bodies and benefits reach poor villagers.
In this chapter I review empirical studies directly testing the hypotheses of my 1973 paper "The Strength of Weak Ties" (hereafter "SWT") and work that elaborates those hypotheses theoretically or uses them to suggest new empirical research not discussed in my original formulation. Along the way, I will reconsider various aspects of the theoretical argument, attempt to plug some holes, and broaden its base.
Oaksford and Chater (1994) proposed to analyse the Wason selection task as an inductive instead of a deductive task. Applying Bayesian statistics, they concluded that the cards that participants tend to select are those with the highest expected information gain. Therefore, their choices seem rational from the perspective of optimal data selection. We tested a central prediction from the theory in three experiments: card selection frequencies should be sensitive to the subjective probability of occurrence for individual cards. In Experiment 1, expected frequencies of the p- and the q-card were manipulated independently by concepts referring to large vs. small sets. Although the manipulation had an effect on card selection frequencies, there was only a weak correlation between the predicted and the observed patterns. In the second experiment, relative frequencies of individual cards were manipulated more directly by explicit frequency information. In addition, participants estimated probabilities for the four logical cases and of the conditional statement itself. The experimental manipulations strongly affected the probability estimates, but were completely unrelated to card selections. This result was replicated in a third experiment. We conclude that our data provide little support for optimal data selection theory.
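A simplified reconstruction of the expected information gain computation may help (this is a sketch of the Bayesian idea only, not Oaksford and Chater's exact model, which includes further parameters such as an exceptions rate; the hypothesis names MD/MI and the parameter values are assumptions): each card's value is the expected reduction in entropy over the dependence/independence hypotheses from revealing its hidden side.

```python
from math import log2

def joint(model, a, b):
    """Joint distribution over (p, q) truth-value cells under each hypothesis.
    MD: 'if p then q' holds without exceptions; MI: p and q are independent.
    a = P(p), b = P(q); the dependence model requires b >= a."""
    if model == "MD":
        return {(True, True): a, (True, False): 0.0,
                (False, True): b - a, (False, False): 1.0 - b}
    return {(True, True): a * b, (True, False): a * (1 - b),
            (False, True): (1 - a) * b, (False, False): (1 - a) * (1 - b)}

def entropy(dist):
    return -sum(x * log2(x) for x in dist.values() if x > 0)

def expected_gain(card, a, b):
    """Expected reduction in uncertainty about {MD, MI} from turning one card.
    card = (axis, value): ('p', True) is the p-card, ('q', False) the not-q card."""
    prior = {"MD": 0.5, "MI": 0.5}
    axis, value = card
    h0, gain = entropy(prior), 0.0
    for hidden in (True, False):  # possible values on the card's hidden side
        cell = (value, hidden) if axis == "p" else (hidden, value)
        like = {}
        for m in prior:
            j = joint(m, a, b)
            face = sum(v for (pp, qq), v in j.items()
                       if (pp if axis == "p" else qq) == value)
            like[m] = j[cell] / face if face > 0 else 0.0
        norm = sum(prior[m] * like[m] for m in prior)  # P(hidden | visible face)
        if norm > 0:
            post = {m: prior[m] * like[m] / norm for m in prior}
            gain += norm * (h0 - entropy(post))
    return gain

# Under the rarity assumption (p and q uncommon), the expected-gain ordering
# reproduces the predicted selection ranking: p > q > not-q > not-p.
a, b = 0.1, 0.2
for card in (("p", True), ("q", True), ("q", False), ("p", False)):
    print(card, round(expected_gain(card, a, b), 3))
```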
Newell (1980; 1990) proposed that cognitive theories be developed in an effort to satisfy multiple criteria and to avoid theoretical myopia. He provided two overlapping lists of 13 criteria that the human cognitive architecture would have to satisfy in order to be functional. We have distilled these into 12 criteria: flexible behavior, real-time performance, adaptive behavior, vast knowledge base, dynamic behavior, knowledge integration, natural language, learning, development, evolution, and brain realization. There would be greater theoretical progress if we evaluated theories by a broad set of criteria such as these and attended to the weaknesses such evaluations revealed. To illustrate how theories can be evaluated we apply these criteria to both classical connectionism (McClelland & Rumelhart 1986; Rumelhart & McClelland 1986b) and the ACT-R theory (Anderson & Lebiere 1998). The strengths of classical connectionism on this test derive from its intense effort in addressing empirical phenomena in such domains as language and cognitive development. Its weaknesses derive from its failure to acknowledge a symbolic level to thought. In contrast, ACT-R includes both symbolic and subsymbolic components. The strengths of the ACT-R theory derive from its tight integration of the symbolic component with the subsymbolic component. Its weaknesses largely derive from its failure, as yet, to adequately engage in intensive analyses of issues related to certain criteria on Newell's list. Key Words: cognitive architecture; connectionism; hybrid systems; language; learning; symbolic systems.
This paper tests a novel implication of the original version of prospect theory (Kahneman and Tversky, 1979): that choices may systematically violate transitivity. Some have interpreted this implication as a weakness, viewing it as an anomaly generated by the 'editing phase' of prospect theory which can be rendered redundant by an appropriate re-specification of the preference function. Although there is some existing evidence that transitivity fails descriptively, the particular form of non-transitivity implied by prospect theory is quite distinctive and hence presents an ideal opportunity to expose that theory to test. An experiment is reported which reveals strong evidence of the predicted intransitivity. It is argued that the existence of this new form of non-transitive behaviour presents a fresh theoretical challenge to those seeking descriptively adequate theories of choice behaviour, and a particular challenge to those who seek explanations within the conventional economic paradigm of utility maximisation.
This article assays Paul Ramsey's influential attempt to conceive possible nuclear deterrents within the confines of just war tenets. I look first at Ramsey's construction of just war ideas according to a protection paradigm, one in which agape is deontically defined. I also note a subtle sub-theme in Ramsey's construction of just war ideas, what I call a preservation motif. I then assess Ramsey's discussion of nuclear deterrence, closing with a critique of his treatments of intention and proportionality. I conclude by arguing that Ramsey's argument falters, and that the weaknesses of his argument can be rendered intelligible by noting how the full implications of the protection paradigm are attenuated by the preservation motif.
While fairness is often mentioned as a determinant of ultimatum bargaining behavior, few data sets are available that can test theories that incorporate fairness considerations. This paper tests the reciprocal kindness theory in Rabin (1993, 'Incorporating fairness into game theory and economics', The American Economic Review 83: 1281-1302) as an application to the one-period ultimatum bargaining game. We report on data from 100 ultimatum games that vary the financial stakes of the game from 1 to 15. Responder behavior is strongly in support of the kindness theory and proposer behavior weakly in support of it. Offer percentages and past offers influence behavior the most, whereas the size of the pie has a marginally significant effect on offer percentages. The data is more in support of reciprocal kindness than alternative theories of equal-split or learning behavior, although the data also weakly support a minimum percentage threshold hypothesis. As a whole, our results together with existing studies suggest that, for smaller stakes games, fairness considerations dominate monetary considerations. This has implications for more complicated naturally occurring bargaining environments in which the financial stakes can vary widely.
Recent discussions of experimental tests of the Sum Rule have been carried out in the context of the special circumstances attending the Cross-Ramsey experiment. A more general analysis of possible tests is presented. A technical mistake of Fine and Glymour concerned with a misunderstanding of the physics of the Cross-Ramsey experiment is explained, and a detailed analysis of a thought experiment based on the Einstein-Podolsky-Rosen wave function is given. It is concluded, in agreement with Fine, that scattering experiments do not test the Sum Rule as a principle which supplements standard quantum mechanics.
We investigate the research programme of dynamic doxastic logic (DDL) and analyze its underlying methodology. The Ramsey test for conditionals is used to characterize the logical and philosophical differences between two paradigmatic systems, AGM and KGM, which we develop and compare axiomatically and semantically. The importance of Gärdenfors’s impossibility result on the Ramsey test is highlighted by a comparison with Arrow’s impossibility result on social choice. We end with an outlook on the prospects and the future of DDL.
I defend a formulation of the Ramsey Test with a condition for accepting negations of conditionals. It is implicit in the assumptions of the triviality theorems of Gärdenfors, Harper, and Lewis; and it allows for a unified proof of those theorems, from weaker assumptions about belief revision. This leads to a proof of McGee’s thesis that iterated conditionals do not obey modus ponens.
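Schematically, such a formulation pairs the usual positive clause with a clause for negations (a reconstruction in standard notation, with ∗ for belief revision):

```latex
\begin{align*}
(A > C) \in K      &\iff C \in K \ast A\\
\neg(A > C) \in K  &\iff C \notin K \ast A \quad (K \text{ consistent})
\end{align*}
```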
The paper presents a non-monotonic inference relation on a language containing a conditional that satisfies the Ramsey Test. The logic is a weakening of classical logic and preserves many of the ‘paradoxes of implication’ associated with the material implication. It is argued, however, that once one makes the proper distinction between supposing that something is the case and accepting that it is the case, these ‘paradoxes’ cease to be counterintuitive. A representation theorem is provided where conditionals are given a non-bivalent semantics and epistemic states are represented via preferential models.
The two main psychological theories of the ordinary conditional were designed to account for inferences made from assumptions, but few premises in everyday life can be simply assumed true. Useful premises usually have a probability that is less than certainty. But what is the probability of the ordinary conditional and how is it determined? We argue that people use a two-stage Ramsey test that we specify to make probability judgements about indicative conditionals in natural language, and we describe experiments that support this conclusion. Our account can explain why most people give the conditional probability as the probability of the conditional, but also why some give the conjunctive probability. We discuss how our psychological work is related to the analysis of ordinary indicative conditionals in philosophical logic.
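The two response types can be made concrete with a toy joint distribution (purely illustrative numbers; the paper's experiments used richer materials):

```python
# Hypothetical joint distribution over antecedent A and consequent C,
# chosen only for illustration.
P = {("A", "C"): 0.3, ("A", "~C"): 0.1, ("~A", "C"): 0.2, ("~A", "~C"): 0.4}

p_A = P[("A", "C")] + P[("A", "~C")]   # suppose A: restrict attention to the A-cases
p_conditional = P[("A", "C")] / p_A    # P(C | A) = 0.75, the majority (Ramsey test) answer
p_conjunctive = P[("A", "C")]          # P(A and C) = 0.3, the minority, conjunctive answer

print(p_conditional, p_conjunctive)
```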
I formulate a counterfactual version of the notorious ‘Ramsey Test’. Whereas the Ramsey Test for indicative conditionals links credence in indicatives to conditional credences, the counterfactual version links credence in counterfactuals to expected conditional chance. I outline two forms: a Ramsey Identity, on which the probability of the conditional should be identical to the corresponding conditional probability/expectation of chance; and a Ramsey Bound, on which credence in the conditional should never exceed the latter. Even in the weaker, bound, form, the counterfactual Ramsey Test makes counterfactuals subject to the very argument that Lewis used to argue against the indicative version of the Ramsey Test. I compare the assumptions needed to run each, pointing to assumptions about the time-evolution of chances that can replace the appeal to Bayesian assumptions about credence update in motivating the assumptions of the argument. I finish by outlining two reactions to the discussion: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
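In symbols (a reconstruction, writing □→ for the counterfactual, Ch for objective chance, and E_P for expectation under the credence function P):

```latex
\begin{align*}
\text{(Ramsey Identity)} \quad & P(A \,\Box\!\!\rightarrow C) = \mathbb{E}_P\big[\mathrm{Ch}(C \mid A)\big]\\
\text{(Ramsey Bound)} \quad    & P(A \,\Box\!\!\rightarrow C) \le \mathbb{E}_P\big[\mathrm{Ch}(C \mid A)\big]
\end{align*}
```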
Two major themes in the literature on indicative conditionals are (1) that the content of indicative conditionals typically depends on what is known [1]; (2) that conditionals are intimately related to conditional probabilities [2]. In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true [3]. This aspect has recently been brought to the fore, and defended by Tim Williamson, who uses it to characterize alethic necessity by exploiting such equivalences as: □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are (in some relevant sense) epistemically impossible being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature [4]. This allows one to characterize an epistemic modal □ by the equivalence □A ⇔ (¬A → A). For simplicity, in what follows we write □A as KA and think of it as expressing that subject S knows that A [5]. The connection to probability has received much attention. Stalnaker (1970) suggested, as a way of articulating the ‘Ramsey Test’, the following very general schema for indicative conditionals relative to some probability function P: P(A→B) = P(B|A).

[1] For example, Nolan (2003); Weatherson (2001); Gillies (2007). [2] For example, Stalnaker (1970); McGee (1989); Adams (1975). [3] Lewis (1973). See Nolan (1997) for criticism. [4] ‘Epistemically impossible’ here means incompatible with what is known (where ‘what is known’ is to be cashed out in some relevant sense). [5] This idea was suggested to me in conversation by John Hawthorne. I do not know of it being explored in print. The plausibility of this characterization will depend on the exact sense of ‘epistemically possible’ in play: if it is compatibility with what a single subject knows, then KA can be read ‘the relevant subject knows that A’. If it is more delicately formulated, we might be able to read K as the epistemic modal ‘must’.
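Stalnaker's schema is the target of Lewis's well-known triviality argument; a compact reconstruction (assuming the schema holds for P and for P conditioned on B and on ¬B, and that the relevant probabilities are nonzero):

```latex
\begin{align*}
P(A \to B) &= P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B)\\
           &= P(B \mid A \wedge B)\,P(B) + P(B \mid A \wedge \neg B)\,P(\neg B)\\
           &= 1 \cdot P(B) + 0 \cdot P(\neg B) \;=\; P(B),
\end{align*}
```

so that P(B|A) = P(B) whenever the schema holds for P and its conditionalizations: antecedent and consequent would always be probabilistically independent.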
Conditionals that contain a modality in the consequent give rise to a particular semantic phenomenon whereby the antecedent of the conditional blocks possibilities when interpreting the modality in the consequent. This explains the puzzling logical behaviour of constructions like "If you don't buy a lottery ticket, you can't win", "If you eat that poison, it is unlikely that you will survive the day" and "If you kill Harry, you ought to kill him gently". In this paper it is argued that a semantic version of the Ramsey Test provides a key in the analysis of such constructions. The logic for this semantics is axiomatized and some examples are studied, among them a well-known puzzle for contrary-to-duty obligations.
In this paper, I discuss conditionals as illocutionary speech acts whose interpretation depends upon the whole of the social context in which they are uttered and whose purpose is to affect the opinions and actions of others. I argue for a suppositional approach to conditional statements based in what philosophers call the Ramsey test, developing the psychological theory that conditionals elicit a process of hypothetical thinking in their listeners. By reference to the experimental psychological literature on conditionals, I show that in general conditionals, even ones that are basic or abstract in nature, are not treated as truth-functional or material by ordinary people. Drawing upon the suppositional nature of conditionals and the influence of pragmatic implicature, I discuss uses of conditionals as advice, inducement, persuasion and dissuasion, arguing that speakers use conditionals to try to influence the beliefs and actions of their listeners by shaping their hypothetical thought about possibilities.
This book looks at the ways in which conditionals, an integral part of philosophy and logic, can be of practical use in computer programming. It analyzes the different types of conditionals, including their applications and potential problems. Other topics include defeasible logics, the Ramsey test, and a unified view of consequence relations and belief revision. Its implications will be of interest to researchers in logic, philosophy, and computer science, particularly artificial intelligence.