The new theory of reference has gained wide popularity. However, a number of noted philosophers have also attempted to reply to the critical arguments of Kripke and others, aiming to vindicate the description theory of reference. Such responses are often based on ingenious novel kinds of descriptions, such as rigidified descriptions, causal descriptions, and metalinguistic descriptions. This prolonged debate raises the question of whether the different parties really share any common understanding of what the central question of the philosophical theory of reference is: that is, what is the main question to which descriptivism and the causal-historical theory have presented competing answers? One aim of the paper is to clarify this issue. The most influential objections to the new theory of reference are critically reviewed. Special attention is also paid to certain important later advances in the new theory of reference, due to Devitt and others.
The issues of downward causation (and mental causation in particular) and the exclusion problem are discussed by taking into account some recent advances in the philosophy of science. The problem is viewed from the perspective of the new interventionist theory of causation developed by Woodward. It is argued that from this viewpoint, a higher-level (e.g., mental) state can sometimes truly be causally relevant, and moreover, that the underlying physical state which realizes it may fail to be such.
The rather unrestrained use of second-order logic in the neo-logicist program is critically examined. It is argued in some detail that it brings with it genuine set-theoretical existence assumptions and that the mathematical power that Hume’s Principle seems to provide, in the derivation of Frege’s Theorem, comes largely from the ‘logic’ assumed rather than from Hume’s Principle. It is shown that Hume’s Principle is in reality not stronger than the very weak Robinson Arithmetic Q. Consequently, only a few rudimentary facts of arithmetic are logically derivable from Hume’s Principle. And that hardly counts as a vindication of logicism.
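For reference, Hume’s Principle discussed above is standardly formulated in second-order logic as follows (the symbol # for ‘the number of’ is one common notational convention):

```latex
% Hume's Principle: the number of the Fs is identical with the number
% of the Gs if and only if the Fs and the Gs are equinumerous.
\forall F \,\forall G \,\bigl(\, \#F = \#G \;\leftrightarrow\; F \approx G \,\bigr)
```

where F ≈ G abbreviates the second-order statement that there is a one-to-one correspondence between the Fs and the Gs.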
The new externalist picture of natural kind terms due to Kripke, Putnam, and others has become quite popular in philosophy. Many philosophers of science have remained sceptical. Häggqvist and Wikforss have recently criticised this view severely. They contend it depends essentially on a micro-essentialist view of natural kinds that is widely rejected among philosophers of science, and that a scientifically reasonable metaphysics entails the resurrection of some version of descriptivism. It is argued in this paper that the situation is not quite as dark for the new theory of reference as many critics suggest. There are several distinct questions here which should not be conflated and ought to be dealt with one by one. Descriptivism remains arguably problematic.
In the theory of meaning, it is common to contrast truth-conditional theories of meaning with theories which identify the meaning of an expression with its use. One rather exact version of the somewhat vague use-theoretic picture is the view that the standard rules of inference determine the meanings of logical constants. Often this idea also functions as a paradigm for more general use-theoretic approaches to meaning. In particular, the idea plays a key role in the anti-realist program of Dummett and his followers. In the theory of truth, a key distinction is now made between substantial theories and minimalist or deflationist views. According to the former, truth is a genuine substantial property of the truth-bearers, whereas according to the latter, truth does not have any deeper essence, but all that can be said about truth is contained in T-sentences (sentences having the form: ‘P’ is true if and only if P). There is no necessary analytic connection between the above theories of meaning and truth, but they nevertheless have some connections. Realists often favour some kind of truth-conditional theory of meaning and a substantial theory of truth (in particular, the correspondence theory). Minimalists and deflationists on truth characteristically advocate the use theory of meaning (e.g. Horwich). Semantical anti-realism (e.g. Dummett, Prawitz) forms an interesting middle case: its starting point is the use theory of meaning, but it usually accepts a substantial view on truth, namely that truth is to be equated with verifiability or warranted assertability. When truth is so understood, it is also possible to accept the idea that meaning is closely related to truth-conditions, and hence the conflict between use theories and truth-conditional theories in a sense disappears in this view.
Intuitionism’s disagreement with classical logic is standardly based on its specific understanding of truth. But different intuitionists have actually explicated the notion of truth in fundamentally different ways. These are considered systematically and separately, and evaluated critically. It is argued that each account faces difficult problems. They all either have implausible consequences or are viciously circular.
Jaegwon Kim’s views on mental causation and the exclusion argument are evaluated systematically. Particular attention is paid to different theories of causation. It is argued that the exclusion argument and its premises do not cohere well with any systematic view of causation.
Gödel's two incompleteness theorems are among the most important results in modern logic, and have deep implications for various issues. They concern the limits of provability in formal axiomatic theories. The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F. According to the second incompleteness theorem, such a formal system cannot prove that the system itself is consistent (assuming it is indeed consistent). These results have had a great impact on the philosophy of mathematics and logic. There have been attempts to apply the results also in other areas of philosophy such as the philosophy of mind, but these attempted applications are more controversial. The present entry surveys the two incompleteness theorems and various issues surrounding them.
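Schematically, the two theorems stated above can be summarized as follows (in the Gödel–Rosser form, on which consistency alone suffices; the notation Con(F) for the arithmetized consistency statement is a standard convention):

```latex
% First incompleteness theorem: for a suitable sentence G_F of the
% language of F, F proves neither G_F nor its negation.
\text{If } F \text{ is consistent, then } F \nvdash G_F
  \text{ and } F \nvdash \neg G_F .

% Second incompleteness theorem: F cannot prove its own consistency.
\text{If } F \text{ is consistent, then } F \nvdash \mathrm{Con}(F).
```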
The minimalist view of truth endorsed by Paul Horwich denies that truth has any underlying nature. According to minimalism, the truth predicate ‘exists solely for the sake of a certain logical need’; ‘the function of the truth predicate is to enable the explicit formulation of schematic generalizations’. Horwich proposes that all there really is to truth follows from the equivalence schema: the proposition that p is true iff p, or, using Horwich’s notation, ⟨p⟩ is true ↔ p. The (unproblematic) instances of the schema form ‘the minimal theory of truth’. Horwich claims that all the facts involving truth can be explained on the basis of the minimal theory. However, it has been pointed out, e.g. by Gupta (1993), that the minimal theory is too weak to entail any general facts about truth, e.g. the fact that…
An argument, different from the Newman objection, against the view that the cognitive content of a theory is exhausted by its Ramsey sentence is reviewed. The crux of the argument is that Ramsification may ruin inductive systematization between theory and observation. The argument also has some implications concerning the issue of underdetermination.
The problem of mental causation is discussed by taking into account some recent developments in the philosophy of science. The problem is viewed from the perspective of the new interventionist theory of causation developed by Woodward. The import of the idea that causal claims involve contrastive classes in mental causation is also discussed. It is argued that mental causation is much less a problem than it has appeared to be.
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
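For concreteness, the result under discussion can be stated as follows, where K(n) denotes the Kolmogorov complexity of n (the precise conditions on the theory T are glossed over here):

```latex
% Chaitin's incompleteness theorem: for any consistent formalized
% theory T of arithmetic (satisfying the usual conditions), there is
% a finite constant c such that T proves no statement of the form
% K(n) > c for any particular number n.
\exists c \;\forall n :\; T \nvdash \text{``}\, K(n) > c \,\text{''}
```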
Chaitin’s incompleteness result related to random reals and the halting probability has been advertised as the ultimate and the strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations.
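For reference, Chaitin’s halting probability mentioned here is standardly defined, relative to a fixed prefix-free universal machine U, as a sum over all programs p that halt on U:

```latex
% Omega: the probability that a randomly generated program halts;
% |p| is the length of the program p in bits, and U(p) "down-arrow"
% means that U halts on input p.
\Omega \;=\; \sum_{p \,:\, U(p)\downarrow} 2^{-|p|}
```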
Hilary Putnam's famous arguments criticizing Tarski's theory of truth are evaluated. It is argued that they do not succeed in undermining Tarski's approach. One of the arguments is based on the problematic idea of a false instance of the T-schema. The other ignores various issues essential to Tarski's setting, such as the language-relativity of the truth definition.
In the early 20th century, scepticism was common among philosophers about the very meaningfulness of the notion of truth – and of the related notions of denotation, definition etc. (i.e., what Tarski called semantical concepts). Awareness was growing of the various logical paradoxes and anomalies arising from these concepts. In addition, more philosophical reasons were being given for this aversion. The atmosphere changed dramatically with Alfred Tarski’s path-breaking contribution. What Tarski did was to show that, assuming that the syntax of the object language is specified exactly enough, and that the metatheory has a certain amount of set-theoretic power, one can explicitly define truth for the object language. And what can be explicitly defined can be eliminated. It follows that the defined concept cannot give rise to any inconsistencies (that is, paradoxes). This gave new respectability to the concept of truth and related notions. Nevertheless, philosophers’ judgements on the nature and philosophical relevance of Tarski’s work have varied. It is my aim here to review and evaluate some threads in this debate.
For example, Cheryl Misak in her book-length examination of verificationism writes that ‘the holist [such as Quine] need not reject verificationism, if it is suitably formulated. Indeed, Quine often describes himself as a verificationist’.[iii] Misak concludes that Quine ‘can be described as a verificationist who thinks that the unit of meaning is large’;[iv] and when comparing Dummett and Quine, Misak states that ‘both can be, and in fact are, verificationists’.[v]
Philosophers’ judgements on the philosophical value of Tarski’s contributions to the theory of truth have varied. For example, Karl Popper, Rudolf Carnap, and Donald Davidson have, in their different ways, celebrated Tarski’s achievements and been enthusiastic about their philosophical relevance. Hilary Putnam, on the other hand, pronounces that “[a]s a philosophical account of truth, Tarski’s theory fails as badly as it is possible for an account to fail.” Putnam has several alleged reasons for his dissatisfaction, but one of them, the one I call the modal objection (cf. Raatikainen 2003), has been particularly influential. In fact, very similar objections have been presented over and over again in the literature. Already in 1954, Arthur Pap had criticized Tarski’s account with a similar argument (Pap 1954). Moreover, both Scott Soames (1984) and John Etchemendy (1988) use, with explicit reference to Putnam, similar modal arguments in relation to Tarski. Richard Heck (1997), too, shows some sympathy for such considerations. Simon Blackburn (1984, Ch. 8) has put forward a related argument against Tarski. Recently, Marian David has criticized Tarski’s truth definition with an analogous argument as well (David 2004, pp. 389–390). This line of argument is thus apparently one of the most influential critiques of Tarski. It is certainly worthy of serious attention. Nevertheless, I shall argue that, on closer scrutiny, it does not present as acute a problem for the Tarskian approach to truth as many philosophers think. But I also believe that it is important to understand clearly why this is so. Moreover, I think that a careful consideration of the issue illuminates certain important but somewhat neglected aspects of the Tarskian approach.
The essay examines the views expressed in von Wright's Explanation and Understanding (1971) on human action and historical events from the perspective of the recent philosophy of science. Connecting causal explanation tightly to covering laws, as von Wright does, is found to be problematic, and his logical connection argument invalid. On the other hand, von Wright's sketched theory of causation, which is based on the concept of manipulation, proves to be on the right track in light of current knowledge. From this perspective, however, there is no obstacle to explaining human action causally. This is illustrated with two examples from historical research. Finally, von Wright's idea that a complete account of the historical past is never achieved, because the past can always be re-evaluated, is briefly discussed.
Quine’s thesis of the indeterminacy of translation has puzzled the philosophical community for several decades. It is unquestionably among the best known and most disputed theses in contemporary philosophy. Quine’s classical argument for the indeterminacy thesis, in his seminal work Word and Object, has even been described by Putnam as “what may well be the most fascinating and the most discussed philosophical argument since Kant’s Transcendental Deduction of the Categories” (Putnam, 1975a: p. 159).
Here the relationship between understanding and knowledge of meaning is discussed from two different perspectives: that of Dummettian semantic anti-realism and that of the semantic externalism of Putnam and others. The question addressed is whether or not the truth of semantic externalism would undermine a central premise in one of Dummett's key arguments for anti-realism, insofar as Dummett's premise involves an assumption about the transparency of meaning and semantic externalism is often taken to undermine such transparency. Several notions of transparency and conveyability of meaning are distinguished and it is argued that, though the Dummettian argument for anti-realism presupposes only a weak connection between knowledge of meaning and understanding, even this much is not trivially true in light of semantic externalism, and that semantic externalism, if true, would thus represent a reason for rejecting the crucial assumption on which the Dummettian argument depends.
It has sometimes been suggested that the so-called new theory of reference (NTR) would provide an alternative picture of meaning and reference which avoids the unwelcome consequences of the meaning-variance thesis and incommensurability. However, numerous philosophers of science have been quite critical towards the idea and NTR in general. It is argued that many of them have an over-simplified and, in part, mistaken understanding of what NTR amounts to. It is submitted that NTR, when correctly understood, can be an important ingredient in the realist toolkit for defending the rationality of science.
The prospects and limitations of defining truth in a finite model in the same language whose truth one is considering are thoroughly examined. It is shown that in contradistinction to Tarski's undefinability theorem for arithmetic, it is in a definite sense possible in this case to define truth in the very language whose truth is in question.
After sketching the main lines of Hilbert's program, certain well-known and influential interpretations of the program are critically evaluated, and an alternative interpretation is presented. Finally, some recent developments in logic related to Hilbert's program are reviewed.
The issue of whether science is, or can be, value-free has been debated for more than a century. The idea of value-free science is of course as old as science itself, and so are the arguments against this idea. Plato defended it…
As a consequence of the undeniable success of the natural sciences and the insurmountable internal problems of Cartesian mind–body dualism, various materialist or physicalist doctrines have dominated the philosophy of mind in recent decades. According to these doctrines, mental phenomena are either identical with certain physical phenomena or at least wholly dependent on them – that is, determined by physical matters.
The key argument of Hilary Putnam for conceptual relativism, his so-called mereological argument, is critically evaluated. It is argued that Putnam’s reasoning is based on a confusion between languages and theories.
Lucas and Redhead () announce that they will defend the views of Redhead () against the argument by Panu Raatikainen (). They certainly re-state the main claims of Redhead (), but they do not give any real arguments in their favour, and do not provide anything that would save Redhead’s argument from the serious problems pointed out in (Raatikainen ). Instead, Lucas and Redhead make a number of seemingly irrelevant points, perhaps indicating a failure to understand the logico-mathematical points at issue.
“Complexity” is a catchword of certain extremely popular and rapidly developing new interdisciplinary sciences, often called accordingly the sciences of complexity. It is often closely associated with another notably popular but ambiguous word, “information”. Information, in turn, may justly be called the central new concept of twentieth-century science as a whole. Moreover, the notion of information is regularly coupled with a key concept of thermodynamics, viz. entropy. And as if this were not enough, it is quite usual to add one more, at present extraordinarily popular notion, namely chaos, and wed it with the above-mentioned concepts.
The article examines, thoroughly and critically, David Chalmers's influential argument against materialism concerning phenomenal consciousness. More than one weak link is identified in the argument.
In his recent article, Christopher Gauker (2001) has presented a thought-provoking argument against deflationist theories of truth. More exactly, he attacks what he calls ‘T-schema deflationism’, that is, the claim that a theory of truth can simply take the form of certain instances of the T-schema.
"Explanation and Understanding" (1971) by Georg Henrik von Wright is a modern classic in analytic hermeneutics, and in the philosophy of the social sciences and humanities in general. In this work, von Wright argues against naturalism, or methodological monism, i.e. the idea that both the natural sciences and the social sciences follow broadly the same general scientific approach and aim to achieve causal explanations. Against this view, von Wright contends that the social sciences are qualitatively different from the natural sciences: according to his view, the natural sciences aim at causal explanations, whereas the purpose of the social sciences is to understand their subjects. In support of this conviction, von Wright also puts forward a version of the so-called logical connection argument.

Von Wright views scientific explanation along the lines of the traditional covering law model. He suggests that the social sciences, in contrast, utilize what he calls “practical syllogism” in understanding human actions. In addition, von Wright presents in this work an original picture of causation: a version of the manipulability theory of causation.

In the four decades following von Wright’s classic work, the overall picture in the philosophy of science has changed significantly, and much progress has been made on various fronts. The aim of this contribution is to revisit the central ideas of "Explanation and Understanding" and evaluate them from this perspective. The covering law model of explanation and the regularity theory of causation behind it have since fallen into disfavor, and virtually no one believes that causal explanations even in the natural sciences comply with the covering law model. No wonder, then, that covering law explanations are not found in the social sciences either.

Ironically, the most popular theory of causal explanation in the philosophy of science nowadays is the interventionist theory, which is a descendant of the manipulability theory of von Wright and others. However, this theory can be applied without special difficulties in both the natural sciences and the social sciences.

Von Wright’s logical connection argument and his ideas concerning practical syllogisms are also critically assessed. It is argued that, on closer scrutiny, they do not pose serious problems for the view that the social sciences too provide causal explanations. In sum, von Wright’s arguments against naturalism do not appear, from today’s perspective, particularly convincing.
In recent years, a new approach called “evolutionary psychology” has received much attention in the field of the human sciences. Within this approach it has been claimed, for example, that evolution has shaped our mate-selection preferences so that men tend to be attracted to young women who appear fertile, to seek to mate with as many women as possible whenever the opportunity arises, and to be jealous, whereas women tend to prefer older men with power and resources. Natural selection has also been invoked to explain, among other things, rape.
The early development of modern formal logic was marked by ambitious goals and strong optimism. The aim was to provide mathematics as a whole with an absolutely secure foundation. At its most optimistic, it was even believed that the problems of philosophy would be solved with the help of formal logic. Philosophically, however, the most interesting point is that modern formal logic has made it possible to demonstrate, indisputably and with mathematical precision, the limitations of the formal approach itself. Results of this type have many important philosophical consequences. In this article I present certain central limitative results of logic of this kind.
The most popular and influential strategies used against semantic externalism and the causal theory of reference are critically examined. It is argued that upon closer scrutiny, none of them emerges as truly convincing.