The issue of downward causation (and of mental causation in particular), together with the exclusion problem, is discussed by taking into account some recent advances in the philosophy of science. The problem is viewed from the perspective of the new interventionist theory of causation developed by Woodward. It is argued that from this viewpoint, a higher-level (e.g., mental) state can sometimes truly be causally relevant, and moreover, that the underlying physical state which realizes it may fail to be so.
In the theory of meaning, it is common to contrast truth-conditional theories of meaning with theories which identify the meaning of an expression with its use. One rather exact version of the somewhat vague use-theoretic picture is the view that the standard rules of inference determine the meanings of logical constants. Often this idea also functions as a paradigm for more general use-theoretic approaches to meaning. In particular, the idea plays a key role in the anti-realist program of Dummett and his followers. In the theory of truth, a key distinction is now made between substantial theories and minimalist or deflationist views. According to the former, truth is a genuine substantial property of the truth-bearers, whereas according to the latter, truth does not have any deeper essence: all that can be said about truth is contained in T-sentences (sentences having the form: ‘P’ is true if and only if P). There is no necessary analytic connection between the above theories of meaning and truth, but they nevertheless have some connections. Realists often favour some kind of truth-conditional theory of meaning and a substantial theory of truth (in particular, the correspondence theory). Minimalists and deflationists about truth characteristically advocate the use theory of meaning (e.g. Horwich). Semantic anti-realism (e.g. Dummett, Prawitz) forms an interesting middle case: its starting point is the use theory of meaning, but it usually accepts a substantial view of truth, namely that truth is to be equated with verifiability or warranted assertability. When truth is so understood, it is also possible to accept the idea that meaning is closely related to truth-conditions, and hence the conflict between use theories and truth-conditional theories in a sense disappears on this view.
After sketching the main lines of Hilbert's program, certain well-known and influential interpretations of the program are critically evaluated, and an alternative interpretation is presented. Finally, some recent developments in logic related to Hilbert's program are reviewed.
The rather unrestrained use of second-order logic in the neo-logicist program is critically examined. It is argued in some detail that it brings with it genuine set-theoretical existence assumptions, and that the mathematical power that Hume’s Principle seems to provide, in the derivation of Frege’s Theorem, comes largely from the “logic” assumed rather than from Hume’s Principle itself. It is shown that Hume’s Principle is in reality no stronger than the very weak Robinson Arithmetic Q. Consequently, only a few rudimentary facts of arithmetic are logically derivable from Hume’s Principle, and that hardly counts as a vindication of logicism.
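For reference, Hume’s Principle discussed above is standardly formulated in second-order logic as follows (a textbook formulation, not quoted from the abstract itself):

```latex
% Hume's Principle: the number of Fs equals the number of Gs
% if and only if the Fs and the Gs are equinumerous.
\forall F \,\forall G \,\bigl(\, \#F = \#G \;\leftrightarrow\; F \approx G \,\bigr)
```

Here $F \approx G$ abbreviates the second-order statement that there is a one-to-one correspondence between the $F$s and the $G$s; it is the tacit comprehension and existence assumptions behind such second-order quantification that the abstract targets.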
The minimalist view of truth endorsed by Paul Horwich denies that truth has any underlying nature. According to minimalism, the truth predicate ‘exists solely for the sake of a certain logical need’; ‘the function of the truth predicate is to enable the explicit formulation of schematic generalizations’. Horwich proposes that all there really is to truth follows from the equivalence schema: the proposition that p is true iff p, or, using Horwich’s notation, ⟨p⟩ is true ↔ p. The (unproblematic) instances of the schema form ‘the minimal theory of truth’. Horwich claims that all the facts involving truth can be explained on the basis of the minimal theory. However, it has been pointed out, e.g. by Gupta (1993), that the minimal theory is too weak to entail any general facts about truth, e.g. the fact that…
Three influential forms of realism are distinguished and interrelated: realism about the external world, construed as a metaphysical doctrine; scientific realism about non-observable entities postulated in science; and semantic realism as defined by Dummett. Metaphysical realism about everyday physical objects is contrasted with idealism and phenomenalism, and several potent arguments against these latter views are reviewed.

Three forms of scientific realism are then distinguished: (i) scientific theories and their existence postulates should be taken literally; (ii) the existence of unobservable entities posited by our most successful scientific theories is justified scientifically; and (iii) our best current scientific theories are at least approximately true. It is argued that only some form of scientific realism can make proper sense of certain episodes in the history of science.

Finally, Dummett’s influential formulation of semantic issues about realism is considered. Dummett argued that in some cases, the fundamental issue is not about the existence of entities, but rather about whether statements of some specified class (such as mathematics) have an objective truth value, independently of our means of knowing it. Dummett famously argued against such semantic realism and in favor of anti-realism. The relation of semantic realism to the metaphysical construal of realism is examined, as is Dummett’s main argument against semantic realism.
Intuitionism’s disagreement with classical logic is standardly based on its specific understanding of truth. But different intuitionists have actually explicated the notion of truth in fundamentally different ways. These are considered systematically and separately, and evaluated critically. It is argued that each account faces difficult problems. They all either have implausible consequences or are viciously circular.
An argument, different from the Newman objection, against the view that the cognitive content of a theory is exhausted by its Ramsey sentence is reviewed. The crux of the argument is that Ramsification may ruin inductive systematization between theory and observation. The argument also has some implications concerning the issue of underdetermination.
The problem of mental causation is discussed by taking into account some recent developments in the philosophy of science. The problem is viewed from the perspective of the new interventionist theory of causation developed by Woodward. The import, for mental causation, of the idea that causal claims involve contrastive classes is also discussed. It is argued that mental causation is much less of a problem than it has appeared to be.
"Explanation and Understanding" (1971) by Georg Henrik von Wright is a modern classic in analytic hermeneutics, and in the philosophy of the social sciences and humanities in general. In this work, von Wright argues against naturalism, or methodological monism, i.e. the idea that both the natural sciences and the social sciences follow broadly the same general scientific approach and aim to achieve causal explanations. Against this view, von Wright contends that the social sciences are qualitatively different from the natural sciences: according to his view, the natural sciences aim at causal explanations, whereas the purpose of the social sciences is to understand their subjects. In support of this conviction, von Wright also puts forward a version of the so-called logical connection argument.

Von Wright views scientific explanation along the lines of the traditional covering law model. He suggests that the social sciences, in contrast, utilize what he calls “practical syllogism” in understanding human actions. In addition, von Wright presents in this work an original picture of causation: a version of the manipulability theory of causation.

In the four decades following von Wright’s classic work, the overall picture in the philosophy of science has changed significantly, and much progress has been made on various fronts. The aim of the contribution is to revisit the central ideas of "Explanation and Understanding" and evaluate them from this perspective. The covering law model of explanation and the regularity theory of causation behind it have since then fallen into disfavor, and virtually no one believes that causal explanations even in the natural sciences comply with the covering law model. No wonder then that covering law explanations are not found in the social sciences either.
Ironically, the most popular theory of causal explanation in the philosophy of science nowadays is the interventionist theory, which is a descendant of the manipulability theory of von Wright and others. However, this theory can be applied without special difficulty in both the natural sciences and the social sciences.

Von Wright’s logical connection argument and his ideas concerning practical syllogisms are also critically assessed. It is argued that on closer scrutiny, they do not pose serious problems for the view that the social sciences too provide causal explanations. In sum, von Wright’s arguments against naturalism do not appear, from today’s perspective, particularly convincing.
Quine’s thesis of the indeterminacy of translation has puzzled the philosophical community for several decades. It is unquestionably among the best known and most disputed theses in contemporary philosophy. Quine’s classical argument for the indeterminacy thesis, in his seminal work Word and Object, has even been described by Putnam as “what may well be the most fascinating and the most discussed philosophical argument since Kant’s Transcendental Deduction of the Categories” (Putnam, 1975a: p. 159).
In recent years, a new approach called “evolutionary psychology” has attracted much attention in the field of the human sciences. Its proponents have claimed, for example, that evolution has shaped our mate-choice preferences so that men have a tendency to be attracted to young women who appear fertile, to seek to mate with as many women as possible whenever the opportunity arises, and to be jealous, whereas women are inclined to prefer older men who have power and resources. Natural selection has also been invoked to explain, among other things, rape.
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
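The theorem at issue can be stated schematically as follows (a standard formulation, with $K$ denoting Kolmogorov complexity and $T$ a consistent, computably axiomatized theory interpreting enough arithmetic):

```latex
% Chaitin's incompleteness theorem: there is a constant c_T such that
% T proves no statement of the form K(n) > c_T, even though
% K(n) > c_T in fact holds for all but finitely many n.
\exists c_T \;\forall n \quad T \nvdash \ulcorner K(\bar{n}) > c_T \urcorner
```

The "received interpretation" criticized above reads $c_T$ as a measure of the strength of $T$; the paper's point is that $c_T$ instead depends on the chosen coding of Turing machines.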
Hilary Putnam's famous arguments criticizing Tarski's theory of truth are evaluated. It is argued that they do not succeed in undermining Tarski's approach. One of the arguments is based on the problematic idea of a false instance of the T-schema. The other ignores various issues essential to Tarski's setting, such as the language-relativity of truth definitions.
The status of the social sciences and the humanities, or, more briefly, the human sciences, among the sciences is the subject of many disputes. In the debate over the relationship between the human sciences and the natural sciences, two positions have traditionally been opposed. One has emphasized that the same general scientific method applies to nature and to human beings alike, and that in order to be scientific the human sciences must satisfy the same criteria of scientificity as the natural sciences. The other has stressed that the human sciences are essentially different from the natural sciences, because they follow a special method of understanding.
The Industrial Revolution was initially based on rather simple technology, such as the steam engine, and scientific research played little role in it. Business and industry thus had no particular reason to take an interest in scientific research. Since then the situation has changed completely. Modern “high technology” is tightly intertwined with the latest science. The electronics industry, for example, rests essentially on advanced natural science, and the commercial applications of the life sciences are significant. In the late twentieth century, business and industry have accordingly become an increasingly central funder of research. In Finland, as in other “developed” countries, most research and development is now conducted with private funding. This has also brought with it entirely new kinds of social and ethical problems and challenges.
For example, Cheryl Misak in her book-length examination of verificationism writes that ‘the holist [such as Quine] need not reject verificationism, if it is suitably formulated. Indeed, Quine often describes himself as a verificationist’.[iii] Misak concludes that Quine ‘can be described as a verificationist who thinks that the unit of meaning is large’;[iv] and when comparing Dummett and Quine, Misak states that ‘both can be, and in fact are, verificationists’.[v]
Chaitin’s incompleteness result related to random reals and the halting probability has been advertised as the ultimate and strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations.
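The halting probability mentioned here is Chaitin’s $\Omega$; for a fixed universal prefix-free machine $U$ it is standardly defined as:

```latex
% Omega: the probability that U halts on a randomly generated
% program, summing over all halting programs p (|p| = length in bits).
\Omega \;=\; \sum_{p \,:\, U(p)\downarrow} 2^{-|p|}
```

Prefix-freeness of $U$ guarantees, via Kraft’s inequality, that the sum converges to a real number in $(0,1)$; the associated incompleteness result is that a formal theory can determine at most finitely many bits of $\Omega$’s binary expansion.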
In the early 20th century, scepticism was common among philosophers about the very meaningfulness of the notion of truth – and of the related notions of denotation, definition, etc. (i.e., what Tarski called semantical concepts). Awareness was growing of the various logical paradoxes and anomalies arising from these concepts. In addition, more philosophical reasons were being given for this aversion. The atmosphere changed dramatically with Alfred Tarski’s path-breaking contribution. What Tarski did was to show that, assuming that the syntax of the object language is specified exactly enough, and that the metatheory has a certain amount of set-theoretic power, one can explicitly define truth in the object language. And what can be explicitly defined can be eliminated. It follows that the defined concept cannot give rise to any inconsistencies (that is, paradoxes). This gave new respectability to the concept of truth and related notions. Nevertheless, philosophers’ judgements on the nature and philosophical relevance of Tarski’s work have varied. It is my aim here to review and evaluate some threads in this debate.
Lucas and Redhead () announce that they will defend the views of Redhead () against the argument by Panu Raatikainen (). They certainly re-state the main claims of Redhead (), but they do not give any real arguments in their favour, and do not provide anything that would save Redhead’s argument from the serious problems pointed out in (Raatikainen ). Instead, Lucas and Redhead make a number of seemingly irrelevant points, perhaps indicating a failure to understand the logico-mathematical points at issue.
The prospects and limitations of defining truth in a finite model in the same language whose truth one is considering are thoroughly examined. It is shown that in contradistinction to Tarski's undefinability theorem for arithmetic, it is in a definite sense possible in this case to define truth in the very language whose truth is in question.
Philosophers’ judgements on the philosophical value of Tarski’s contributions to the theory of truth have varied. For example, Karl Popper, Rudolf Carnap, and Donald Davidson have, in their different ways, celebrated Tarski’s achievements and have been enthusiastic about their philosophical relevance. Hilary Putnam, on the other hand, pronounces that “[a]s a philosophical account of truth, Tarski’s theory fails as badly as it is possible for an account to fail.” Putnam has several alleged reasons for his dissatisfaction, but one of them, the one I call the modal objection (cf. Raatikainen 2003), has been particularly influential. In fact, very similar objections have been presented over and over again in the literature. Already in 1954, Arthur Pap had criticized Tarski’s account with a similar argument (Pap 1954). Moreover, both Scott Soames (1984) and John Etchemendy (1988) use, with an explicit reference to Putnam, similar modal arguments in relation to Tarski. Richard Heck (1997), too, shows some sympathy for such considerations. Simon Blackburn (1984, Ch. 8) has put forward a related argument against Tarski. Recently, Marian David has criticized Tarski’s truth definition with an analogous argument as well (David 2004, pp. 389–390). This line of argument is thus apparently one of the most influential critiques of Tarski. It is certainly worthy of serious attention. Nevertheless, I shall argue that, given closer scrutiny, it does not present such an acute problem for the Tarskian approach to truth as many philosophers think. But I also believe that it is important to understand clearly why this is so. Moreover, I think that a careful consideration of the issue illuminates certain important but somewhat neglected aspects of the Tarskian approach.
In his recent article, Christopher Gauker (2001) has presented a thought-provoking argument against deflationist theories of truth. More exactly, he attacks what he calls ‘T-schema deflationism’, that is, the claim that a theory of truth can simply take the form of certain instances of the T-schema.
The issue of whether science is, or can be, value-free has been debated for more than a century. The idea of value-free science is of course as old as science itself, and so are the arguments against this idea. Plato defended it…
A natural problem from elementary arithmetic is presented which is so strongly undecidable that it is not even trial-and-error decidable (in other words, not decidable in the limit). As a corollary, a natural, elementary arithmetical property which makes a difference between intuitionistic and classical theories is isolated.
The key argument of Hilary Putnam for conceptual relativism, his so-called mereological argument, is critically evaluated. It is argued that Putnam’s reasoning is based on a confusion between languages and theories.
The importance of the exclusion argument for contemporary physicalism is emphasized. The recent attempts to vindicate reductive physicalism by invoking certain needed revisions to the Nagelian model of reduction are then discussed. It is argued that such revised views of reduction offer in fact much less help to reductive physicalism than is sometimes supposed, and that many of these views lead to trouble when combined with the exclusion argument.
The question of whether science is, and can be, value-free has provoked lively and at times even heated debate. This question is especially pressing in the human sciences. At one extreme is a picture of scientific research as a disinterested activity standing above all ethical and social questions. At the other is the claim that science can never be value-free, and that scientific research and its results are thoroughly coloured by values. Between these lie a variety of more moderate intermediate positions.
Gödel's two incompleteness theorems are among the most important results in modern logic, and have deep implications for various issues. They concern the limits of provability in formal axiomatic theories. The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F. According to the second incompleteness theorem, such a formal system cannot prove that the system itself is consistent (assuming it is indeed consistent). These results have had a great impact on the philosophy of mathematics and logic. There have been attempts to apply the results also in other areas of philosophy such as the philosophy of mind, but these attempted applications are more controversial. The present entry surveys the two incompleteness theorems and various issues surrounding them.
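Schematically, the two theorems surveyed here take the following form (standard notation, with $G_F$ the Gödel sentence of $F$ and $\mathrm{Cons}(F)$ an arithmetized consistency statement for $F$):

```latex
% First incompleteness theorem: if F is consistent (for the second
% conjunct, omega-consistent, or using Rosser's strengthening),
% then neither the Goedel sentence nor its negation is provable in F.
F \nvdash G_F
\qquad\text{and}\qquad
F \nvdash \neg G_F
% Second incompleteness theorem: if F is consistent, then
F \nvdash \mathrm{Cons}(F)
```

Both results presuppose that $F$ is computably axiomatized and contains a certain amount of arithmetic, as the entry's prose statement indicates.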
It has sometimes been suggested that the so-called new theory of reference (NTR) would provide an alternative picture of meaning and reference which avoids the unwelcome consequences of the meaning-variance thesis and incommensurability. However, numerous philosophers of science have been quite critical towards the idea and NTR in general. It is argued that many of them have an over-simplified and, in part, mistaken understanding of what NTR amounts to. It is submitted that NTR, when correctly understood, can be an important ingredient in the realist toolkit for defending the rationality of science.