This book comes to the rescue of scientific realism, showing that reports of its death have been greatly exaggerated. Philosophical realism holds that the aim of a particular discourse is to make true statements about its subject matter. Ilkka Niiniluoto surveys different kinds of realism in various areas of philosophy and then sets out his own critical realist philosophy of science.
According to the foundationalist picture, shared by many rationalists and positivist empiricists, science makes cognitive progress by accumulating justified truths. Fallibilists, who point out that complete certainty cannot be achieved in empirical science, can still argue that even successions of false theories may progress toward the truth. This proposal was supported by Karl Popper with his notion of truthlikeness or verisimilitude. Popper's own technical definition failed, but the idea that scientific progress means increasing truthlikeness can be expressed by defining degrees of truthlikeness in terms of similarities between states of affairs. This paper defends the verisimilitude approach against Alexander Bird, who argues that the "semantic" definition is not sufficient to define progress and that the "epistemic" definition, referring to justification and knowledge, is more adequate. Here Bird ignores the crucial distinction between real progress and estimated progress, explicated by the difference between absolute degrees of truthlikeness and their evidence-relative expected values. Further, it is argued that Bird's idea of returning to the cumulative model of growth requires an implausible trick of transforming past false theories into true ones.
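As a hedged sketch of the similarity approach (the formulation is mine; the weights and the normalization of the distance function are free parameters of the framework, not fixed by this abstract), the min-sum measure and the evidence-relative estimate can be displayed as follows:

```latex
% Constituents C_1, ..., C_n are the mutually exclusive complete alternatives,
% C_* is the true one, and a theory g allows a subset of them.
\[
  \Delta_{\min}(g, C_*) = \min_{C_i \in g} \Delta(C_i, C_*), \qquad
  \Delta_{\mathrm{sum}}(g, C_*) = \frac{\sum_{C_i \in g} \Delta(C_i, C_*)}{\sum_{i=1}^{n} \Delta(C_i, C_*)}
\]
\[
  \mathrm{Tr}(g, C_*) = 1 - \gamma' \,\Delta_{\min}(g, C_*) - \gamma'' \,\Delta_{\mathrm{sum}}(g, C_*)
\]
% Real progress: a step from g to g' increases Tr(., C_*). Estimated progress:
% the expected truthlikeness relative to evidence e increases, where P is an
% epistemic probability measure over the constituents:
\[
  \mathrm{ver}(g \mid e) = \sum_{i=1}^{n} P(C_i \mid e)\, \mathrm{Tr}(g, C_i)
\]
```

The distinction the paper presses against Bird is visible here: Tr is defined relative to the unknown true constituent, while ver is computable from the evidence, so the two can come apart.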
The modern history of verisimilitude can be divided into three periods. The first began in 1960, when Karl Popper proposed his qualitative definition of what it is for one theory to be more truthlike than another theory, and lasted until 1974, when David Miller and Pavel Tichý published their refutation of Popper's definition. The second period started immediately with the attempt to explicate truthlikeness by means of relations of similarity or resemblance between states of affairs (or their linguistic representations); the work within this similarity approach was summarized in the books of Graham Oddie and Ilkka Niiniluoto. During the subsequent third period, studies in verisimilitude have been actively continued, and interesting results and applications have been achieved, but not many dramatic novelties. While it is now obsolete to claim that truthlikeness with reasonable properties cannot be defined at all, there is still a lot of controversy about the best and least arbitrary approach to doing this.
Charles S. Peirce argued that, besides deduction and induction, there is a third mode of inference which he called "hypothesis" or "abduction". He characterized abduction as reasoning "from effect to cause" and as "the operation of adopting an explanatory hypothesis". Peirce's ideas about abduction, which are related also to historically earlier accounts of heuristic reasoning, have been seen as providing a logic of scientific discovery. Alternatively, abduction is interpreted as giving reasons for pursuing a hypothesis. Inference to the best explanation has also been regarded as an important mode of justification, in everyday life, detective stories, and science alike. In particular, scientific realism has been defended by an abductive no-miracle argument, while the critics of realism have attempted to show that this appeal to abduction is question-begging, circular, or incoherent. This paper approaches these issues by distinguishing weaker and stronger forms of abduction, and by showing how these types of inferences can be given Peircean and Bayesian probabilistic reconstructions.
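As a hedged illustration of the Bayesian reconstruction (this instance is mine, not quoted from the paper), weak abduction can be read as probabilistic confirmation by explanatory success:

```latex
% Bayes' theorem for hypothesis H and surprising evidence E:
\[
  P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
\]
% If H explains E well, P(E | H) is high; if E is surprising, P(E) is low.
% Then P(H | E) > P(H): E confirms H (a "weak" abductive conclusion).
% A "strong" abduction goes further and tentatively accepts the best of the
% rival explanations H_1, ..., H_k -- the step that critics of realism contest.
```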
Belief revision (BR) and truthlikeness (TL) emerged independently as two research programmes in formal methodology in the 1970s. A natural way of connecting BR and TL is to ask under what conditions the revision of a belief system by new input information leads the system towards the truth. It turns out that, for the AGM model of belief revision, the only safe case is the expansion of true beliefs by true input, but this is not very interesting or realistic as a model of theory change in science. The new accounts of non-prioritized belief revision do not seem more promising in this respect, and the alternative BR account of updating by imaging leads to other problems. Still, positive results about increasing truthlikeness by belief revision may be sought by restricting attention to special kinds of theories. Another approach is to link truthlikeness to epistemic matters by an estimation function which calculates expected degrees of truthlikeness relative to evidence. Then we can study how the expected truthlikeness of a theory changes when probabilities are revised by conditionalization or imaging. Again, we can ask under what conditions such changes lead our best theories towards the truth.
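For readers unfamiliar with AGM, the "safe case" mentioned above can be stated in a short sketch (standard AGM notation; the formulation is mine, the truthlikeness claim the paper's):

```latex
% AGM expansion: adding input A to belief set K and closing under consequence Cn.
\[
  K + A = \mathrm{Cn}(K \cup \{A\})
\]
% If every member of K is true and A is true, then K + A contains only truths,
% so expansion by true input cannot lead away from the truth. Genuine revision
% K * A (which must restore consistency when A contradicts K) carries no such
% guarantee, which is why it is the interesting case for theory change.
```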
Scientific realists use the “no miracle argument” to show that the empirical and pragmatic success of science is an indicator of the ability of scientific theories to give true or truthlike representations of unobservable reality. While antirealists define scientific progress in terms of empirical success or practical problem-solving, realists characterize progress by using some truth-related criteria. This paper defends the definition of scientific progress as increasing truthlikeness or verisimilitude. Antirealists have tried to rebut realism with the “pessimistic metainduction”, but critical realists turn this argument into an optimistic view about progressive science.
The distinction between basic and applied research is notoriously vague, despite its frequent use in science studies and in science policy. In most cases it is based on such pragmatic factors as the knowledge and intentions of the investigator or the type of research institute. Sometimes the validity of the distinction is denied altogether. This paper suggests that there are two ways of distinguishing systematically between basic and applied research: (i) in terms of the utilities that define the aims of inquiry, and (ii) by reference to the structure of the relevant knowledge claims. An important type of applied research aims at results that are expressed by technical norms (in von Wright's sense): if you wish to achieve A, and you believe you are in a situation B, then you should do X. This conception of design sciences allows us to re-evaluate many issues in the history, philosophy, and ethics of science.
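The structure of such results can be displayed schematically (a sketch in von Wright's style; the causal gloss on the descriptive core is a common reading, not a quotation):

```latex
% Technical norm and its descriptive core.
\[
  \text{Norm:} \quad \text{If you want } A \text{ and believe you are in situation } B,
  \text{ then you should do } X.
\]
\[
  \text{Descriptive core:} \quad \text{in situation } B,\ \text{doing } X
  \text{ is conducive to (or necessary for) } A.
\]
% The core is a testable means-ends claim; the norm is a conditional
% recommendation that fires only relative to the goal A.
```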
In a seminar with the title “Deduction and Induction in the Sciences”, it is intriguing to ask the following questions: Is there a third type of inference besides deduction and induction? Does this third type of inference play a significant role within scientific inquiry? A positive answer to both of these questions was advocated by Charles S. Peirce throughout his career, even though his opinions changed in important ways during the fifty years between 1865 and 1914. Peirce called the third kind of inference “hypothesis”, “abduction”, or “retroduction”. In this paper, I shall follow Peirce’s steps in discussing abduction by analyzing its logical form, its role in science, and the grounds of its validity. We shall see that Peirce’s discussion is more insightful than many recent attempts to analyze abductive inference. Still, some progress has recently been made in the treatment of abduction within the Bayesian theory of epistemic probability and truth-approximation. The results of this work support the view of scientific realism: abduction, or inference to the best explanation, combined with empirical and experimental testing of scientific theories, is the best method of seeking informative truths in science.
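Peirce's canonical statement of the logical form (CP 5.189, from his 1903 Harvard lectures) is worth displaying:

```latex
\begin{quote}
The surprising fact, $C$, is observed. \\
But if $A$ were true, $C$ would be a matter of course. \\
Hence, there is reason to suspect that $A$ is true.
\end{quote}
```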
Following Herbert Simon’s idea of “the sciences of the artificial”, one may contrast descriptive sciences and design sciences: the former are concerned with “how things are”, the latter tell us “how things ought to be in order to attain goals, and to function”. Typical results of design sciences are thus expressions about means-ends relations, or technical norms in G. H. von Wright’s sense. Theorizing and modeling are important methods of giving a value-free epistemic justification for such technical norms. The values of design sciences are not criteria for the acceptance of theories or models, but rather antecedents of conditional recommendations of actions. Design sciences are thus value-neutral and value-laden at the same time.
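A minimal executable sketch of this value-neutral/value-laden duality (the class, names, and toy example are my illustrative assumptions, not from the paper):

```python
# A technical norm in von Wright's sense: "If you want to achieve A, and
# believe you are in situation B, then you should do X."
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TechnicalNorm:
    goal: str       # A: the end the agent wants to attain
    situation: str  # B: the situation the agent believes to obtain
    action: str     # X: the recommended means

    def recommend(self, wanted_goal: str, believed_situation: str) -> Optional[str]:
        """Fire the conditional recommendation only if both antecedents hold."""
        if wanted_goal == self.goal and believed_situation == self.situation:
            return f"You should do: {self.action}"
        return None

# The value-free epistemic core is the means-ends claim that in situation B,
# doing X is conducive to A; value-ladenness enters only through the agent's
# goal, which serves as the antecedent of the recommendation.
norm = TechnicalNorm(goal="preserve timber", situation="humid climate",
                     action="treat the wood with preservative")
print(norm.recommend("preserve timber", "humid climate"))  # recommendation fires
print(norm.recommend("preserve timber", "arid climate"))   # None: antecedent fails
```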
A holistic account of the meaning of theoretical terms leads scientific realism into serious trouble. Alternative methods of reference fixing are needed by a realist who wishes to show how reference invariance is possible in spite of meaning variance. This paper argues that the similarity theory of truthlikeness and approximate truth, developed by logicians since the mid-1970s, helps to make precise the idea of charitable theoretical reference. Comparisons to the recent proposals by Kitcher and Psillos are given. This argument helps to undermine the sceptical meta-induction about theories, and thereby to re-evaluate Laudan's alleged confutation of scientific realism.
In 1958, to refute the argument known as the theoretician's dilemma, Hempel suggested that theoretical terms might be logically indispensable for the inductive systematization of observational statements. This thesis, in some form or another, has later been supported by Scheffler, Lehrer, and Tuomela, and opposed by Bohnert, Hooker, Stegmüller, and Cornman. In this paper, a critical survey of this discussion is given. Several different putative definitions of the crucial notion of inductive systematization achieved by a theory are discussed by reference to the properties of inductive inference. The consequences of the following differences between deductive and inductive inference are emphasized: the lack of simple transitivity properties (even in a modified sense) of inductive inference, and the failure of the inductive analogue of the converse of the deduction theorem. The main conclusions are: (i) Hempel's original thesis may very well be right, but his argument for it is unsatisfactory; (ii) theoretical terms can be logically indispensable for a non-Hempelian kind of inductive systematization, relative to both Craigian and Ramseyan elimination; (iii) Lehrer's attempt to prove the indispensability of theoretical terms for inductive-probabilistic systematization is, as a modification of the Hempelian kind of inductive-deterministic systematization, unsatisfactory; and (iv) there does not seem to be much hope of escaping conclusion (ii), if it is true, by extending the Craigian replacement programme along the lines suggested by Cornman.
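The failure of transitivity that drives such results has a simple probabilistic illustration (mine, not the paper's):

```latex
% Inductive support is not transitive: P(B|A) and P(C|B) can both be
% substantial while P(C|A) = 0. Toy case with a fair die:
% A = "the outcome is 2", B = "the outcome is even", C = "the outcome is 4 or 6".
\[
  P(B \mid A) = 1, \qquad P(C \mid B) = \tfrac{2}{3}, \qquad P(C \mid A) = 0
\]
```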
In a series of carefully argued and stimulating papers on realism, Uskali Mäki has pointed out that economic theories typically are unrealistic in two senses: by violating "the-whole-truth" and "nothing-but-the-truth" (Mäki 1989, 1992b, 1994b). He suggests that realism in economics can still be rescued by regarding theories as partially true descriptions of essences and as lawlike statements about tendencies. In this chapter, I defend realism by an alternative strategy: idealizational (or "isolational") statements are counterfactual conditionals (Niiniluoto 1986), and the concepts of truth and truthlikeness can be applied to them (Niiniluoto 1987). Further, false but sufficiently truthlike theories may be taken to refer to real entities in the world (Niiniluoto 1997, 1999).
A hallmark of correspondence theories of truth is the principle that sentences are made true by some truth-makers. A well-known objection to treating Tarski’s definition of truth as a correspondence theory has been put forward by Donald Davidson. He argued that Tarski’s approach does not relate sentences to any entities (like facts) to which true sentences might correspond. From the historical viewpoint, it is interesting to observe that Tarski’s philosophical teacher Tadeusz Kotarbiński advocated an ontological doctrine of reism which accepted only concrete individuals and rejected all such abstract entities as facts, states of affairs, properties, and sets. Kotarbiński’s physicalism influenced Tarski, who also avoided concepts like “fact” and “property” in his theory of truth, but, unlike Kotarbiński, freely used set-theoretical terminology. In his mature work in model theory in the 1950s, Tarski systematically used the notion of a relational system (i.e., a domain of objects with designated elements, subsets, and relations). Wilfrid Hodges has argued that the notions of “structure” and “truth in a structure” appeared in Tarski’s work only in 1950. In my view, one can find the main ingredients of the model-theoretic account of truth already in the 1930s. These considerations suggest, against Davidson, that Tarski’s definition presupposes that material truth is always related to some kind of truth-maker. Further, facts as truth-makers can be reconstructed by employing the resources of model theory.
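The model-theoretic notion at issue can be recalled in a line (a standard textbook formulation, not a quotation from Tarski):

```latex
% A relational structure and truth in a structure:
\[
  \mathfrak{A} = \langle A,\, R_1, \dots, R_k,\, a_1, \dots, a_m \rangle, \qquad
  \mathfrak{A} \models \varphi
\]
% e.g. \mathfrak{A} \models \exists x\, P(x) iff some element of the domain A
% belongs to the interpretation of P. On the reading defended here, the
% structure (or the facts it encodes) plays the role of truth-maker.
```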
This paper studies the interplay between two notions which are important for the project of defending scientific realism: abduction and truthlikeness. The main focus is the generalization of abduction to cases where the conclusion states that the best theory is truthlike or approximately true. After reconstructing the recent proposals of Theo Kuipers within the framework of monadic predicate logic, I apply my own notion of truthlikeness. It turns out that a theory with higher truthlikeness does not always have greater empirical success than its less truthlike rivals. It is further shown that the notion of expected truthlikeness provides a fallible link from the approximate explanatory success of a theory to its truthlikeness. This treatment can be applied also in cases where even the best potential theory is an idealization that is known to be false.
This paper employs possible worlds semantics to develop a systematic framework for studying the syntax and the semantics of imagination sentences. Following Hintikka's treatment of propositional attitudes like knowledge and perception, the propositional construction “a imagines that p” is taken as the basic form to which other sentences (such as “a imagines b”, “a imagines an F”, “a imagines b as an F”) are reduced through quantifiers ranging over ‘world lines’, i.e., functions picking out individuals from the relevant possible worlds or scenes. This intensional analysis is compared and contrasted with Barwise and Perry's situation semantics. It is also suggested that the logic of imagination helps us to understand some peculiarities of fictional discourse. For example, acts of imagination can be directed towards fictional entities (e.g. Donald Duck, Anna Karenina) as well as real ones. Further, fictional texts, like novels, can be thought of as occurring within the scope of an imagination operator, relative to the author or the reader. The author of a fictional text T can be viewed as performing an illocutionary act of recommendation of the form: Let us imagine that T.
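In Hintikka-style possible worlds semantics, the basic clause and the quantified reduction look roughly as follows (a sketch; the notation is mine and the details of the reduction are simplified):

```latex
% Basic clause: I_a p ("a imagines that p") is true at world w iff p holds in
% every world w' compatible with what a imagines at w:
\[
  w \models I_a\, p \iff \forall w' \in R_a(w):\ w' \models p
\]
% Quantifying in via world lines (functions tracking an individual across the
% imagination-alternatives), "a imagines b" becomes, roughly,
\[
  \exists x\, I_a (x = b)
\]
% and "a imagines an F" becomes \exists x\, I_a F(x), with x ranging over
% world lines rather than over worldbound individuals.
```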
According to the traditional requirement, formulated by William Whewell in his account of the “consilience of inductions” in 1840, a scientific hypothesis should have unifying power in the sense that it explains and predicts several mutually independent phenomena. Variants of this notion of consilience or unification include deductive, inductive, and approximate systematization. Inference from surprising phenomena to their theoretical explanations was called abduction by Charles Peirce. As a unifying theory is independently testable by new kinds of phenomena, it should also receive confirmation from its empirical success. A study of the prospects of Bayesian probabilism for motivating this kind of criterion of abductive confirmation is shown to lead to two quite distinct conceptions of unification.
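One elementary Bayesian fact behind such criteria (my illustration, not the paper's analysis):

```latex
% If a theory T deductively entails two independent phenomena E1 and E2
% (so P(E1 & E2 | T) = 1 and hence P(E1 | T) = 1), then by Bayes' theorem
\[
  P(T \mid E_1 \wedge E_2) = \frac{P(T)}{P(E_1 \wedge E_2)}
  \;\geq\; \frac{P(T)}{P(E_1)} = P(T \mid E_1)
\]
% so each further independent success raises T's probability. How to credit
% the *unification* of E1 and E2 over and above this is exactly where the two
% conceptions of unification diverge.
```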
This paper gives a critical evaluation of the philosophical presuppositions and implications of two current schools in the sociology of knowledge: the Strong Programme of Bloor and Barnes, and the constructivism of Latour and Knorr-Cetina. Bloor's arguments for his externalist symmetry thesis (i.e., scientific beliefs must always be explained by social factors) are found to be incoherent or inconclusive. At best, they suggest a Weak Programme of the sociology of science: when theoretical preferences in a scientific community, SC, are first internally explained by appealing to the evidence, e, and the standards or values, V, accepted in SC, then a sociologist may sometimes step in to explain why e and V were accepted in SC. Latour's story about the social construction of facts in scientific laboratories is found to be misleading or incredible. The idea that scientific reality is an artifact turns out to have some interesting affinities with classical pragmatism, instrumentalism, phenomenology, and internal realism. However, the constructivist account of theoretical entities in terms of negotiation and social consensus is less plausible than the alternative realist story which explains consensus by the preexistence of mind-independent real entities. The author concludes that critical scientific realism, developed with the concept of truthlikeness, is compatible with the thesis that scientific beliefs or knowledge claims may be relative to various types of cognitive and practical interests. However, the realist denies, with good reasons, the stronger type of relativism which takes reality and truth to be relative to persons, groups, or social interests.
In a recent article, Goldstick and O'Neill propose a definition for the comparative "truer than" relation between rival propositions. This definition is studied here in a context where the concept of "convexity" is well defined for propositions. It turns out that the Goldstick-O'Neill definition gives a reasonable but very restricted sufficient condition for the "truer than" relation, but fails as a necessary condition.
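For orientation, the notion of convexity assumed in such a context is the standard one (the Goldstick-O'Neill definition itself is not reproduced here):

```latex
% Where the underlying states are linearly ordered (or carry a betweenness
% relation), a proposition A -- construed as a set of states -- is convex iff
% it contains everything between any two of its members:
\[
  \forall x, z \in A\ \ \forall y\ (x \leq y \leq z \;\Rightarrow\; y \in A)
\]
```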
Many scientific realists think that the best reasons for scientific theories are abductive, i.e., they must appeal to what is also called inference to the best explanation (IBE), while some anti-realists have argued that the use of abduction in defending realism is question-begging, circular, or incoherent. This paper studies the idea that abductive inference can be reformulated by taking its conclusion to concern the truthlikeness of a hypothetical theory on the basis of its success in explanation and prediction. The strength of such arguments is measured by the estimated verisimilitude of their conclusion given the premises. It is argued that this formulation helps to make precise and to justify the "ultimate argument for scientific realism": the empirical success of scientific theories would be a miracle unless they were truthlike.
Woosuk Park’s paper “Misrepresentation in Context” is a useful plea for a theory of representation with promising interaction between cognitive science, philosophy of science, and aesthetics. In this paper, I argue that such a unified account is provided by Charles S. Peirce’s semiotics. This theory puts Park’s criticism of Nelson Goodman and Jerry Fodor in context. Some of Park’s pertinent remarks on the problem of misrepresentation can be illuminated by the account of truthlikeness and idealization developed by philosophers of science.
Popper distinguishes the problems of theoretical and pragmatic preference between rival theories, but he claims that there is a common non-inductive solution to both of them, viz. the "best-tested theory", or the theory with the highest degree of corroboration. He further suggests that the degrees of corroboration serve as indicators of verisimilitude. One may therefore raise the question whether the recent theory of verisimilitude gives a general non-inductive solution to the problem of theoretical preference. This paper argues that this is not the case: the theory of verisimilitude is applicable to this problem if and only if there is an independent solution to the problem of induction. Moreover, the solutions to the theoretical and pragmatic problems of preference coincide only in some special cases.
The concept of verisimilitude is an indispensable tool for a fallibilist and realist epistemology. Part of the argument for this thesis consists in the important applications of this notion within the history and philosophy of science. But perhaps the harder part is to convince a sceptical reader of the existence of this concept. A general programme for defining and estimating degrees of truthlikeness for various kinds of scientific statements is outlined in some detail. Ten years after Miller's and Tichý's refutation of Popper's attempted definition, this paper reviews recent developments and debates, and concludes that the treatment of the Popperian problem with Carnapian logical tools leads to a new synthesis which turns out formally to contain Levi's theory of epistemic utilities as a special case.
The recent theories of truthlikeness have not paid attention to the distinction between lawlike and accidental generalizations. L.J. Cohen has expressed this by saying that science aims at legisimilitude rather than verisimilitude. G. Oddie has given a reply to Cohen by defining the notion of legisimilitude in terms of higher-order logics. This paper gives a different reply to Cohen by treating laws as physically necessary generalizations and by defining the notion of legisimilitude as closeness to a suitably chosen lawlike sentence.