This paper is intended as a contribution to a recent vigorous debate in The Times, between the distinguished journalist Bernard Levin, the eminent Oxford economist Wilfred Beckerman and the Archbishop of York, John Habgood, among others. The debate concerns morality, ‘free will’ and determinism. As a former German Jew, who lost close relatives at Auschwitz and who suffered severely in my youth under daily, virulent Nazi persecution, I obviously cannot remain strictly detached and neutral. Yet I shall attempt to retain as much neutrality as possible, since I think that the main rivals in this debate all have some very relevant, interesting and valid things to say. Let me also state other, probably very relevant, biases. I am an ardent Zionist. In addition, I am a diehard mechanistic materialist as regards basic philosophy, although I am tolerant of other people's religious feelings, because I realize that my materialism is as metaphysical as their religious views. With this as background, let me return to the technical issues. Obviously, in a philosophical journal one can write at a level above that of The Times, where there is, perhaps, insufficient room to debate philosophical, biological, physical and other niceties in some depth.
This statement by the late Franz Rosenthal is, in a sense, the uniting theme of the present volume's 35 articles by renowned scholars of Islamic Studies, Middle ...
In epistemology and in philosophy of language there is fierce debate about the role of context in knowledge, understanding, and meaning. Many contemporary epistemologists take seriously the thesis that epistemic vocabulary is context-sensitive. This thesis is of course a semantic claim, so it has brought epistemologists into contact with work on context in semantics by philosophers of language. This volume brings together the debates, in a set of twelve specially written essays representing the latest work by leading figures in the two fields. All future work on contextualism will start here. Contributors: Kent Bach, Herman Cappelen, Andy Egan, Michael Glanzberg, John Hawthorne, Ernest Lepore, Peter Ludlow, Peter Pagin, Georg Peter, Paul M. Pietroski, Gerhard Preyer, Jonathan Schaffer, Jason Stanley, Brian Weatherson, Timothy Williamson.
We start this paper by arguing that causality should, in analogy with force in Newtonian physics, be understood as a theoretical concept that is not explicated by a single definition, but by the axioms of a theory. Such an understanding of causality implicitly underlies the well-known theory of causal nets (TCN) and has been explicitly promoted by Glymour. In this paper we investigate the explanatory warrant and empirical content of TCN. We sketch how the assumption of directed cause–effect relations can be philosophically justified by an inference to the best explanation. We then ask whether the explanations provided by TCN are merely post-facto or have independently testable empirical content. To answer this question we develop a fine-grained axiomatization of TCN, including a distinction of different kinds of faithfulness. A number of theorems show that although the core axioms of TCN are empirically empty, extended versions of TCN have successively increasing empirical content.
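For orientation, the two core axioms usually taken to define TCN can be stated as follows; this is the standard Spirtes–Glymour–Scheines formulation, given here as a reminder rather than as the fine-grained axiomatization the paper itself develops:

```latex
% Causal Markov Condition (CMC): in a causal graph G over variables V,
% every variable is probabilistically independent of its non-descendants,
% conditional on its direct causes (parents).
\forall X \in \mathbf{V}:\quad X \perp\!\!\!\perp \mathrm{NonDesc}(X) \mid \mathrm{Par}(X)

% Faithfulness: conversely, the only conditional independencies holding
% in the distribution P are those entailed by applying the CMC to G.
X \perp\!\!\!\perp Y \mid \mathbf{Z} \ \text{in } P
  \;\Longrightarrow\; \text{CMC applied to } G \text{ entails } X \perp\!\!\!\perp Y \mid \mathbf{Z}
```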
This article describes abductions as special patterns of inference to the best explanation whose structure determines a particularly promising abductive conjecture and thus serves as an abductive search strategy. A classification of different patterns of abduction is provided that is intended to be as complete as possible. An important distinction is that between selective abductions, which choose an optimal candidate from a given multitude of possible explanations, and creative abductions, which introduce new theoretical models or concepts. While selective abduction has dominated the literature, creative abductions are rarely discussed, although they are essential in science. The article introduces several kinds of creative abductions, such as theoretical model abduction, common cause abduction and statistical factor analysis, and illustrates them by various real case examples. It is suggested that scientifically fruitful abductions be demarcated from purely speculative abductions by the criterion of causal unification.
There are two ways of representing rational belief: qualitatively as yes-or-no belief, and quantitatively as degrees of belief. Standard rationality conditions are: consistency and logical closure for qualitative belief; satisfaction of the probability axioms for quantitative belief; and a relationship between qualitative and quantitative beliefs in accordance with the Lockean thesis. In this paper, it is shown that these conditions are inconsistent with each of three further rationality conditions: fallibilism, open-mindedness, and invariance under independent conceptual expansions. Restrictions of the Lockean thesis that have been suggested in the literature cannot remove the inconsistency. In the outlook we discuss two alternative ways of dealing with this problem: restricting conjunctive closure or going for a dual system account.
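The shape of such tensions can be illustrated with the classic lottery example; a worked sketch of how the Lockean thesis collides with consistency and conjunctive closure (the paper's three further conditions are different, but the arithmetic below shows the basic mechanism):

```latex
% Lockean thesis: believe p iff the probability of p reaches a
% threshold t, with 1/2 < t < 1:
\mathrm{Bel}(p) \iff P(p) \geq t

% Fair lottery with n tickets, n \geq 1/(1-t). For each ticket i:
P(\neg w_i) = 1 - \tfrac{1}{n} \geq t
  \quad\Longrightarrow\quad \mathrm{Bel}(\neg w_i) \ \text{for all } i

% Closure under conjunction then yields
\mathrm{Bel}(\neg w_1 \wedge \dots \wedge \neg w_n),
% which contradicts the believed certainty that some ticket wins:
P(w_1 \vee \dots \vee w_n) = 1 \geq t
  \quad\Longrightarrow\quad \mathrm{Bel}(w_1 \vee \dots \vee w_n)
```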
Philosophy of Science: A Unified Approach combines a general introduction to philosophy of science with an integrated survey of all its important subfields. As the book's subtitle suggests, this excellent overview is guided methodologically by "a unified approach" to philosophy of science: behind the diversity of scientific fields one can recognize a methodological unity of the sciences. This unity is worked out in the book, which reveals all the while important differences between subject areas. Structurally, this comprehensive book offers a two-part approach, which makes it an excellent introduction for students new to the field and a useful resource for more advanced students. Each chapter is divided into two sections. The first section assumes no foreknowledge of the subject introduced, and the second section builds upon the first by bringing into the conversation more advanced, complementary topics. Definitions, key propositions, examples, and figures cover all of the core material. At the end of every chapter there are selected readings and exercises. The book also includes a comprehensive bibliography and an index.
This article suggests a ‘best alternative’ justification of induction (in the sense of Reichenbach) which is based on meta-induction. The meta-inductivist applies the principle of induction to all competing prediction methods which are accessible to her. It is demonstrated, and illustrated by computer simulations, that there exist meta-inductive prediction strategies whose success is approximately optimal among all accessible prediction methods in arbitrary possible worlds, and which dominate the success of every noninductive prediction strategy. The proposed justification of meta-induction is mathematically analytical. It implies, however, an a posteriori justification of object-induction based on the experiences in our world.
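The flavour of such simulations can be conveyed in a few lines. The sketch below implements one simple meta-inductive strategy, a success-weighted average over the accessible methods' predictions; the particular methods, weighting rule, and toy environment are illustrative assumptions, not the article's exact setup (which uses attractivity-based weights):

```python
import random

ROUNDS = 10_000
random.seed(1)

# Toy event sequence in {0, 1}: a coin whose bias shifts halfway through.
events = [1.0 if random.random() < (0.3 if t < ROUNDS // 2 else 0.8) else 0.0
          for t in range(ROUNDS)]

# Accessible prediction methods (two non-inductive, one object-inductive).
def always_one(history):  return 1.0
def always_zero(history): return 0.0
def object_induction(history):  # predict the observed frequency so far
    return sum(history) / len(history) if history else 0.5

methods = [always_one, always_zero, object_induction]
scores = [0.0] * len(methods)   # cumulative success, where success = 1 - |error|
mi_score = 0.0

history = []
for t, e in enumerate(events):
    preds = [m(history) for m in methods]
    # Meta-inductive prediction: average weighted by past success rates.
    weights = [s / t if t else 1.0 for s in scores]
    total = sum(weights)
    mi_pred = sum(w * p for w, p in zip(weights, preds)) / total
    scores = [s + 1 - abs(p - e) for s, p in zip(scores, preds)]
    mi_score += 1 - abs(mi_pred - e)
    history.append(e)

for m, s in zip(methods, scores):
    print(f"{m.__name__:>16}: success rate {s / ROUNDS:.3f}")
print(f"{'meta-inductivist':>16}: success rate {mi_score / ROUNDS:.3f}")
```

In runs like this the meta-inductivist ends up near the best accessible method, illustrating (though of course not proving) the optimality claim.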
The basic theory of scientific understanding presented in Sections 1–2 exploits three main ideas. First, that to understand a phenomenon P (for a given agent) is to be able to fit P into the cognitive background corpus C (of the agent). Second, that to fit P into C is to connect P with parts of C (via arguments in a very broad sense) such that the unification of C increases. Third, that the cognitive changes involved in unification can be treated as sequences of shifts of phenomena in C. How the theory fits typical examples of understanding and how it excludes spurious unifications is explained in detail. Section 3 gives a formal description of the structure of cognitive corpuses which contain descriptive as well as inferential components. The theory of unification is then refined in the light of so-called puzzling phenomena, to enable important distinctions, such as that between consonant and dissonant understanding. In Section 4, the refined theory is applied to several examples, among them a case study of the development of the atomic model. The final part contains a classification of kinds of understanding and a discussion of the relation between understanding and explanation.
This paper presents an outline of a new theory of relevant deduction which arose from the purpose of solving paradoxes in various fields of analytic philosophy. In contrast to relevance logics, this approach does not replace classical logic by a new one, but distinguishes between relevance and validity. It is argued that irrelevant arguments are, although formally valid, nonsensical and even harmful in practical applications. The basic idea is this: a valid deduction is relevant iff no subformula of the conclusion is replaceable on some of its occurrences by any other formula salva validitate of the deduction. The paper first motivates the approach by showing that four paradoxes seemingly very distant from each other have a common source. Then the exact definition of relevant deduction is given and its logical properties are investigated. An extension to relevance of premises is discussed. Finally the paper presents an overview of its applications in philosophy of science, ethics, cognitive psychology and artificial intelligence.
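A minimal worked pair illustrates the replacement criterion (my own examples, chosen in the spirit of the definition just quoted):

```latex
% Irrelevant: disjunctive weakening.
p \;\vdash\; p \lor q
% The conclusion-subformula q may be replaced by ANY formula r
% salva validitate (p \vdash p \lor r remains valid), so the
% deduction is valid but irrelevant.

% Relevant: modus ponens.
p,\; p \to q \;\vdash\; q
% Replacing the conclusion q by an arbitrary r yields
% p,\; p \to q \vdash r, which is invalid; no conclusion-subformula
% is replaceable salva validitate, so the deduction is relevant.
```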
Function and teleology can be naturalized either by reference to systems with a particular type of organization or by reference to a particular kind of history. As functions are generally ascribed to states or traits according to their current role and regardless of their origin, etiological accounts are inappropriate. Here, I offer a systems-theoretical interpretation as a new version of an organizational account of functionality, which is more comprehensive than traditional cybernetic views and provides explicit criteria for empirically testable function ascriptions. I propose that functional states, traits or items are those components of a complex system which are, under certain circumstances, necessary for its self-reproduction. I show how this notion can be applied in intra- and trans-generational function ascriptions in biology, how it can deal with the problems of multifunctionality and functional equivalents, and how it relates to concepts like fitness and adaptation. Finally, I argue that most intentional explanations can be treated as functional explanations.
It has not been sufficiently considered in philosophical discussions of ceteris paribus (CP) laws that distinct kinds of CP-laws exist in science with rather different meanings. I distinguish between (1) comparative CP-laws and (2) exclusive CP-laws. There also exist mixed CP-laws, which contain both a comparative and an exclusive CP-clause. Exclusive CP-laws may be either (2.1) definite, (2.2) indefinite or (2.3) normic. While CP-laws of kinds (2.1) and (2.2) exhibit deductivistic behaviour, CP-laws of kind (2.3) require a probabilistic or non-monotonic reconstruction. CP-laws of kind (1) may be either deductivistic or probabilistic. All these kinds of CP-laws have empirical content by which they are testable, except CP-laws of kind (2.2), which are almost vacuous. Typically, CP-laws of kind (1) express invariant correlations, CP-laws of kind (2.1) express closed-system laws of the physical sciences, and CP-laws of kind (2.3) express normic laws of the non-physical sciences based on evolution-theoretic stability properties.
Zwart and Franssen’s impossibility theorem reveals a conflict between the possible-world-based content-definition and the possible-world-based likeness-definition of verisimilitude. In Sect. 2 we show that the possible-world-based content-definition violates four basic intuitions of Popper’s consequence-based content-account of verisimilitude, and therefore cannot be said to be in the spirit of Popper’s account, although this is the opinion of some prominent authors. In Sect. 3 we argue that in consequence-accounts, content-aspects and likeness-aspects of verisimilitude are not in conflict with each other, but in agreement. We explain this fact by pointing towards the deep difference between possible-world- and consequence-accounts, which does not lie in the difference between syntactic (object-language) versus semantic (meta-language) formulations, but in the difference between ‘disjunction-of-possible-worlds’ versus ‘conjunction-of-parts’ representations of theories. Drawing on earlier work, we explain in Sect. 4 how the shortcomings of Popper’s original definition can be repaired by what we call the relevant element approach. We propose a quantitative likeness-definition of verisimilitude based on relevant elements which provably agrees with the qualitative relevant content-definition of verisimilitude on all pairs of comparable theories. We conclude the paper with a plea for consequence-accounts and a brief analysis of the problem of language-dependence (Sect. 6).
Normic laws have the form "if A, then normally B." They are omnipresent in everyday life and in non-physical 'life' sciences such as biology, psychology, the social sciences, and the humanities. They differ significantly from ceteris-paribus laws in physics. While several authors have doubted that normic laws are genuine laws at all, others have argued that normic laws express a certain kind of prototypical normality which is independent of statistical majority. This paper presents a foundation for normic laws which is based on generalized evolution theory and explains their omnipresence, lawlikeness, and reliability. It is argued that the evolutionary origin of normic laws establishes a systematic connection between prototypical and statistical normality.
Starting from a brief recapitulation of the contemporary debate on scientific realism, this paper argues for the following thesis: Assume a theory T has been empirically successful in a domain of application A, but was superseded later on by a superior theory T*, which was likewise successful in A but has an arbitrarily different theoretical superstructure. Then under natural conditions T contains certain theoretical expressions, which yielded T's empirical success, such that these T-expressions correspond (in A) to certain theoretical expressions of T*, and given T* is true, they refer indirectly to the entities denoted by these expressions of T*. The thesis is first motivated by a study of the phlogiston–oxygen example. Then the thesis is proved in the form of a logical theorem, and illustrated by further examples. The final sections explain how the correspondence theorem justifies scientific realism and work out the advantages of the suggested account. Contents: 1. Introduction: Pessimistic Meta-induction vs. Structural Correspondence; 2. The Case of the Phlogiston Theory; 3. Steps Towards a Systematic Correspondence Theorem; 4. The Correspondence Theorem and Its Ontological Interpretation; 5. Further Historical Applications; 6. Discussion of the Correspondence Theorem: Objections and Replies; 7. Consequences for Scientific Realism and Comparison with Other Positions (7.1 Comparison with constructive empiricism; 7.2 Major difference from standard scientific realism; 7.3 From minimal realism and correspondence to scientific realism; 7.4 Comparison with particular realistic positions).
This paper supplements an earlier one (Wassermann 1978b). Its views aim to reinforce those of Lewontin and other prominent evolutionists, but differ significantly from the opinions of some philosophers of science, notably Popper (1957) and Olding (1978). A basic distinction is made between 'laws' and 'theories of mechanisms'. The 'Theory of Evolution' is not characterized by laws, but is viewed here as a hypertheory which explains classifiable evolutionary phenomena in terms of subordinate classifiable theories of 'evolution-specific mechanisms' (ESMs), each of which could apply to a host of species. Adaptations could result from ESMs that are rooted in molecular complementarities. The status of optimization theories that aim to predict best adapted states of organisms or populations is also discussed.
The expansion or revision of false theories by true evidence does not always increase their verisimilitude. After a comparison of different notions of verisimilitude, the relation between verisimilitude and belief expansion or revision is investigated within the framework of the relevant element account. We are able to find certain interesting conditions under which both the expansion and the revision of theories by true evidence are guaranteed to increase their verisimilitude.
The constraint against harming people in order to save yourself and others seems stronger than the constraint against harming people as a consequence of saving yourself and others. The reduced constraint against acting in one type of case is often justified with reference to the intentions of the agent or to the fact that she does not use the people she harms as a means. In this article I offer a victim-centered account. I argue that the circumstances in which the people to be harmed find themselves are significant in distinguishing morally between two instances of harming.
According to the paradigm of adaptive rationality, successful inference and prediction methods tend to be local and frugal. As a complement to work within this paradigm, we investigate the problem of selecting an optimal combination of prediction methods from a given toolbox of such local methods, in the context of changing environments. These selection methods are called meta-inductive (MI) strategies if they are based on the success records of the toolbox methods. No absolutely optimal MI strategy exists—a fact that we call the “revenge of ecological rationality”. Nevertheless, one can show that a certain MI strategy exists, called “AW”, which is universally long-run optimal, with provably small short-run losses, in comparison to any set of prediction methods that it can use as input. We call this property universal access-optimality. Local and short-run improvements over AW are possible, but only at the cost of forfeiting universal access-optimality. The last part of the paper includes an empirical study of MI strategies in application to an 8-year-long data set from the Monash University Footy Tipping Competition.
"This book represents a continuation of the research project in philosophy of language and semantics represented in the journal "Protosociology" at the J. W. ...
According to the comparative Bayesian concept of confirmation, rationalized versions of creationism come out as empirically confirmed. From a scientific viewpoint, however, they are pseudo-explanations, because with their help all kinds of experiences are explainable in an ex-post fashion, by way of ad-hoc fitting of an empirically empty theoretical framework to the given evidence. An alternative concept of confirmation that attempts to capture this intuition is the use novelty (UN) criterion of confirmation. Serious objections have been raised against this criterion. In this paper I suggest solutions to these objections. Based on them, I develop an account of genuine confirmation that unifies the UN-criterion with a refined probabilistic confirmation concept that is explicated in terms of the confirmation of evidence-transcending content parts of the hypothesis.
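The Bayesian arithmetic behind the first claim is simple; a hedged sketch of why ex-post fitting suffices for comparative confirmation:

```latex
% Suppose the framework H has been fitted ex post so as to entail the
% evidence E, and suppose P(H) > 0 and P(E) < 1. Then
P(E \mid H) = 1 > P(E),
% so by Bayes' theorem
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)} \;=\; \frac{P(H)}{P(E)} \;>\; P(H),
% i.e. H counts as confirmed on the comparative concept, however
% empirically empty its theoretical framework may be.
```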
Like scientific theories, metaphysical theories can and should be justified by the inference of creative abduction. Two rationality conditions are proposed that distinguish scientific from speculative abductions: achievement of unification and independent testability. Particularly important in science is common cause abduction. The justification of metaphysical realism is structurally similar to scientific abductions: external objects are justified as common causes of perceptual experiences. While the reliability of common cause abduction is entailed by a principle of causality, the latter principle has an abductive justification based on statistical phenomena.
This paper has three goals. The first goal is to work out the difference between literal ceteris paribus laws in the sense of “all others being equal” and ceteris rectis “laws” in the sense of “all others being right”. While cp laws involve a universal quantification, cr generalizations involve an existential quantification over the values of the remainder variables Z. As a result, the two differ crucially in their confirmability and lawlikeness. The second goal is to provide a classification of different kinds of cr generalizations, including certain transition cases between cr generalizations and cp laws. The third goal is to work out what cp laws and all kinds of cr assertions have in common: they figure as an information source for assertions of causal influence between variables.
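Taking the quantificational contrast at face value, one schematic rendering runs as follows. This is an assumption-laden gloss of my own, not the paper's official formalism:

```latex
% cp ("all others being equal"): for ALL values z of the remainder
% variables Z, A-systems with Z held at z are B:
\text{cp:}\quad \forall z\, \forall x\, \big( A(x) \wedge Z(x){=}z \;\rightarrow\; B(x) \big)

% cr ("all others being right"): for SOME ("right") value z of Z,
% A-systems with Z at z are B:
\text{cr:}\quad \exists z\, \forall x\, \big( A(x) \wedge Z(x){=}z \;\rightarrow\; B(x) \big)

% Provisos guaranteeing that the instantiating z-cases are non-empty
% are omitted here; without them the cr schema risks trivial truth.
```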
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions that express high conditional probabilities. In this paper we investigate four prominent LP systems, the systems O, P, Z, and QC. These systems differ in the number of inferences they license. LP systems that license more inferences enjoy the possible reward of deriving more true and informative conclusions, but with this possible reward comes the risk of drawing more false or uninformative conclusions. In the first part of the paper, we present the four systems and extend each of them by theorems that allow one to compute almost-tight lower probability bounds for the conclusion of an inference, given lower probability bounds for its premises. In the second part of the paper, we investigate by means of computer simulations which of the four systems provides the best balance of reward versus risk. Our results suggest that system Z offers the best balance.
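To give the flavour of such bound theorems, here is a small sketch of three classical lower-bound rules from Adams-style probability logic for system P; these particular bounds are standard results, not the paper's extended almost-tight theorems for O, P, Z, and QC:

```python
def and_rule(x: float, y: float) -> float:
    """From P(B|A) >= x and P(C|A) >= y, infer P(B and C | A) >= max(0, x+y-1)."""
    return max(0.0, x + y - 1.0)

def cautious_monotonicity(x: float, y: float) -> float:
    """From P(B|A) >= x and P(C|A) >= y, infer P(C | A and B) >= 1 - (1-y)/x."""
    return max(0.0, 1.0 - (1.0 - y) / x) if x > 0 else 0.0

def cut(x: float, y: float) -> float:
    """From P(B|A) >= x and P(C | A and B) >= y, infer P(C|A) >= x*y."""
    return x * y

if __name__ == "__main__":
    # With 0.9 lower bounds on both premises, the conclusions are bounded by:
    print(and_rule(0.9, 0.9))               # 0.8
    print(cautious_monotonicity(0.9, 0.9))  # 0.888...
    print(cut(0.9, 0.9))                    # 0.81
```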
In the first part I argue that normic laws are the phenomenological laws of evolutionary systems. If this is true, then intuitive human reasoning should be well adapted to reasoning from normic laws. In the second part I show that system P is a tool for reasoning with normic laws which satisfies two important evolutionary standards: it is probabilistically reliable, and it has rules of low complexity. In the third part I report the results of an experimental study which demonstrate that intuitive human reasoning accords well with the basic argument patterns of system P.
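For readers unfamiliar with it, the basic argument patterns of system P (in the standard Kraus–Lehmann–Magidor formulation, with A ⇒ B read as "if A, then normally B") include the following; this is a reminder of the standard rules, not the paper's own presentation:

```latex
\text{(And)}\quad  A \Rightarrow B,\; A \Rightarrow C \;\;\vdash\;\; A \Rightarrow (B \wedge C)
\text{(CM)}\quad   A \Rightarrow B,\; A \Rightarrow C \;\;\vdash\;\; (A \wedge B) \Rightarrow C
\text{(Cut)}\quad  A \Rightarrow B,\; (A \wedge B) \Rightarrow C \;\;\vdash\;\; A \Rightarrow C
\text{(Or)}\quad   A \Rightarrow C,\; B \Rightarrow C \;\;\vdash\;\; (A \vee B) \Rightarrow C
```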
This paper elaborates on the following correspondence theorem (which has been defended and formally proved elsewhere): if theory T has been empirically successful in a domain of applications A, but was superseded later on by a different theory T* which was likewise successful in A, then under natural conditions T contains theoretical expressions which were responsible for T's success and correspond (in A) to certain theoretical expressions of T*. I illustrate this theorem with the phlogiston versus oxygen theories of combustion, and the classical versus relativistic theories of mass. The ontological consequences of the theorem are worked out in terms of indirect reference and partial truth. The final section explains how the correspondence theorem may justify a weak version of scientific realism without presupposing the no-miracles argument.
The no free lunch theorem is a radicalized version of Hume's induction skepticism. It asserts that, relative to a uniform probability distribution over all possible worlds, all computable prediction algorithms—whether ‘clever’ inductive or ‘stupid’ guessing methods—have the same expected predictive success. This theorem seems to be in conflict with results about meta-induction. According to these results, certain meta-inductive prediction strategies may dominate other methods in their predictive success. In this article this conflict is analyzed and dissolved by means of probabilistic analysis and computer simulation.
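The no-free-lunch half of the conflict is easy to reproduce in miniature: under a uniform distribution over binary sequences, an inductive "predict the majority so far" rule and blind guessing have the same expected success, whereas in a non-uniform world induction pulls ahead. A minimal sketch with illustrative parameters (not the article's actual simulations):

```python
import random

def majority_rule(history):
    """Inductive method: predict the more frequent symbol seen so far."""
    if not history:
        return random.randint(0, 1)
    return int(2 * sum(history) >= len(history))

def guessing(history):
    """Non-inductive method: uniform random guessing."""
    return random.randint(0, 1)

def success_rate(world, method, rounds=1000, trials=300):
    hits = 0
    for _ in range(trials):
        history = []
        for _ in range(rounds):
            prediction = method(history)
            event = world()
            hits += int(prediction == event)
            history.append(event)
    return hits / (rounds * trials)

random.seed(0)
uniform_world = lambda: random.randint(0, 1)        # the no-free-lunch setting
biased_world  = lambda: int(random.random() < 0.8)  # an induction-friendly world

for name, world in [("uniform", uniform_world), ("biased", biased_world)]:
    print(f"{name:>8} world:  majority {success_rate(world, majority_rule):.3f}"
          f"  guessing {success_rate(world, guessing):.3f}")
```

In the uniform world both methods hover around 0.5; in the biased world the inductive rule approaches 0.8 while guessing stays at 0.5.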
Two key ideas of scientific explanation, explanation as causal information and explanation as unification, have frequently been set in mutual opposition. This paper proposes a "dialectical solution" to this conflict, by arguing that causal explanations are preferable to non-causal ones because they lead to a higher degree of unification at the level of explaining statistical regularities. The core axioms of the theory of causal nets (TC) are justified because they offer the best, if not the only, unifying explanation of two statistical phenomena: screening off and linking up. Alternative explanations of the two phenomena are discussed and it is shown why they don't work. It is demonstrated that although the core axioms of TC are empirically vacuous, extended versions of TC have empirical content by means of which they can generate independently testable predictions.
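Both phenomena are easy to exhibit numerically. In the toy model below (my illustrative parameters, not the paper's), a common cause screens off its effects from one another, while conditioning on a common effect "links up" its otherwise independent causes:

```python
import random

random.seed(0)
N = 200_000

def corr(pairs):
    """Pearson correlation of a list of (x, y) samples."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# Screening off: common cause C with two effects A and B.
cc = []
for _ in range(N):
    c = random.random() < 0.5
    a = random.random() < (0.9 if c else 0.1)
    b = random.random() < (0.9 if c else 0.1)
    cc.append((int(a), int(b), c))
print("A,B marginally:", round(corr([(a, b) for a, b, _ in cc]), 3))
print("A,B given C=1 (screened off):", round(corr([(a, b) for a, b, c in cc if c]), 3))

# Linking up: independent causes X and W of a common effect E.
ce = []
for _ in range(N):
    x = random.random() < 0.5
    w = random.random() < 0.5
    e = x or w  # the effect occurs if either cause does
    ce.append((int(x), int(w), e))
print("X,W marginally:", round(corr([(x, w) for x, w, _ in ce]), 3))
print("X,W given E=1 (linked up):", round(corr([(x, w) for x, w, e in ce if e]), 3))
```

The first pair of numbers shows a strong marginal correlation vanishing once the common cause is conditioned on; the second shows a zero marginal correlation turning negative once the common effect is conditioned on.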
In this paper a new conception of foundation-oriented epistemology is developed. The major challenge for foundation-oriented justifications consists in the problem of stopping the justificational regress without taking recourse to dogmatic assumptions or circular reasoning. Two alternative accounts that attempt to circumvent this problem, coherentism and externalism, are critically discussed and rejected as unsatisfactory. It is argued that optimality arguments are a new type of foundation-oriented justification that can stop the justificational regress. This is demonstrated on the basis of a novel result in the area of induction: the optimality of meta-induction. In the final section the method of optimality justification is generalized to deductive and abductive inferences.
Pietroski and Rey ([1995]) suggested a reconstruction of ceteris paribus (CP)-laws which, as they claim, saves CP-laws from vacuity. This discussion note is intended to show that, although Pietroski and Rey's reconstruction is an improvement in comparison to previous suggestions, it cannot avoid the result that CP-laws are almost vacuous. It is proved that if Cx is an arbitrary (nomological) event-type which has independently identifiable deterministic causes, then for every other (nomological) event-type Ax which is not strictly connected with Cx or with ¬Cx, ‘CP if Ax then Cx’ satisfies the conditions of Pietroski and Rey for CP-laws. It is also shown that Pietroski and Rey's reconstruction presupposes the assumption of determinism. The conclusion points towards some alternatives to Pietroski and Rey's reconstruction.
The justification of induction is of central significance for cross-cultural social epistemology. Different ‘epistemological cultures’ differ not only in their beliefs, but also in their belief-forming methods and evaluation standards. For an objective comparison of different methods and standards, one needs (meta-)induction over past successes. A notorious obstacle to the problem of justifying induction lies in the fact that the success of object-inductive prediction methods (i.e., methods applied at the level of events) can neither be shown to be universally reliable (Hume's insight) nor to be universally optimal. My proposal towards a solution of the problem of induction is meta-induction. The meta-inductivist applies the principle of induction to all competing prediction methods that are accessible to her. By means of mathematical analysis and computer simulations of prediction games, I show that there exist meta-inductive prediction strategies whose success is universally optimal among all accessible prediction strategies, modulo a small short-run loss. The proposed justification of meta-induction is mathematically analytical. It implies, however, an a posteriori justification of object-induction based on the experiences in our world. In the final section I draw conclusions about the significance of meta-induction for the social spread of knowledge and the cultural evolution of cognition, and I relate my results to other simulation results which utilize meta-inductive learning mechanisms.
It is well known that the process of scientific inquiry, according to Peirce, is driven by three types of inference, namely abduction, deduction, and induction. What is behind these labels is, however, not so clear. In particular, the common identification of abduction with Inference to the Best Explanation (IBE) begs the question, since IBE appears to be covered by Peirce's concept of induction, not that of abduction. Consequently, abduction ought to be distinguished from IBE, at least on Peirce's account. The main aim of the paper, however, is to show that this distinction is most relevant with respect to current problems in philosophy of science and epistemology (like attempts to supply suitable notions of realism and truth as well as related concepts like coherence and unification). In particular, I also try to show that (and in what way) Peirce's inferential triad can function as a method that ensures both coherence and correspondence. It is in this respect that his careful distinction between abduction and induction (or IBE) ought to be heeded.