Philosophy as Conceptual Engineering: Inductive Logic in Rudolf Carnap's Scientific Philosophy

by Christopher Forbes French
B.A., Kansas State University, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Philosophy)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2015

© Christopher Forbes French 2015

Abstract

My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific, further developing recent interpretive efforts that explain Carnap's mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I first specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design problems from current work in the history of engineering and technology studies. I then model Carnap's practice on those lessons and uncover ways in which Carnap's technical work in inductive logic takes some of these lessons on board. This shows ways in which Carnap's philosophical project subtly changes right through his late work on induction, providing an important corrective to interpretations that ignore the work on inductive logic. Specifically, I show that paying attention to the historical details of Carnap's attempt to apply his work in inductive logic to decision theory and theoretical statistics in the 1950s and 1960s helps us understand how Carnap develops and rearticulates the philosophical point of the practical/theoretical distinction in his late work, thus offering a new interpretation of Carnap's technical work within the broader context of philosophy of science and analytical philosophy in general.

Preface

This dissertation is an original and independent work by the author, C. F. French.
Some of the ideas for section 4.5 were first explored and discussed in my forthcoming publication (expected fall 2015): C. F. French, "Rudolf Carnap: Philosophy of Science as Engineering Explications." In Recent Developments in the Philosophy of Science: EPSA13 Helsinki. (Eds.) Uskali Mäki, Stephanie Ruphy, Gerhard Schurz and Ioannis Votsis. I am the sole author of this publication.

I originally intended there to be an additional chapter in this dissertation discussing Carnap's correspondence with Richard C. Jeffrey. Unfortunately, I was forced to cut this material. See my forthcoming publication: C. F. French, "Explicating Formal Epistemology: Carnap's Legacy as Jeffrey's Radical Probabilism." In Studies in History and Philosophy of Science. Guest edited by Sahotra Sarkar and Thomas Uebel. I am the sole author of this publication.

This dissertation makes extensive use of archival material from the Carl Hempel, Rudolf Carnap and Richard C. Jeffrey papers at the Archives for Scientific Philosophy at the University of Pittsburgh. Quoted by permission of the University of Pittsburgh. All rights reserved.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
List of Abbreviations
Acknowledgements
Dedication
1 Introduction
2 Carnapian Wissenschaftslogik as Conceptual Engineering
2.1 Carnap's Wissenschaftslogik
2.2 Wissenschaftslogik: Critiques and Reappraisals
2.3 Carnapian wissenschaftslogiker as Conceptual Engineer
2.4 Carnap and the State of Inductive Logic at mid-Twentieth Century
2.5 Conclusion
3 Philosophical Method as Conceptual Engineering
3.1 Engineering as Means-End Reasoning
3.2 Engineering Design
3.3 Satisficing Wings and Propellers
3.4 Changing Designs and Braking Barriers
3.5 Herbert Simon and Satisficing
3.6 Carnap as Conceptual Engineer
3.7 Conclusion
4 Designing Inductive Logic
4.1 Historical Background
4.2 Carnap's Confirmation Function c∗
4.3 From Confirmation to Estimation Functions
4.4 Carnap's Continuum of Inductive Methods
4.5 Finding Optimal Values of λ
4.6 Conclusion
5 Constructing Rational Decision Theory
5.1 Carnap on Hume's Problem of Induction
5.2 Ramsey's Decision Theory as Qualified Psychologism
5.3 Feigl, Reichenbach and Justifying Induction Pragmatically
5.4 Inductive Logic, Expected Utility Theory and Decision Theory
5.5 Rationalizing Decision Theory and Justifying Inductive Logic
5.6 The Aim of Inductive Logic and Robot Epistemology
5.7 Conclusion
6 Conclusion
Bibliography

List of Figures

3.1 Means-end Model of Engineering
3.2 Hierarchical Model of Engineering Design and Knowledge
4.1 A "Well-connected" System of Inductive Concepts

List of Abbreviations

Throughout the dissertation I use the following abbreviations to refer to various archives:

ASP Archives for Scientific Philosophy at the University of Pittsburgh.
CH Carl Hempel archives at ASP.
HR Hans Reichenbach archives at ASP.
RC Rudolf Carnap archives at ASP. For example, "RC 079-20-01" refers to the document numbered 01 in the folder numbered 20 in the box numbered 079.
RCJ Richard C. Jeffrey archives at ASP.

I also make frequent use of the following acronyms to refer to Carnap's published works:

CIM The Continuum of Inductive Methods, 1952.
ESO "Empiricism, Semantics and Ontology", 1950 (reprinted and enlarged in the second edition of Meaning and Necessity, 1956).
LFP Logical Foundations of Probability, 1950 (second edition, 1962).
LSL The Logical Syntax of Language, 1937.
Acknowledgements

As an undergraduate at Kansas State University I made the transition from self-identifying as an artist, a programmer and a wanna-be hacker (of the MIT/Richard Stallman, not the criminal, variety) to being an academic philosopher. I'm grateful to all of the faculty who were in KSU's philosophy department from 2003 to 2008. I especially want to thank Bruce Glymour for his patience, advice and guidance in helping me not only gain expertise in the philosophy of biology and causal modeling but also get into grad school. I also want to thank Andrew Arana for allowing me to make a copy of a paper by Alberto Coffa discussing Wittgenstein, Carnap and logical tolerance: I've been hooked ever since. For better or worse, I've always brought my programming sensibilities to traditional philosophical problems – for example, when I first read Carnap's Aufbau as an undergrad, I somewhat naively read him as painstakingly providing us with an algorithm for constructing the world on the basis of pairs of elementary experiences. Perhaps as a consequence of this sensibility, I am never easily impressed by appeals to philosophical authority, common sense or expertise, and rarely do I put much stock, if any at all, in the justificatory value (as opposed to the rhetorical or pedagogical value) of philosophical thought experiments and intuition-pumps. Arguments come cheap: I want the dirty and messy technical, conceptual and empirical details – tell me how to build up these epistemological, metaphysical or ethical world-views from scratch, brick by interlocking brick. After moving to Vancouver, I tried to suppress this engineering reading of Carnap's scientific philosophy as I journeyed through the conceptual landscapes offered by Kant, Marburg neo-Kantians like Ernst Cassirer, the logical empiricists, the American pragmatists and contemporary philosophers of science.
But as should be evident from the title of this dissertation, I've come full circle to embrace a version of the engineering sensibility I thought I had left behind; indeed, it turns out that it is exactly because Carnap as scientific philosopher embraces this sensibility that I find his work so valuable and original. I would like to thank my intellectual peers and dearest friends and colleagues who have been there from the beginning (more or less): S. Andrew Inkpen, Dani Hallet, Taylor Davis and Rebecca Trainor. I would also like to thank my fellow graduate students and friends at UBC: Joel Burnett, Tyler DesRoches, Roger Stanev, Alirio Rosales, Jihee Han, A.J. Snelson, Emma Esmaili, Gerardo Viera, Servaas van der Berg, Sina Fazelpour, Richard Sandlin, Jiwon Byun, Kousaku Yui and Laura Keith, Aleksey Balotskiy and Kaitlin Graves, Garson Leder and Serban Dragulin. A special thanks to Stefan Lukits for putting so much work into the UBC Formal Epistemology reading group and for providing me with valuable comments on chapter 4. While a resident at Green College at UBC from 2009 to 2011 I had the pleasure of meeting many amazing people, including my friends Dan Randles, Wanying Zhao, Simon Viel, Yuan Jiang, Nathan Corbett, Maciek Chudek and Andrew MacDonald. I would also like to thank the following people, whom I met as a visiting fellow at TiLPS in Tilburg, Netherlands: Jan Sprenger, Stephan Hartmann, Rogier De Langhe and Juan M. Duran. Thanks to Sahotra Sarkar and Thomas Uebel for giving me so many comments on my contribution to a 2013 workshop in Austin, Texas on formal epistemology and the legacy of logical empiricism. I would also like to thank the following friends and colleagues, past and present, whom I have met either at UBC or in Vancouver more generally: Flavia Padovani, Uri Burstyn, Jon Tsou, John Koolage, Scott Edgar, Samantha Matherne, Daniel Kuby, Dan Raber and Christina Marie Moth.
I would like to thank the members of my committee: John Beatty, Christopher Stephens and especially Richard Creath. I would also like to thank several other faculty and staff at UBC (even if we haven't always seen eye to eye on philosophical matters), both past and present: Margaret Schabas, Paul Bartha, Eric Margolis, Nissa Bell, Rhonda Janzen, David Silver, Adam Morton, Ori Simchen, Roberta Ballarin and John Woods. But most of all I would like to thank my supervisor, Alan Richardson. Although I've encountered my fair share of travails and tribulations while finishing the dissertation, Alan has helped me to become a more confident, independent thinker who (I hope) doesn't completely suck at writing. We can't all write like Rudy or Alan, but we can keep on trying.

C. F. French, July 15, Vancouver.

Dedication

To my parents, Janet and Donald, and my brother, Michael.

Chapter 1
Introduction

In my view, the purpose of inductive logic is precisely to improve our guesses and, what is of even more fundamental importance, to improve our general methods for making guesses, and especially for assigning numbers to our guesses according to certain rules. And these rules are likewise regarded as tentative; that is to say, as liable to be replaced later by other rules which then appear preferable to us. We can never claim that our method is perfect. I say all this only in order to make quite clear that inductive logic is compatible with the basic attitude of scientists; namely, the attitude of looking for continuous improvement while rejecting any absolutism.
- Rudolf Carnap, "Probability and Content Measure" (1966)

Rudolf Carnap was a twentieth-century scientific philosopher who used logic to reformulate seemingly intractable philosophical questions about the nature of science into clearly defined technical questions formulated within a logical system.
Influenced by philosophers and mathematicians like Ernst Cassirer, Bertrand Russell, David Hilbert, Gottlob Frege and Ludwig Wittgenstein, he articulated an early version of this scientific philosophy in his 1928 book Der Logische Aufbau der Welt, a document which would quickly become a cynosure for members of both the Vienna Circle and analytical philosophy in North America and the United Kingdom. It was in the Aufbau that Carnap attempted to secure the objectivity of scientific knowledge by showing how one could logically reconstruct the structure of scientific knowledge, thus demonstrating how scientific knowledge is inter-subjectively communicable; see Friedman (1999) and Richardson (1998). By the time Carnap published his Logische Syntax der Sprache in 1934, however, his earlier conceptions of logic and mathematics had undergone a radical transformation. He now embraced an attitude of logical tolerance according to which there is no "correct" logical system; instead there are infinitely many logical systems, each of which is more or less adequate for reformulating scientific language. Logic, for Carnap, was now understood as an instrument chosen for practical reasons of expedience rather than correctness. This is the maturation of Carnap's scientific philosophy: traditional philosophy is to be replaced by the logic of science; the philosopher is now envisaged as a wissenschaftslogiker – a member of a technocratic community tasked with supplying new logical techniques, new logical technologies, to be used for the clarification and systematization of scientific language and concepts.
The fundamental question my dissertation seeks to answer – namely: how exactly did Carnap understand the way in which his practically minded logic of science could possibly be used to help clarify questions about the foundations of science, especially if we understand such questions to be metaphysical or epistemological in nature? – is not a new question. Indeed, there now exists an extensive Carnap reappraisal literature which, in part, attempts to explain how exactly Carnap tried to marshal the conceptual and technical resources available to him in order to reformulate traditional philosophical questions into either expressions of one's preference for one logical system over others or into questions about the logical syntax or semantics of a logical system. And one of the ways in which philosophers working within this reappraisal literature have tried to explain Carnap's debates with the twentieth-century scientific philosopher W.v.O. Quine – regarding whether or not we should give up on the logic of science in favor of just looking to science itself (especially psychology) to answer questions about the foundations of science – is by interpreting Carnapian logic of science as a kind of linguistic or conceptual engineering activity (see chapter 2). My dissertation contributes to this Carnap reappraisal literature by examining Carnap's logic of science not with regard to his work on pure logical syntax and semantics but rather from the perspective of his longest-running technical project, a project which has received surprisingly scant attention in the reappraisal literature; namely, Carnap's work on inductive logic from the mid-1940s until the late 1960s, which he then used to address foundational problems about induction and probability in the sciences, especially the foundations of statistics and decision theory.
In chapter 3, I draw on case studies from the history of engineering to articulate a hierarchical conception of engineering design, and I then use this conception of engineering as an interpretative framework for Carnap's work on pure and applied inductive logic in chapters 4 and 5. In particular, I argue that Carnap was ultimately not in the business of searching for the correct account of inductive logic. Instead, he was engaged in a project similar to what Herbert Simon calls "satisficing": one need only find a "good enough" solution to a problem, especially when it is nearly impossible, for practical or theoretical reasons, to find the optimal or correct solution (if it exists at all). There are many different ways Carnap could formalize the "ill-structured" problems concerning probability and induction in the sciences, and each different inductive logic provides us with a different, more or less satisfactory way of formulating a "well-structured" problem using the instruments of logical syntax and semantics. The resulting conception of Carnapian logic of science is that of conceptual engineering. Rather than attempting to formalize the probabilistic structure of science in one fell swoop, Carnap would have us design and construct inductive logics in a piecemeal fashion while remaining sensitive to the purposes and changing needs of the empirical sciences. Carnapian logic of science is here understood not as a competitor to "naturalism" or the history of science but simply as an additional method available to philosophers of science in their toolbox of conceptual resources. More specifically, in chapter 2, I introduce the reader to the main topics of this dissertation: Carnap's Wissenschaftslogik, the current Carnap reappraisal literature and the historical context for Carnap's work on inductive logic.
I explain Carnap's distinction between pure and applied logic by analogy with mathematical and physical geometry. I then discuss both Carnap's attitude of logical tolerance and how he envisages the replacement of traditional metaphysical and epistemological questions with the logic of science. Then I explain how a number of Carnap scholars – including Richard Creath, Michael Friedman, André Carus, Alan Richardson and Samuel Hillier – have articulated different conceptions of Carnap as engineer. Lastly, I discuss how those mathematicians and scientists working on probability and induction who most influenced Carnap themselves understood the task of providing an inductive logic, or a logical concept of probability; specifically, I provide a quick exegesis of the work on probability and induction by Harold Jeffreys, John Maynard Keynes and Frank P. Ramsey. I then discuss how Carnap's later work on inductive logic marks an important transition from his earlier Wissenschaftslogik, in which induction was understood as a purely pragmatic matter which resists formalization into logical syntax. Next, in chapter 3, I examine several engineering case studies and isolate several general principles of engineering design. Specifically, I elaborate on one view in the history of engineering which notices that engineering design depends on a hierarchical distinction between the practical and the theoretical. I argue that engineering is not a simple instantiation of instrumental reasoning in which engineers simply supply a number of designs to an employer who in turn picks whichever design best fits their needs.
Instead, sometimes engineering designs are hierarchical: as the aim of an engineering project and the relevant technologies change, or problems are encountered with the actual construction of an engineering object, the different components of a design will be altered, and these changes may influence the engineer's practical choices regarding the other components of the overall design. After discussing Simon's views on satisficing, I then discuss how this hierarchical notion of engineering design bears on the analogy of philosophy as conceptual engineering: although the decision to adopt a linguistic framework may be a purely practical matter, when we need to make logical modifications to that framework the decision as to which modifications to make may still be practical, yet these decisions may need to be informed by certain theoretical considerations (e.g., how this logic is to be used in some empirical investigation). In the next chapter, chapter 4, I finally turn to Carnap's work on inductive logic. I explain how Carnap constructs inductive logics – or rather, pure inductive logics. I examine, first, how Carnap constructs various inductive logics based on a concept of degree of confirmation and, second, how he extends this project to construct his λ-system, which parameterizes a continuum of inductive methods. Indeed, it is exactly here that Carnap talks about the choice of a value of λ in engineering terms. I then discuss how Carnap used his concept of degree of confirmation to define other inductive concepts, most notably a concept of estimation for use in theoretical statistics and semantic concepts of entropy and information. In the early 1950s, Carnap suggests that his work on the concept of estimation may be used to restructure the foundations of theoretical statistics.
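To give the reader a rough, preliminary sense of the λ-system mentioned above (the details belong to chapter 4): in the symmetric special case of a language with κ equally wide, mutually exclusive attributes, Carnap's continuum assigns, given a sample of s individuals of which s_i exhibit attribute Q_i, the degree of confirmation (s_i + λ/κ)/(s + λ) to the hypothesis that the next individual exhibits Q_i. The sketch below is my own illustrative rendering of that formula, not Carnap's notation; setting λ = 0 yields the "straight rule" of relative frequency, while λ = κ yields the Laplacean-style values associated with Carnap's c∗.

```python
def c_lambda(s_i: int, s: int, kappa: int, lam: float) -> float:
    """Degree of confirmation, in the symmetric lambda-continuum, that the
    next individual exhibits attribute Q_i, given that s_i of s observed
    individuals did, in a language with kappa equally wide attributes."""
    if not (0 <= s_i <= s) or kappa < 1 or lam < 0:
        raise ValueError("invalid sample description or parameter")
    if lam == 0 and s == 0:
        raise ValueError("the straight rule (lambda = 0) is undefined without evidence")
    return (s_i + lam / kappa) / (s + lam)

# Two landmarks on the continuum, for a sample of 4 with 3 positive instances
# and kappa = 2 attributes:
straight_rule = c_lambda(3, 4, 2, 0)   # lambda = 0: the observed frequency, 3/4
c_star_style = c_lambda(3, 4, 2, 2)    # lambda = kappa: (3 + 1)/(4 + 2)
```

As λ grows, the value is pulled away from the observed frequency and toward the a priori value 1/κ, which is one way of seeing why Carnap could describe the choice of λ as a practical, engineering-style decision about how cautious an inductive method should be.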
It is within this context that Carnap's work on pure inductive logic seems to be answerable, at least to a certain degree, to how well a concept of degree of confirmation can be used to define other inductive concepts which are central to particular empirical sciences. Lastly, I suggest that Carnap's attempt to find "optimal" values of λ can be understood as a kind of engineering activity. This is yet another way in which the practical decision to adopt an inductive logic may be sensitive to the empirical sciences. Finally, in chapter 5, I explain how Carnap applied his work on pure inductive logic to the sciences – specifically, empirical and rational decision theory – by focusing on how Carnap tried to explain to his peers, like the philosopher Hans Reichenbach or the statistician Leonard J. Savage, how exactly the adequacy of an applied inductive logic need not depend on its empirical success. The focus of this chapter will be on how Carnap understands the connection between inductive logic, rational decision theory and empirical decision theory – indeed, we will see that Carnap shows how one could design an inductive logic so that it is adequate for use in rational decision theory. This chapter is historical. We will discuss how Carnap was influenced by F. P. Ramsey's work on a normative decision theory, how Carnap responds to the criticism from both Herbert Feigl and Reichenbach that a logical meaning of probability cannot be a guide in life, how Carnap understands the application of inductive logic to decision theory as a methodological problem and, finally, how Carnap later responds to criticism from John Lenz and Carl Hempel directed at his unwillingness to talk about the justification of inductive logic.
Crucially, Carnap explicitly argues that he is not trying to provide a non-circular justification for inductive methods but instead is concerned with providing an application of inductive logic for those who already accept the validity of inductive reasoning. For Carnap, an interpreted inductive logic supplies us with well-defined, non-arbitrary confirmation values – we can then use these values as a guide for our scientific deliberations. In the final section of chapter 5 I explain how, taken as a whole, the historical episodes discussed earlier in the chapter lead up to Carnap's 1962 paper, "The Aim of Inductive Logic". For it is there that Carnap suggests that we can think of rational decision theory as supplying to an idealized agent a credence or credibility function with which to make rational decisions, functions which are based on certain "requirements of rationality". Crucially, however, Carnap argues that the adequacy of these credence and credibility functions need not be assessed in terms of their actual empirical success but rather in terms of their "reasonableness," i.e., their inductive success not just for the actual state of affairs but for all conceivable state descriptions, for all possible "states of the world" according to a logical system. In this sense, Carnap envisages a transition from empirical decision theory to rational decision theory, and then from rational decision theory to inductive logic. It is exactly here that we can think of Carnap's conception of rational decision theory as a kind of "boundary space" between empirical decision theory, which is a fairly straightforward empirical investigation, and pure inductive logic. The interplay between the empirical and the logical within this boundary space exemplifies, I suggest, a kind of conceptual engineering: Carnap shows us how to construct inductive logics in a hierarchical, piecemeal fashion, logics which have been designed for rational decision theory via certain requirements of rationality – requirements which, in turn, are sensitive to the empirical findings of empirical decision theory. In the conclusion, chapter 6, I explain how we can use the hierarchical conception of engineering design from chapter 3 to frame Carnap's construction of pure inductive logics – logics which have been designed to clarify the conceptual systems belonging to the sciences, especially theoretical statistics – that we saw in chapter 4, and to understand how rational decision theory provides Carnap with a kind of conceptual space in which to design inductive logics for empirical decision theory. I then discuss several weaknesses of my dissertation and how it will lead to future work (for a slightly condensed version of my views, see French (2015b)). For example, I plan to compare the similar ways in which both Carnap and his student, Richard C. Jeffrey, treat the probability calculus as an instrument (this work is already underway; see French (2015a)). I also think there are important connections which have not yet been explored regarding Carnap's place in the history of twentieth-century philosophy of science: Herbert Simon, John von Neumann and Carnap all have projects which, I would suggest, trade in a common conceptual currency: that scientific reasoning fundamentally works by finding "good enough" rather than the "correct" solutions to problems.

Chapter 2
Carnapian Wissenschaftslogik as Conceptual Engineering

The constant course of science is not diverted from its goal by the varying fortunes of metaphysics. It must be possible to gain clarity regarding the direction of this advance, without presupposing the dualism of the metaphysically basic concepts. [. . .
] Are these [basic] concepts, the separation and reunification of which the whole history of philosophy has labored, merely intellectual phantoms, or does a firm meaning and effect in the construction of knowledge remain for them?
- Ernst Cassirer, Substanzbegriff und Funktionsbegriff (1910)

The fundamental question my dissertation seeks to answer is how the twentieth-century philosopher Rudolf Carnap attempted to clarify and even resolve foundational questions in the sciences by reformulating those questions in an artificial, logical language. What distinguishes his technical projects from any number of other technically-minded projects whose purpose is to somehow logically reconstruct the structure of science is the attitude Carnap takes toward logic. The attitude is this: one may freely choose, from an endless stock of artificial languages, whichever artificial language seems more fruitful or useful for clarifying and systematizing the conceptual foundations of science. Logic, for Carnap, is an instrument chosen for reasons of expedience or fruitfulness rather than correctness. Nevertheless, Carnap's reconception of philosophy as the logic of science, or Wissenschaftslogik, may be viewed by some contemporary philosophers as an unnecessary, if not futile, endeavor. For contemporary analytical metaphysicians and epistemologists, for example, questions about the robustly normative nature of knowledge and evidence, or what the structure of the world is actually like, cannot, as Carnap would have it, simply be dissolved by translating these questions into a logical framework. I won't attempt to defend Carnapian logic of science against such contemporary philosophers. Instead, I am interested in illustrating the merits and uses of Carnapian logic of science to those contemporary scientifically-minded philosophers who – even if they remain sympathetic to logical empiricism – would have us focus our attention away from the narrow confines of logical reconstruction and toward either the history of science or the science of science.4 In good Carnapian fashion, I do not claim that the logic of science is somehow superior to these contemporary approaches to the philosophy of science but only that, once we understand Carnap's technical work as a kind of conceptual engineering, his logic of science has the potential to be a useful conceptual resource for formally-minded philosophers of science insofar as it provides a means to carry out technical projects without becoming side-tracked by traditional metaphysical and epistemological concerns. Thus the primary goal of this dissertation is one of philosophical and historical clarification rather than providing an argument for the claim that we should (or shouldn't) adopt Carnapian logic of science ourselves. I draw on both Carnap's work on inductive logic in the 1950s and 1960s and various archival materials, including personal correspondence and unpublished manuscripts, to paint a broad historical narrative explaining how Carnap tried to clarify and systematize foundational questions about science – especially the foundations of both decision theory and theoretical statistics – by the practical construction and application of inductive logics. To help me in this task I make use of an interpretive framework – that of philosophy as conceptual engineering – to explain the radical nature of Carnap's mature philosophical method. Consequently, the dissertation has two audiences.
The first comprises those philosophers and historians working on the Carnap reappraisal literature: my dissertation is the first extensive treatment of Carnap's work on inductive logic and it provides yet another refinement to our understanding of Carnapian logic of science.5 The second comprises those contemporary philosophers of science for whom the idea that philosophy is conceptual engineering may prove to be a useful framework to situate and motivate their own technical projects. For the rest of this chapter, I proceed as follows. I first explain Carnap's Wissenschaftslogik, in part by explaining Carnap's distinction between pure and applied logic by analogy with mathematical and physical geometry. Second, I provide several examples from the Carnap reappraisal literature for how Carnap could possibly address criticisms which have been directed against his mature philosophical views by philosophers like W.v.O. Quine and Kurt Gödel. Third, I explain how various authors from the reappraisal literature have attempted to clarify Carnap's mature views by analogizing Carnap's technical projects to a kind of engineering.

4 For the "received view" of how logical empiricism investigates the nature of science, see Suppe (1977) or van Fraassen (1980). For more on the importance of the history of science for the philosophy of science, see Burian (1977), Giere (1973), Kuhn (1962) and the articles in Domski and Dickson (2010). For more on a "naturalist" philosophy of science and criticisms of logical empiricism, see Giere (1985; 1988); Kitcher (1992; 1993); Laudan (1996).
5 Patrick Maher, however, has spent much effort attempting to explain Carnap's inductive logic in terms of Carnap's method of explication; see Maher (2010) and the references therein. Also see Uebel (2012a); Wagner (2011).
Lastly, I briefly summarize the work of those mathematicians and scientists on induction and probability who most influenced Carnap's own understanding of the problem space for a logical conception of probability and induction, setting the historical context for chapters 4 and 5.

2.1 Carnap's Wissenschaftslogik

When Carnap scholars first compared Carnapian logic of science to a kind of conceptual engineering,6 they did so in an effort to explain the philosophical differences between Carnap and Quine on the subject of analyticity. Neither the debate between Carnap and Quine nor the subject of analyticity plays a prominent role in this dissertation. Nevertheless, a quick discussion of the disagreement between these two scientific philosophers may help illustrate to the reader why an engineering analogy is relevant at all. According to one conception of scientific inquiry, mathematics is the language of science in the sense that scientific laws can be mathematically derived from a basic system of axioms; Newtonian mechanics, for example, can be derived from Newton's three laws together with the law of universal gravitation and the resources of the infinitesimal calculus. Call such a system S. One ambition of logical empiricism, as the movement is commonly understood, is to give some general theory of the language of science according to which one could clearly separate those sentences of S which are true purely in virtue of the logical or mathematical consequences of S from those sentences of S which are true in virtue of the empirical axioms of S – axioms which correspond to certain facts of the world.
To accomplish this task not only for S but for any scientific theory would be to clearly demarcate the analytic sentences of pure logic and mathematics from the synthetic, empirical sentences making up the content of the empirical sciences (and if traditional metaphysical statements are shown to be neither analytic nor synthetic, they are revealed to be, literally, without meaning). However, Quine famously argues that any such foundationalist epistemology is untenable.7 Even if we did manage to agree that an adequate criterion for demarcating the analytic and synthetic sentences of S could be found, no general distinction between the analytic and the synthetic could possibly be given for all languages. For in order to characterize the linguistic structure of any language L (called the "object language") which is expressively more powerful than the language of arithmetic with only addition, one must take advantage of the linguistic resources of a separate language (called the "metalanguage"), e.g., a natural language like English, which is stronger than L, in order to define a truth predicate over all the sentences of L. But herein lies the problem: how do we know that this procedure for defining an analytic/synthetic distinction in the object language could also be used for defining a similar distinction in the metalanguage (and, if it holds for the metalanguage, that it also holds for the meta-metalanguage, etc.)? Quine's solution is that there can be no non-circular and principled distinction between the analytic and the synthetic.

6 For example, in Creath (1992, 154).
Instead, we must countenance a holistic and non-foundational conception of meaning: meaning, to use Quine's famous metaphor, is best described as a web of beliefs, with the more "analytic" statements located at the center of the web and the "synthetic" statements located at the edges, where they are constantly impinged upon by "experience"; here the distinction between the analytic and the synthetic is one of degree.8 Carnap, in his 1937 The Logical Syntax of Language (LSL),9 provides a general theory of logical syntax and, later in the 1940s, a semantical definition of truth in a language, or L-truth (we will return to the notions of logical syntax and semantics below). Despite Carnap's complicated logical constructions, as far as Quine is concerned, Carnap provides us with no reason why his characterizations of logical truth, or analyticity, for certain classes of artificial languages could possibly hold for natural languages like English. From Carnap's point of view in the 1940s, however, he is quite clear that he is only providing a clarification of analyticity as L-truth relative to a particular logical language; or, to use the language Carnap later adopts,

7 See, for example, Quine (1951; 1969).

8 Interestingly, as far as Quine's criticism of Carnap's empiricism is concerned, the historian Joel Isaac has recently pointed out not only that Quine adopts the talk of "conceptual schemes" from the Harvard biochemist Lawrence Henderson (who led the influential seminar "Pareto and Methods of Scientific Investigation" at Harvard; see Isaac, 2012, 282) but also that Quine first introduced his web of belief metaphor while still a graduate student, in a student paper on Kant for a seminar taught by C. I. Lewis in 1931 (Isaac, 2012, 140-142). So it seems Quine had a nascent version of the criticisms of empiricism given in his "Two Dogmas" paper already in 1931 – before he ventured to Prague to first meet Carnap.
L-truth is only an explication of analyticity. Carnap stipulates beforehand which terms in the language are logical and which are descriptive, or non-logical: the logical truths are those sentences formed just from logical terms which can be shown to be true using the semantic resources of the language alone. Thus Carnap is clear that he is not attempting to provide a characterization of analyticity for natural languages but only for specially constructed logical languages. So here the disagreement between Carnap and Quine seems to rest on a misunderstanding. Quine is unhappy with the arbitrary manner in which Carnap demarcates the logical from the non-logical terms of a language, whereas Carnap would suggest that if Quine is unhappy with Carnap's definition of L-truth relative to the language L, then perhaps Quine should provide what he would consider a more satisfactory explication of analyticity. But if this is accurate, in what sense is Carnap still doing philosophy if he is no longer engaged, as Quine thought he was, in the project of providing a general analytic/synthetic distinction for any language? This is where the engineering analogy comes in: at least by the 1950s, Carnap was not in the business of providing a general characterization of the analytic/synthetic distinction which is "correct" for all languages; he only showed how this distinction could possibly be clarified. This is conceptual engineering: Carnap constructs languages as one would construct a hammer or a cellular phone, with some purpose in mind. And just as we do not have to justify the choice of one kind of hammer over others to fulfill the task at hand, Carnap need not justify his choice of a language to Quine: some choices will lead to better or worse results than others, but no instrument is "correct."

9 This is the expanded English translation of his 1934 Logische Syntax der Sprache.
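Carnap's relativized notion of L-truth – truth in virtue of the semantic rules of a constructed language alone – lends itself to a small illustration. The following sketch is my own toy construction, not Carnap's formalism: the predicates 'Blue' and 'Red' and the constants 'a1', 'a2' are invented for the example. It builds a tiny propositional language L with descriptive signs and counts a sentence as L-true when it comes out true under every interpretation of those signs; a sentence true under some interpretations but not others is, in Carnap's terms, merely factual relative to L.

```python
from itertools import chain, combinations, product

# Hypothetical descriptive signs for the toy language L:
CONSTANTS = ["a1", "a2"]       # individual constants (e.g., balls in an urn)
PREDICATES = ["Blue", "Red"]   # descriptive predicates (e.g., color properties)

def evaluate(sentence, interp):
    """Recursive truth definition: one truth condition per sentence form.

    Sentences are nested tuples; `interp` maps each predicate to the set
    of constants it is true of (its extension)."""
    op = sentence[0]
    if op == "atom":                      # e.g. ("atom", "Blue", "a1")
        _, pred, const = sentence
        return const in interp[pred]
    if op == "not":
        return not evaluate(sentence[1], interp)
    if op == "and":
        return evaluate(sentence[1], interp) and evaluate(sentence[2], interp)
    if op == "or":
        return evaluate(sentence[1], interp) or evaluate(sentence[2], interp)
    raise ValueError(f"ill-formed sentence: {sentence!r}")

def subsets(xs):
    """All subsets of xs, i.e., all candidate extensions for a predicate."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_L_true(sentence):
    """L-true: true under every interpretation of the descriptive signs."""
    for extensions in product(list(subsets(CONSTANTS)), repeat=len(PREDICATES)):
        interp = {p: set(e) for p, e in zip(PREDICATES, extensions)}
        if not evaluate(sentence, interp):
            return False
    return True

# "a1 is blue or a1 is not blue": true in virtue of the rules alone (L-true).
tautology = ("or", ("atom", "Blue", "a1"), ("not", ("atom", "Blue", "a1")))
# "a1 is blue": a factual sentence; its truth varies with the interpretation.
factual = ("atom", "Blue", "a1")
```

The point of the toy, in Carnap's spirit, is that `is_L_true` is defined entirely by the stipulated rules of L and says nothing about analyticity in English or any other natural language.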
But even as a conceptual engineer, Carnap is still working on the foundations of science in the sense that he is engaged in the project of providing different technical clarifications of how one could possibly make sense of the meaning of the analytic/synthetic distinction for specific classes of artificial languages. And perhaps, if we search long enough, a clarification, or rather an explication, of analyticity will be found that appeases Quine.10 In the next subsection we turn to Carnap's views on logical syntax and semantics.

Logical Syntax and Semantics.

Distinctive of Carnap's mature philosophical position is his adoption of a standpoint according to which questions about the foundations of science can be resolved by investigating the language of science. For Carnap, there is a certain degree of freedom available to us when choosing an artificial, logical language into which to translate the statements made by scientists when they are engaged in scientific activity. The occasion for this linguistic freedom is Carnap's embrace of an attitude of logical tolerance, according to which there is no privileged or correct logical calculus that must be used to logically reconstruct scientific language.

10 For more on the Quine/Carnap debate and their correspondence, see Creath (1987; 1990a; 1991).
Consequently, from the perspective of Carnapian logic of science, traditional metaphysical questions are not questions about the "correctness" of scientific language; instead they are revealed to be endorsements of one artificial language over others and thus are without cognitive content.11 Likewise, traditional epistemological questions about the "correct" or "rational" formation and justification of our beliefs are revealed, again from the perspective of Carnapian logic of science, to confuse logical with empirical questions.12 Thus, for Carnap, those traditional metaphysical and epistemological questions which have embroiled previous generations of philosophers are to be replaced by the practical activities of the Wissenschaftslogiker: the construction of artificial languages as instruments for the task of clarifying and systematizing the foundations of science.

11 Simply speaking, a sentence S is without cognitive content if not only is S neither true nor false, but there is no way to evaluate it veridically at all; typically, it is instead said to express a preference or feeling.

To see how Carnap thinks he can accomplish all of this, it is important to keep in mind that Carnap separates (if only as an abstraction) the study of language into three parts: (1) a theory of how the speakers of a language utter or write down sentences in particular contexts, called pragmatics; (2) a theory of how certain expressions in the object language designate the objects in some domain of discourse, like the class of red pandas in southwest Asia, called semantics; and (3) a theory of how the symbols or signs of a language can be combined to form syntactic expressions, like closed or open sentences, together with rules for when certain kinds of classes of expressions logically follow from other kinds of classes of expressions; Carnap calls this last kind of investigation logical syntax.13 Besides this three-part separation of
language into pragmatics, semantics and logical syntax, Carnap also distinguishes between the language under investigation, the object language (call it L), and the language in which we state the syntactical and semantical rules for L, called the metalanguage, which is typically a natural language like English or German plus some mathematical resources. Linguistic rules stated in the metalanguage for L are formal in the sense that these rules do not refer to the semantic resources of the metalanguage; for Carnap, at least in the 1930s, what distinguishes the rules which belong to logical syntax from those which belong to semantics is that the former are formal while the latter are not. When it comes to logical syntax, there are two kinds of rules: rules of formation and rules of transformation. The rules of formation provide recursive definitions of how all the expressions of L, like sentences, can be formed using just two classes of signs: the logical signs, like '(', '⊥', '⊤', '∧', '∃' or 'x1', denoting logical notions like parentheses, tautology, inconsistency, logical connectives, quantification and variables, and the descriptive signs, like 'aj' or 'Blue', designating individual constants and predicates. The rules of transformation state how kinds of expressions, like sentences, can be replaced by or transformed into other kinds of expressions: these rules characterize, for example, variable substitution and logical implication. In addition to including separate rules of formation and transformation for the semantics of L, the semantical rules of L also include a recursive definition of truth in L. This is a deflationary, semantic notion of truth which uses the semantic resources of the metalanguage to state the exact conditions each sentence in L must satisfy to be true or false relative to L. The semantic rules suffice to provide an interpretation of the logical calculus of L if those rules are sufficient to determine truth criteria for all the well-formed sentences in the calculus. An interpretation is true if, generally speaking, both (i) the syntactical and semantical notions of logical implication coincide and (ii) all sentences in the calculus that are (not) provable are true (false) in the semantical system.14 Once a logical syntax and semantics have been given for L, Carnap then defines semantical concepts which either do or do not hold of all the sentences of the language in virtue of the semantical system alone. Here I have in mind logical notions of analyticity and logical consequence whose meanings are fully specified by the semantics of L – Carnap calls such concepts L-concepts. For example, Carnap provides a definition of logical truth, or analyticity, in L as L-truth: a sentence S in L is L-true if the semantic rules alone suffice to show that S is true (in L). Importantly, this means that there may be sentences in the logical calculus which contain descriptive signs and which, when interpreted, are neither L-true nor L-false; rather, these sentences, according to Carnap, are the factual sentences of L. We may wish, for example, to state rules for defining a semantic truth predicate in L for those factual sentences that contain descriptive signs: for example, we may wish to state rules which coordinate the individual constants in L with the balls in an urn and certain descriptive predicates in L with physical color properties, like the property of being blue or red (as we will see below, this is one way in which L can be applied). Carnap, however, is not claiming that this notion of L-truth provides a general clarification of analyticity and truth for all languages, both artificial and natural: it is only a notion of analyticity and truth for the language fragment L. According to Carnap, whether or not a notion of L-truth in L is adequate depends on what interpretation we wish to give to the logical syntax of L in the metalanguage.15 The semantical concepts in an object language can only be made precise relative to the antecedent semantic resources available to us in the metalanguage.

12 In short, Carnap rejects any conception of logic which entails any form of psychologism. Here, psychologism is simply the view that, however we conceive of logic, logic somehow concerns either what rational persons ought to believe or what they do, in fact, believe. For more on the history of psychologism in both nineteenth-century philosophy and psychology, see Kusch (1995).

13 The exposition of Carnap's views in the next couple of sections follows closely Carnap's work after the publication of LSL – that is, once Carnap adopts something like Tarski's method of defining (logical) truth; see Carnap (1939; 1942; 1943) for the details. My presentation of the technical material in this section, moreover, is not always historically accurate – I am much more concerned with Carnap's views on syntax and semantics starting in the late 1940s than with explaining how he came to have these views during the 1930s and early 1940s. For more on how Carnap understands the difference between logical syntax and semantics, see §39 of Carnap (1942); also see Creath (1990b); Ricketts (1996). See pages 146 and 153 of Carnap (1939) for what he means by "abstraction"; also see Sarkar (2013).

14 See §§4-10, and especially pp. 164-65, of Carnap (1939). Famously, the Peano axioms have more than one true interpretation aside from the normal interpretation; see Carnap (1943).

Pure and Applied Logic, Mathematical and Physical Geometry.

The idea that a semantical system can provide an interpretation for a logical calculus is central to Carnap's mature views. Specifically, an interpreted pure logic can be applied in the sense that the primitive descriptive terms in that logic are given an interpretation which coordinates them with empirical objects.
In this section I discuss how Carnap explains this distinction between pure and applied logic by reference to the distinction between mathematical and physical geometry. In the late 1930s, Carnap distinguishes the application of an object language L from both the construction of a logical syntax for L and the provision of a semantical system for that syntax – or what Carnap later calls the formalization and the interpretation of a language, respectively.16 An application of an interpreted logical calculus L is either a reinterpretation of the descriptive signs of a semantical system or, more typically, an interpretation of the primitive descriptive signs through the use of semantical rules which make reference to the empirical measurements, observations or experimental results belonging to some scientific mode of inquiry.17 Next we examine how Carnap explains the difference between pure and applied logic by comparing it to the distinction between mathematical and physical geometry.18 Generally speaking, geometry concerns the empirical measurement and comparison of spatial properties and relations using mathematical concepts like "point", "line" and "plane" defined relative to some multi-dimensional mathematical space, like the three-dimensional Cartesian coordinate system R3. Euclid was the first to formalize the mathematical part of geometry as a unique system of axioms and postulates which clearly stated how concepts like "point" and "line" can be interpreted in terms of what can be drawn on a piece of paper using nothing but a pencil and measuring instruments like a straight-edge and compass.

15 See, for example, Carnap's remarks about L-concepts in §16 of Carnap (1942).

16 See Carnap (1939). However, when Carnap has axiom systems specifically in mind, as in his 1950 probability book, The Logical Foundations of Probability, the application of an axiom system is the same as its interpretation; see §6, especially pp. 16 and 59.
It was only in the nineteenth century that it was discovered that the axioms of geometry can be studied independently of how concepts like "point" and "line" are interpreted. Specifically, it was discovered that there exist geometrical axiom systems which are consistent but in which Euclid's parallel postulate does not hold.19 Different choices of a set of geometrical axioms generate different geometries, each with its own class of mathematical theorems, conjectures and conventions. Mathematical geometry is concerned with studying what mathematical consequences hold in different geometrical axiom systems. Specifically, different systems of axioms, e.g., those systems corresponding to Euclidean or non-Euclidean geometry, specify different extensional relationships for the primitive geometrical signs like "point" and "line". Crucially, however, the mathematical consequences of these different axiom systems do not on their own have anything to do with the physical world: they belong to the realm of pure mathematics. Physical geometry, by contrast, provides clear rules for how the primitive signs of an axiom system should correspond to physical locations, objects and relations; for example, to the points and lines in a general theory of space and time, or to the longitudinal and latitudinal locations required for nautical navigation. In other words, physical geometry is the application of a mathematical geometrical system in the sense that coordination rules are given which specify how the primitive descriptive signs of an axiom system correspond to specific classes of physical objects or properties.20 In LSL, Carnap draws on this distinction between mathematical and physical geometry to explain the difference between pure and descriptive logical syntax.

17 This is sometimes done via coordinative definitions, or Zuordnungsdefinitionen; see Reichenbach (1920). Moreover, as we will see in chapter 4, Carnap later talks about the practical application of a semantical system – especially a system with logical concepts of probability – via the imposition of certain requirements restricting the possible interpretations of that semantical system; for Carnap, such requirements concern the methodology of the semantical system and do not belong to the semantics of the logic itself. For example, see §44 of Carnap (1962b), especially p. 204.

18 The subject of physical and mathematical geometry plays a prominent role in Carnap's views beginning with his 1922 dissertation, Carnap (1922); indeed, Carnap uses this distinction to explain the difference between pure and applied logic in one of his last (posthumously) published writings; see §4 of Carnap (1971a).

19 For Euclidean geometry the shortest distance between any two points ⟨x1, y1, z1⟩ and ⟨x2, y2, z2⟩ in R3 is equal to the magnitude √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²). In non-Euclidean geometries, however, this distance formula does not give the correct magnitude for the shortest path between two points, or geodesic. We would instead have to appeal to the more general mathematical conception of a metric space and then use this notion of a metric to calculate the distance between two points located on, say, a smooth manifold with an affine structure. For more details, see Carnap (1995, Part III).
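The geometric contrast at issue – the Euclidean distance formula as one choice of metric among many – can be put in compact standard notation. This is a minimal sketch in the usual textbook symbols (the metric tensor g and the curve γ are the conventional differential-geometric notation, not Carnap's):

```latex
% Euclidean distance between two points in R^3:
d(p, q) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}

% General line element on a manifold with metric tensor g_{ij};
% Euclidean geometry is the special case g_{ij} = \delta_{ij}:
ds^2 = \sum_{i,j} g_{ij}\, dx^i\, dx^j

% Length of a curve \gamma(t); the length-minimizing curves are the geodesics:
L(\gamma) = \int_a^b \sqrt{\textstyle\sum_{i,j} g_{ij}\, \dot{x}^i(t)\, \dot{x}^j(t)}\; dt
```

Different choices of g thus play the role that different axiom systems play in the text above: each fixes its own class of "straight lines" and distances, and none is "correct" prior to an application.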
Pure syntax, according to Carnap, is "nothing more than combinatorial analysis, or, in other words, the geometry of finite, discrete, serial structures of a particular kind" (LSL, 7; emphasis in original).21 Just as different mathematical geometrical systems can be constructed by investigating the mathematical consequences of different axiom systems with primitive descriptive signs, pure syntax, for Carnap, studies the countlessly many different ways in which a logical calculus, with primitive descriptive signs, can be constructed by choosing different rules of formation and transformation. Moreover, just as one may investigate different geometrical axiom systems from a purely mathematical point of view (and so independently of the possible applications of those geometries to specific scientific and engineering endeavors), Carnap invites us to treat the purely syntactic investigation of the possible kinds of logical form in a similar fashion, namely, as a mathematical activity independent of how logical calculi could possibly be interpreted and applied in order to express logical and empirical statements. Carnap's envisaged plurality of logical forms – including especially those heterodox logical calculi which are either non-bivalent or intensional – is explained not as the consequence of any philosophical account of knowledge, reason or conception of the world but rather as the result of an attitude or standpoint which he adopts toward both mathematics and logic. Carnap expresses this attitude with a principle of logical tolerance: "It is not our business to set up prohibitions, but to arrive at conventions" (p. 51).22 "In logic," explains Carnap, "there are no morals":

Everyone is at liberty to build up his own logic, i.e., his own form of language, as he wishes. All that is required of him is that, if he wishes to discuss it, he must state his methods clearly, and give syntactical rules instead of philosophical arguments. (LSL, 52)

No longer "hampered by the striving after 'correctness'" – that is, hampered by philosophical arguments and projects concerned with the ontological commitments of the language we employ – we are free to investigate the syntactic properties of different logical calculi and then choose that calculus which we have good reason to think is a fruitful candidate for structuring or framing scientific language. Simply put, Carnap rejects any foundational project which seeks to locate the ontological or otherwise metaphysical consequences of adopting one logical calculus over others. Carnap instead pictures an endless oceanscape of possible language forms ripe for philosophical exploration: "before us," says Carnap, "lies the boundless ocean of unlimited possibilities" (LSL, xv). We are free to explore whichever pure logical systems we wish.

20 For a more detailed explanation of how these constructions might go, see Carnap (1939).

21 What Carnap seems to have in mind is this: pure logical syntax is the spatial investigation of how symbols like '∃', ')' and '∧' can be made to form finite, discrete and serial structures, like sentences and open formulas.

22 As Carnap points out in LSL, he was not the first to formulate such a view. Karl Menger, in 1930, was the first to express this attitude in writing; see Menger (1979, 11–16).
By contrast, descriptive syntax, like physical geometry, is concerned with providing an interpretation for a logical calculus; in particular, it is concerned with applying the primitive descriptive terms of a language through a judicious process of selecting the right coordinative definitions, ones which resemble the standard interpretation of some historical language already in use.23 Thus, although we were initially free to construct whichever kind of logical calculus met our fancy, once we make the decision to interpret and apply that calculus in such a way that it is intended to capture the structure of some part of the exact sciences, e.g., particular theories in classical population genetics or particle physics, we are no longer free to provide just any old semantical interpretation of the calculus. We would instead be in the business of figuring out exactly how to construct a semantical system for our calculus which provides definitions for the primitive descriptive signs in our language that are sensitive to the modeling, experimental and theoretical activities of biologists or physicists.24 According to Carnap, foundational questions about science which most contemporary analytical philosophers would without much hesitation label as "metaphysical" or "epistemological" tend to confuse logical and empirical (most likely psychological) matters, granted that we already have in our possession, so to speak, an adequate logical syntax and semantics for the relevant language. The metaphysical question "Do neutrinos really exist?", for example, seems to ask a question about the world; namely, whether neutrinos really do exist or not. Carnap, however, does not assume that the notions of existence, logical truth or reference taken from natural language are "correct."

23 In Carnap (1939), Carnap distinguishes between two different ways of constructing both logical calculi and semantical systems. The first way is what Carnap calls descriptive, and it concerns theoretical investigations of the linguistic properties of historically given languages. For example, we may wish to explicitly provide an interpreted calculus for that snippet of English which corresponds to how scientists record their experimental results as declarative sentences. For Carnap, the question of how exactly an interpreted logical calculus should be constructed so that it captures, loosely speaking, the logical structure of the language snippet is an empirical question best answered by appeal to empirical linguistics and psychology rather than to more logic and philosophy. Alternatively, we could also construct an uninterpreted logical calculus or a semantical system from scratch, so to speak, by freely choosing whichever rules of formation and transformation (including rules of truth for semantical systems, if required) one wishes, without any pretense that the system is intended to resemble any actual language-in-use. It is in this sense that logic, for Carnap, can be conventional: the question of how we should construct a logical syntax is not theoretical but is rather a matter of preference and expedience relative to what we wish to accomplish with the calculus – after all, logical syntax is just the mathematics of how to combine the symbols which we call logical and descriptive signs (see §11 of Carnap 1939).
Instead, when asked whether neutrinos actually exist, Carnap would diagnose that question as either a question about the logical form of our language (e.g., are the syntactic and semantic descriptive signs used to formalize and interpret the concept of a neutrino primitive, or are they further reducible to expressions which contain other non-logical signs?) or as an empirical question which can be answered using the syntactical and semantical resources of that language (e.g., whether certain classes of factual sentences which contain the descriptive signs designating neutrinos are true or false). In the first case, the existence question becomes, for Carnap, a practical question about which kind of logical form is most expedient or useful for axiomatizing physical theories; in the second case, the existence question becomes a theoretical question about what can be asserted in a language system which already has well-defined syntactical and semantical rules.25 In other words, the metaphysical question itself is transformed into either a question about which pure logic we prefer to use or, once we have chosen a logic, a question about what can be expressed, as a theorem of that logic, once it has been applied to some empirical situation.

24 In other words, our syntactic conventions have to be put to empirical use: "In principle, certainly, a proposed new syntactical formulation of any particular point of the language of science is a convention, i.e. a matter of free choice. But such a convention can only be useful and productive in practice if it has regard to the available empirical findings of scientific investigation" (Carnap LSL, 332).

25 See Carnap (1950), where he introduces the nomenclature of internal and external questions to help explain this distinction; also see my chapter 4, pp. 112 ff.

Carnap's Logic of Science.
Now that we have the distinction between pure and applied logic under our belt, we can explain Carnap's logic of science in a bit more detail. In part V of LSL, Carnap remarks that the questions of any theoretical field, like biology or sociology, can be expressed as either 'object' questions or 'logical' questions, i.e., questions concerning the objects of the domain of a field, like a population of Drosophila in a biology lab, or questions about the logical syntax of a scientific language, like the logical syntax of the language used by evolutionary biologists when talking about fruit flies, respectively. Carnap is quite aware, of course, that from the perspective of logical syntax this distinction between 'object' and 'logical' questions is at best informal: 'object' questions are, for Carnap, really just 'logical' questions answerable by examining the logical syntax (and, later, the semantics) of that language.26 Nevertheless, according to Carnap, once we investigate the logical syntax of traditional metaphysical or axiological philosophical questions formulated in a natural language like English, we discover that these questions belong neither to the 'object'-questions of some scientific field nor to the 'logical'-questions about the logical syntax and semantics of a language. They are instead pseudo-sentences: "they have no logical content, but are only expressions of feeling which in their turn stimulate feeling with volitional tendencies on the part of the hearer" (LSL, 278). "Apart from the questions of the individual sciences," says Carnap,

only the questions of the logical analysis of science, of its sentences, terms, concepts, theories, etc., are left as genuine scientific questions. We shall call this complex of questions the logic of science.
(LSL, 279)

Next Carnap introduces the distinction between the material and formal mode of speech, a distinction with which he means to capture the difference between our customary ways of speaking when carrying out everyday, philosophical or scientific activities and the close examination of language which can be garnered from spelling out the logical syntax of a language. For Carnap, it is only when traditional philosophical problems – problems which are typically framed informally in ordinary language, or what Carnap calls the material mode of speech – are translated into logical syntax, viz. what Carnap calls the formal mode of speech, that it becomes possible for us to see why traditional philosophical questions are pseudo-questions.27 By translating the informal sentences scientists make when carrying out scientific activities into logical syntax, or the formal mode of speech, a logician can pinpoint the exact logical relationships between scientific concepts and terms contained in these informal sentences. For example, by focusing on the logic of science, philosophers can ask questions about the inter-definability and translatability of the terms of one language, say the language of evolutionary biology, into another language, like the language of physics. Nevertheless, what some contemporary philosophers may not realize is that Carnap never claimed that the point of logical syntax was to formalize all scientific activities and processes within a single logical framework.

The reasoning processes scientists go through in order to make their judgments about the success of experiments, how experiments are performed or how to evaluate the confirmability of theories given evidence may be left as pragmatic notions, i.e., they make use of concepts which refer to actual persons at a particular place and time. In Carnap (1936a; 1937a), for example, the notions of "testability" and "confirmability" are defined pragmatically, i.e., in reference to what actual scientists do when they employ these terms in scientific contexts. From the standpoint of logical syntax and semantics, "testability" and "confirmability" are then treated as primitive concepts which can be used to define a plethora of scientific concepts.28 This fact will be of relevance throughout this dissertation: it was never Carnap's aim, in LSL, to fully formalize inductive reasoning within a single logical framework, and it is only in his later work on inductive logic that he begins to formalize fragments of the kind of inductive and probabilistic reasoning used by scientists. In LSL, for example, Carnap formalizes only the declarative sentences stated by scientists using the resources of logical syntax, and he leaves any formalization of how scientific theories change over time, including the introduction of new theories into a logical syntax, to the methodology and pragmatics of scientific activity. He outlines how a logician could go about providing a logical syntax for the language of physics in §82 of LSL: aside from providing a logical calculus with semantic L-rules, the logician would also introduce both primitive descriptive signs – including how to formulate protocol sentences in the language – and semantic P-rules, or primitive physical rules, which characterize the basic physical laws of theoretical physics using newly introduced descriptive signs.29 Carnap is quite clear, however, that the P-rules do not formalize how new P-rules should be introduced into the physical system nor how the P-rules already in the system can be altered as new protocol sentences corresponding to new observational statements are introduced into the system. Instead, it is the logician or scientist who must, so to speak, manually introduce, modify or remove the P-rules of the system – instead of being inferred, P-rules, according to Carnap, are to be treated as hypotheses relative to a body of protocol sentences in the language (318). These hypothetical P-rules are never, in a strict sense, either completely falsified or fully confirmed:

When an increasing number of L-consequences of the hypothesis agree with the already acknowledged protocol-sentences, then the hypothesis is increasingly confirmed; there is accordingly only a gradually increasing, but never a final, confirmation. Furthermore, it is, in general, impossible to test even a single hypothetical sentence. [. . . ] Thus the test applies, at bottom, not to a single hypothesis but to the whole system of physics as a system of hypotheses (Duhem, Poincaré).

26 Presumably, the distinction between the 'object' and 'metalanguage' would, for Carnap, likewise be informal.

27 As Carnap himself puts the point: "Translatability into the formal mode of speech constitutes the touchstone for all philosophical sentences, or, more generally, for all sentences which do not belong to the language of any one of the empirical sciences" (LSL 313; emphasis in original).

28 Carnap, for example, defines different notions of reducibility in terms of these pragmatic notions of 'testability' and 'confirmability' in Carnap (1936a; 1937a).
(LSL, 318; emphasis in original)

In the 1930s, Carnap does not attempt to define a syntactic (or semantic) concept of "testability" or "confirmation" within the logical syntax of the physical language itself but instead treats these notions at the level of pragmatics: they concern how actual scientists or logicians come to evaluate whether a hypothesis is testable or confirmable relative to some body of scientific evidence.30 But no L- or P-rules are sacred – any of these rules may at some later point be revised or altered:

No rule of the physical language is definitive; all rules are laid down with the reservation that they may be altered as soon as it is expedient to do so. This applies not only to the P-rules but also to the L-rules, including those of mathematics. In this respect, there are only differences in degree; certain rules are more difficult to renounce than others. (LSL, 318)

Indeed, according to Carnap, within the context of the logic of science our practical decisions regarding the choice of a logical syntax are to be made on the basis of "practical methodological considerations":

The construction of the physical system is not effected in accordance with fixed rules, but by means of conventions. These conventions, namely, the rules of formation, the L-rules, and the P-rules (hypotheses), are, however, not arbitrary. The choice of them is influenced, in the first place, by certain practical methodological considerations (for instance, whether they make for simplicity, expedience, and fruitfulness in certain tasks). (LSL, 320)

29 My labeling of the P-rules and L-rules as "semantic" is both anachronistic and slightly misleading: for Carnap, the P-rules are the rules of the language which are not L-rules – there need not be a tight correspondence between "physical" and P-rules.

30 See, for example, Carnap (1936a;b; 1937a).
The L-rules and P-rules of a logical syntax for the language of physics are not provided to us as a consequence of accepting some a priori realm of reasons, the existence of some notion of transcendental agency or any docile deity, but rather as a consequence of scientists and logicians making "practical methodological considerations" on the basis of their scientific expertise and knowledge. They will then modify these L-rules and P-rules to the extent to which they find the current rules to be simple, expedient and fruitful. Thus Carnap provides us with no notion of a "meta-logic-of-science": no rules for how scientists or logicians should modify the L-rules and P-rules of a language of physics. This too is a practical matter, but it is a practical matter informed by the projects and concerns of working scientists and logicians. This, in a nutshell, is Carnap's response to Quine: he, Carnap, is not in the business of providing the correct theory of analyticity but only a characterization of analyticity relative to some language which will suit our scientific purposes. Carnap sees himself as offering to Quine different ways of applying a logical system, just as a mathematician could offer to a scientist different geometrical axiom systems. This is where conceptual engineering as an interpretive framework starts to do work. Carnap, as conceptual engineer, shows how the philosopher can contribute to foundational debates in the sciences: instead of appealing to disciplines like psychology, e.g., under the rubric of "naturalized epistemology,"31 to examine how scientists have used certain mathematical instruments produced throughout the history of science in their scientific reasoning, Carnap shows us how we could construct these tools from scratch.
2.2 Wissenschaftslogik: Critiques and Reappraisals

The image of Carnapian Wissenschaftslogik adumbrated in the last section may not be similar to the image of logical empiricism many analytical philosophers are familiar with. In "Two Dogmas of Empiricism" and his other writings, for example, Quine suggests that Carnap's use of symbolic logic to investigate the foundations of science in the Aufbau should be understood as a continuation of British epistemology, as Carnap, purportedly, tries to make good on Russell's attempt to use symbolic logic to rationally reconstruct the empirical world from sense data alone. Famously, of course, Quine argues that Carnap's foundational epistemology fails in one of two ways. We have already encountered the first way, that Carnap cannot adequately characterize analyticity in terms of L-truth. The second failure is that Carnap provides us with no reason to think that complicated, theoretical concepts, e.g., concepts from relativistic space-time theory, can be univocally logically reconstructed on the basis of observational concepts alone. In either case, Carnap, according to Quine, is engaged in an untenable foundationalist project. As an alternative, Quine suggests that we instead adopt a non-foundational and holistic approach to the foundations of science, an approach which does not countenance a clear separation between artificial and natural languages but instead draws on the conceptual resources of empirical psychology to inform our epistemological projects. Another worry about Carnap's logic of science is that it is, quite literally, on the wrong side of history. With his 1962 The Structure of Scientific Revolutions (SSR), Thomas S. Kuhn had a permanent influence on the way historians and philosophers study science and its history.

31 See Quine (1969).
Rather than adopting a view about the history of science which tracks the logical structure of scientific theories as they progressively get closer to the truth, Kuhn investigates the material history of how scientists are trained to do science using a specific set of assumptions, scientific concepts and techniques, or a "paradigm", and finds that, at least in cases of scientific revolutions, scientific communities do not smoothly transition from older to newer paradigms. The central insight is that there is no straightforward way to isolate a single notion of progress defined over changes in scientific theories within scientific communities. For post-Kuhnian philosophers of science, Carnap's logic of science is seen as embracing exactly that ahistorical, logically revisionist conception of scientific theories which Kuhn's SSR rejects in favor of a philosophy of science which is invariably intertwined with the history of science. Fortunately, there now exists a quite extensive Carnap reappraisal literature, pioneered by scholars like Alberto Coffa, Michael Friedman, Warren Goldfarb and Thomas Ricketts, which attempts to explain Carnap's philosophical views in his own terms rather than through the historical narratives bolstered by Quine or Kuhn. Much of this literature has focused, in particular, on Carnap's philosophy of mathematics, including not only Carnap's influences like Frege, Russell and David Hilbert, but also his later work on metalogic and his principle of logical
tolerance.32 In contrast to Quine's version of events, we now have much textual and historical evidence that Carnap, in his Aufbau, was not concerned with the foundationalist problem of alleviating Cartesian doubts but rather with the problem discussed by the nineteenth-century German-speaking Marburg neo-Kantian epistemologists: this is the problem of showing how scientific knowledge, through the activity of rational reconstruction, is objective, viz. intersubjectively communicable (see Richardson, 1998). Also, rather than Carnap and Quine being indefinitely at loggerheads, we find not only that they both reject "intuition" or "common sense" as an independent source of knowledge (see Creath, 1991) but that their separate approaches to the philosophy of science are nearly identical aside from a few methodological differences (see Stein, 1992). When it comes to the philosophical differences between Carnap and Kuhn, not only do we find that Carnap was sympathetic to a manuscript of Kuhn's SSR (see Reisch, 1991), there are plenty of similarities between Kuhn's talk of revolutionary/normal science in terms of "paradigm shifts" and Carnap's own talk of making the practical decision to adopt a linguistic framework (see Earman, 1993; Friedman, 2001; Irzik and Grünberg, 1995). Finally, from a historical perspective, the supposed grip the logical empiricists had on North American philosophy around 1950 doesn't quite fit the facts: although it is true that logical empiricism has left a lingering imprint on contemporary philosophy of science, logical empiricism, as a philosophical movement, was far from dominant in post-World War Two North American philosophy (see Creath, 1995; Reisch, 2005; Richardson, 1997a; 2002; 2007).33 Consequently, the Carnap reappraisal literature provides us with a subtle and complex account not only of Carnap's Wissenschaftslogik but of logical empiricism in general.
At the end of the previous section, for example, we found that in LSL Carnap does, loosely speaking, embrace some sort of holism for scientific concepts while simultaneously rejecting any foundationalist reading of his logic of science. And it is not as if Carnap leaves no room for sociological and historical investigations about the nature of science, provided, of course, that we recognize that such investigations belong to the methodology or pragmatics of science and not the logic of science – in later chapters, we will even see that the history of probability theory and statistics does in fact inform Carnap's work on inductive logic. Nevertheless, despite these interpretive efforts to clarify Carnap's mature philosophical project, we may still have lingering doubts about the adequacy of turning to logical machinery, like logical syntax, in order to answer philosophical questions. These worries are to be taken seriously. Far too often in contemporary philosophical discourse genuine philosophical questions are seemingly hijacked by irrelevant technical details and problems.

32 See, for example, Awodey and Carus (2007); Carus (2007); Coffa (1991); Creath (1992; 1996; 2003); Friedman (1999; 2001); Friedman and Creath (2007); Frost-Arnold (2013); Giere and Richardson (1996); Goldfarb and Ricketts (1992); Hardcastle and Richardson (2003); Reck (2013); Richardson (1994; 1996; 1997b; 2004); Ricketts (1994; 1996; 2003); Uebel (2007); Uebel and Richardson (2007); Wagner (2009; 2012).

33 For more of the sociological and larger historical perspective of the Vienna circle, see Cartwright et al. (1996); Stadler (2001); Uebel (2007; 2012b).
For example, in a paper originally intended for, but never published in, Carnap's Schilpp volume, Kurt Gödel attempts to isolate a tension between Carnap's attitude of logical tolerance and the application of logical systems to the empirical sciences.34 In what follows I outline Gödel's argument as found in Goldfarb (1996) and Ricketts (1994). First, Gödel notes that, for Carnap, it seems that the logical relations or rules which fix the consequence relations of a language should satisfy the following constraint: they must not determine the truth or falsity of empirical propositions. For to do so would mean that those relations or rules improperly classify such propositions as "analytic." Gödel calls those logical relations or rules which satisfy this constraint "admissible." However, if our logical rules really are admissible, then, by Gödel's second incompleteness theorem, a stronger metalanguage is needed to show that our logical language is consistent.35 But now it seems that all the important philosophical work has been relocated from the object language to the metalanguage. Carnap cannot now suggest that the decisions to adopt the rules of formation and transformation for the language are purely practical, as such decisions must now be informed by whether or not those rules are admissible. Carnap's appeal to logical syntax thus does little to ameliorate Gödel's concern about whether the rules of transformation are admissible – isn't this problem now best left to a logical analysis in the metalanguage, especially a natural language like English? As Ricketts (1994) points out, Gödel seems to presuppose that while the truth of analytic sentences is determined by the logical rules of the language, the truth of empirical sentences is determined, in some sense, by the world. In other words, "Gödel's definition of admissibility," says Ricketts, "employs a language-transcendent notion of empirical fact or empirical truth"

34 See Gödel (1953); Goldfarb (1995).
(180). Yet according to Ricketts, "Carnap, in adopting the principle of tolerance, rejects any such language-transcendent notions" (1994, 180). This is indicative of the philosophically radical nature of Carnap's views on the foundations of logic and mathematics and the application of logic and mathematics to the foundations of science. Given an attitude of logical tolerance, we are free to investigate (and here I adopt a spatial metaphor) a space of alternative logical forms or rules without presupposing that there are any antecedently given, well-defined notions of "fact", "verifiable" or "confirmable" according to which a logical relation or rule could be evaluated as admissible. Of course, as Ricketts clarifies, Carnap can appeal to the standards and methodology of science in order to articulate what Gödel may have in mind by "admissibility". But Carnap does not take such standards for granted; instead, Carnap understands his commitment to empiricism in a way similar to his commitment to tolerance. Neither is an assertion; rather, both are proposals. Thus Carnap's commitment to empiricism is to be understood as the adoption of a particular attitude; namely, that our current scientific language provides us with the standards of rational inquiry and empirical significance. In adopting a principle of empiricism, Carnap can appeal to the empirical standards of our current scientific theories in order to better inform our practical choices about which logical system will be satisfactory.

35 At least this is the case for sufficiently strong object languages.
Consequently, Carnap can only understand Gödel's concerns about whether our logical system is admissible after one has made the practical decision to embrace an empiricist attitude or stance – otherwise Carnap can at best make only informal sense of Gödel's attempt to characterize a notion of admissibility, or some other notion of "adequacy," relative to the empirical world. Whatever we may think of Gödel's argument and Ricketts's rendition of how Carnap could possibly respond to it, we now have a better sense of what is so revolutionary about Carnap's mature philosophical views. In contradistinction to philosophical methods, like conceptual analysis, which purportedly allow philosophers to "discover" the meaning of concepts or to obtain access to some realm of propositional facts in light of our intuition or a priori reason, Carnap's mature views emphasize the conventional, volitional and constructivist activities involved in investigating the foundations of science. Rather than answer philosophical questions about the nature of logic and mathematics by arguing that it is the case that X, Carnap, quite characteristically, constructs a language which contains the syntactical and semantical resources to express a question like X – but he never claims that his own logical reconstruction of X is logically, empirically or conceptually identical to X. But what is particularly philosophical about that?36 In a sense, the rest of the dissertation draws on the history of philosophy of science to try to provide some explanation, using my own account of conceptual engineering (from chapter 3) as an interpretive framework for explaining the philosophical upshot of Carnap's work on a pure inductive logic and his various attempts to explain how that inductive logic can be applied to the empirical sciences, especially the foundations of statistics and decision theory (see my chapters 4 and 5).
For the moment I want to discuss Carnap's own attempt to explain his mature views when, in 1945, he adopts the vocabulary of explication instead of Wissenschaftslogik.37 The method of explication, according to Carnap, concerns the "replacement of a pre-scientific, inexact concept (which I call "explicandum") by an exact concept ("explicatum"), which frequently belongs to the scientific language" (1963b, 933). More specifically, the method of explication is, for Carnap, a theory of scientific concept formation based on the historical observation that scientific concepts, after being initially introduced informally, later come to be replaced with more exact qualitative, comparative or quantitative concepts.38 The basic idea is that we first focus on an explicandum in natural language, call it C, the usage of which we agree is vague or inexact, and then study the ways in which C is inexact or vague by trying to clarify how it is used in ordinary speech. In which contexts is the term used? In those contexts, if we all agree that it is being used correctly, why is it useful? When is it being misused? This is the clarification step in an explication. After this step is finished, we next adopt some logical system, call it L, which already has well-defined syntactical and semantical rules. We then define, in L, one or more semantical concepts, call them 'C' and 'C†', which are each possible explicata.
Lastly, we can give an interpretation for 'C' and 'C†' in L and then investigate the mathematical properties of these new concepts; if we find these interpretations satisfactory, we can then apply the language L, which now includes the concepts 'C' and 'C†', to some domain of objects. We can then study how each applied explicatum measures up, so to speak, to our expectations regarding the usefulness and exactness of C in particular contexts. Carnap's talk of explication is none other than the process of locating an adequate application of a pure logic.

36 As Peter Strawson puts the point in Carnap's Schilpp volume: "For however much or little the constructionist technique is the right means of getting an idea into shape for use in the formal or empirical sciences, it seems prima facie evident that to offer formal explanations of key terms of scientific theories to one who seeks philosophical illumination of essential concepts of non-scientific discourse, is to do something utterly irrelevant – is a sheer misunderstanding, like offering a text-book on physiology to someone who says (with a sigh) that he wished he understood the workings of the human heart" (1963, 504-5).

37 Carnap first introduces this method in Carnap (1945b): it is not a coincidence that this paper is also one of his first published papers on the nature of probability and induction.

38 In general, Carnap talks about this method in the following places (this list is not exhaustive): §§1-6 and chapter IV of Carnap (1962b), Carnap's replies to Peter Strawson in Schilpp (1963) and Carnap (1956).
It is crucial to keep in mind, however, that what I call the measure of "success" for any process of explicating an explicandum with a particular explicatum is, for Carnap, not an all-or-nothing affair but rather a matter of weighing the differing degrees to which the explicatum satisfies a number of practical requirements; namely, the requirements of (i) similarity to the explicandum, (ii) exactness, (iii) fruitfulness and (iv) simplicity (The Logical Foundations of Probability; hereafter LFP, 7). According to Carnap, the reason why the explicatum should be exact is so that it can be introduced "into a well-connected system of scientific concepts", and a concept is fruitful insofar as it can be used to formulate "universal statements," like empirical laws or logical theorems (LFP 7). Of all the requirements, simplicity is the least important. Lastly, for Carnap there is no limitation on how many explicata we can design and construct – this is a consequence, it seems, of his attitude of logical tolerance. We will return to the details of Carnap's method of explication in chapter 4. Before we move on, however, it is important to note that the explicit use of a logical system is not always necessary for the provision of an adequate explicatum. In response to criticism in Strawson's contribution to the Schilpp volume, Carnap clarifies that he "[sees] no sharp boundary line but a continuous transition" between "everyday concepts and scientific concepts" (1963b, 934). In contrast to Carnap's method of rational reconstruction in the Aufbau, explications of concepts are not limited to artificial languages but can be carried out in natural language too. But that doesn't mean that artificial languages, like symbolic logic, have no use. "A natural language," Carnap explains,39 is like a crude, primitive pocketknife, very useful for a hundred different purposes.
But for specific purposes, special tools are more efficient, e.g., chisels, cutting-machines, and finally the microtome. If we find the pocketknife is too crude for a given purpose and creates defective products, we shall try to discover the cause of the failure, and then either use the knife more skillfully, or replace it for this special purpose by a more suitable tool, or even invent a new one. The naturalist's thesis is like saying that by using a special tool we evade the problem of the correct use of the cruder tool. But would anyone criticize the bacteriologist for using a microtome, and assert that he is evading the problem of correctly using a pocketknife? (Carnap, 1963b, 938–9)

The working analogy Carnap employs in this passage explores how using logic to study the foundations of science is similar to using a tool or instrument to accomplish some task. In the next section, after discussing how Carnap himself uses this analogy, I discuss a number of philosophers who adopt this engineering analogy to help illuminate Carnap's mature philosophical views.

39 Strawson uses the tool metaphor himself to describe the difference between two philosophical methods, Carnap's method of rational reconstruction and naturalism (here: ordinary language philosophy) (1963, 503).

2.3 Carnapian wissenschaftslogiker as Conceptual Engineer

"I admit that the choice of a language suitable for the purposes of physics and mathematics," remarks Carnap in the second edition of his book Meaning and Necessity,40

involves problems quite different from those involved in the choice of a suitable motor for a freight airplane; but, in a sense, both are engineering problems, and I fail to see why metaphysics should enter into the first any more than into the second. (1956, 43)

The context for this quotation is a discussion by Carnap regarding Quine's views on ontological commitment.
For Quine, questions about ontological commitment boil down to the logical details of how quantification over variables works in a language (42). For Carnap, of course, such questions about the logical form of a language amount to "a practical decision, like the choice of an instrument" which "depends chiefly upon the purposes for which the instrument – here the language – is intended to be used and upon the properties of the instrument" (43). Thus whereas Quine envisions ontological quandaries, Carnap discerns practical inquiries regarding which piece of linguistic machinery we could adopt. More succinctly put, Carnap states in his autobiography that

Whether or not [the introduction of a linguistic framework – CFF] is advisable for certain purposes is a practical question of language engineering, to be decided on the basis of convenience, fruitfulness, simplicity, and the like. (Carnap, 1963a, 66)

Here, the term "language engineering" may be understood in the context of what Carnap later calls "language planning" and is arguably indicative of his earlier interest in the development of artificially constructed languages, like Esperanto, after the First World War (Carnap, 1963a, 68; see Friedman, 2007). In chapter 4, we will return to how Carnap himself employs this engineering analogy to help explain his work on inductive logic. But now I want to shift the reader's attention to how this analogy has been used in the current Carnap reappraisal literature. Richard Creath uses the engineering analogy to help explain how Carnap addresses worries about adopting a non-circular account of the justification of beliefs about the basic postulates of a theory (Creath, 1992, 142-149).

40 For an earlier example of Carnap treating logic like a tool, see the last paragraph of LSL.
Typically, such a body of beliefs would be justified in terms of (metaphysical) intuitions, but Carnap, according to Creath, rejects this presupposition. "The axioms or postulates," Creath says of Carnap, "need no further epistemic justification because a language is neither true nor false, and one is free to choose a language in any convenient way" (1992, 144). Instead, it is we who can lay down such axioms and postulates and it is we who investigate where they lead us. For Carnap there is no further question about getting things "right" above and beyond the choice of these axioms or postulates: "the postulates (together with the other conventions) create the truths that they, the postulates, express" (e.g. see Creath 1992, 147). Any talk of epistemic justification is now to be replaced with a pragmatic inquiry: which postulates better or worse explicate some body of beliefs with respect to our theoretical needs? Creath calls this process of constructing and choosing satisfactory postulate systems – systems which are themselves revisable – "the engineering task of examining the practical consequences of adopting this or that system" (1992, 154). In what amounts to a crucial passage for understanding this engineering analogy in the context of Carnap's work on inductive logic, Creath applies this engineering conception to the example of the traditional problem of induction. Because for Carnap the "pragmatic cost" of a language without "inductive rules" would be too high, "the question is not whether to have inductive rules, but which":

Here again the matter is one of pragmatic comparison. If the rules are too weak, then we foreclose or complicate useful inferences. If the rules are too strong, then there is an increased chance that one inference will conflict with another, thus requiring constant and costly revision.
The virtues of security as contrasted with those of educational adventure will be weighted differently by different people, but we need not all agree so long as we make our respective choices clear. There is no uniquely correct system, and the choice among the alternatives is pragmatic. (1992, 154)

Thus it seems the usefulness of alternative constructions, according to Creath, can be explained in terms of an instrumental conception of rationality: relative to the adoption of some standard of evaluation V as a measure of our intellectual ends, X is a better choice than an alternative Y just in case X better satisfies V than Y. Notice that nothing has been said about why we would adopt V – all that is relevant is how the alternatives X and Y measure up, so to speak, to the demands placed on them by V. The same seems true for engineering: our practical needs and wants provide the standards for what we want to happen in the world but engineering, by its very nature, cannot inform us what our needs and wants should be. When it comes to Wissenschaftslogik, all we can do, it seems, is to specify our logical languages in as much detail as seems necessary and then investigate and evaluate which of those languages will fit our theoretical needs. But that doesn't mean that we must somehow produce a well-ordered preference ranking of logical languages. "Inconsistent languages," says Creath, "are pragmatic disasters, and so are languages without inductive rules":

It is not necessary to establish that a language is maximally or even minimally convenient before using it, but philosophic discussion (where it is not wholly misguided) must be pragmatic. Qua pure logicians our job is merely to trace out the consequences of this or that convention. This is an engineering conception of philosophy.
(Creath, 1990b, 409)

Exactly how Carnap can "trace out the consequences" of alternative inductive logics and then weigh the extent to which those consequences satisfy the wants and needs of scientists is a topic we will return to in chapters 3 and 5. An alternative way of understanding the engineering analogy, due to Samuel Hillier, explains Carnapian Wissenschaftslogik as an engineering activity tasked with producing a linguistic model of some empirical phenomenon. Specifically, Hillier (2007) attempts to provide an interpretive framework for understanding the Carnap reception literature by distinguishing between two independent interpretations of Carnap's logic of science. The first project, which Hillier dubs "THERAPY," focuses on the work of scholars like Thomas Ricketts and Warren Goldfarb which, according to Hillier, is concerned with explaining why, for Carnap, most epistemological and metaphysical problems are transformed into pseudo-problems, or problems without cognitive meaning (see Hillier 2007, 148 ff., especially 152-3). The second project, dubbed "EPISTEMOLOGY," concerns the interpretive work by Michael Friedman and Alan Richardson. Here the emphasis is on Carnap's Wissenschaftslogik in the 1930s as the study of the language of science, a study grounded in the clear separation of logical and psychological concepts. Hillier explains this project with an analogy to physics: in lieu of questions about the justification of the use of ordinary language and concepts, both the scientific philosopher and physicist "[make] certain simplifying assumptions and construct a model of that language" (160). Hillier argues that, according to the second project, Wissenschaftslogik is made up of two parts.
First, guided by the logical principle of tolerance, the logician constructs any number of models – here understood in the sense of logical languages – and, second, the logician locates some notion of "fit" between these linguistic models and scientific activity; after all a model, Hillier explains, is "accepted only if it accords with the thing that is to be modeled sufficiently well" (2007, 161). However this notion of "fit," argues Hillier, is not characterized within a logical language but is rather defined relative to whether particular models offer "more accurate representations, or are easier to work with, or whatever other advantages are usually associated with modeling in science" (162). In other words, when speaking of what it means, for Carnap, to prefer one language over others Hillier seems to assimilate together the syntactic preferences we may have for a language along with empirical measures of "fit" defined over pairs of linguistic models and the way the world happens to be.41 Consequently, Hillier's Carnap no longer seems to repudiate language-transcendent facts; instead, Carnap is now interpreted as appealing to a notion of "fact" precisely in the sense of what is being modeled or represented independently of a linguistic framework (186). Hillier then argues that once we stitch together these two interpretive projects, THERAPY and EPISTEMOLOGY, we end up with a "linguistic engineering" interpretation of Wissenschaftslogik (171). THERAPY is now understood as the conventional processes of designing models, and EPISTEMOLOGY is the empirical process of analyzing the language of science by "fitting" these models to the language scientists use (172). Specifically, Carnap's principle of tolerance, argues Hillier, applies only to formal languages, languages which can then be freely constructed (169, 182-3, 186). Those freely constructed languages now not only function as tools but as models for the language of science. 
Thus, for Carnap, "there is a fact of the matter that needs to be respected, namely the actual, logical structure of the language of science" and there is likewise a fact of the matter "whether or not the chosen formal language is a good model for the actual language of science" (187).

41 This notion of fit, for Hillier, is a measure of how well an explicated concept is similar, really in terms of truth-preservation, to the "target concept" (165).

In light of the summary of Carnapian logic of science I provided at the beginning of this chapter, we should find rather odd Hillier's suggestion that Carnapian Wissenschaftslogik depends on some notion of "fit" between logical and scientific languages. For starters, this notion of "fit" is a notion neither Friedman nor Richardson readily adopt and, secondly, both Friedman and Richardson take seriously Ricketts's suggestion (see above) that, for Carnap, there can be no appeal to language-transcendent facts. Indeed, the central presupposition of Hillier's version of Carnap as linguistic engineer is that the logician has ready access to some notion of an "accurate representation" which can be used to gauge the "fit" of any one of the language frameworks the logician may freely construct. But what is so revolutionary, philosophically speaking, about Carnap's logic of science is its lack of any in-principle reliance on any robust notion of empirical or logical truth, representation or meaning. This is the difference between Hillier's and Creath's versions of the engineering analogy. Hillier's notion of "fit," however, is perfectly understandable to Carnap after both a proposal has been made and accepted to adopt a principle of empiricism and a logical language has been applied to some empirical science. Within this applied context, Hillier's notion of "fit" can be defined pragmatically, viz.
as denoting the sort of inter-theoretic considerations actual scientists employ to rank hypotheses given their evidence. Indeed, for Ricketts, Richardson and Friedman, Carnap's commitment to empiricism is an expression of an attitude no different from the expression of an attitude of logical tolerance. "Carnap's lessons are historical and formal," says Richardson,

the epistemic success of the exact sciences is revealed in their history and is due more to precision and power of formal and mathematical techniques and how they are developed in empirical knowledge than to any other aspect of such science. Carnap sought to understand that process through the introduction of the self-same techniques and the self-same tolerance of formally precise linguistic forms in philosophy that one finds in the exact sciences themselves. This precision can then for the first time make tolerably clear what someone is committed to in being committed to, for example, empiricism. (Richardson, 2004, 74)

The standards of scientific discourse provide us with an example of the use and power of formal and mathematical techniques, and Carnap proposes that we adopt these standards as we investigate the foundations of the sciences using the artificial languages under active development by logicians and mathematicians – logic is, for Carnap, an instrument, but it is an instrument which is not assessed as a part of Wissenschaftslogik on the basis of its representational properties. Exactly here, however, the reader may begin to worry that Carnapian logic of science rests on an untenable circularity: the proposal to adopt a principle of empiricism affords a wissenschaftslogiker the conceptual resources required to apply their logical system to the empirical sciences, yet these resources are the very notions in need of philosophical clarification or explication.
To adopt the language of explication: only through the creative, engineering, act of constructing many different logical frameworks can we map out, so to speak, the possible ways of constructing different explicata. But because the explicandum is vague to begin with, there is no meaningful way to figure out whether any particular explicatum is "correct" or not without, it seems, appealing to extra-logical information about the applicability of each explicatum. One way of trying to make sense of this circularity is articulated in Carus (2007). There, Carus locates a "dialectical" relationship implicit in Carnap's views, which conceptually comes prior to Carnapian logic of science, between, first, "the evolved systems of intuitively available concepts interwoven with ordinary language" and, second, "the constructional systems of scientific and mathematical knowledge" (x). Carus then periodizes Carnap's intellectual thought into two stages. The first stage is similar to the project of the Aufbau; it is concerned with providing an objective (or rather, intersubjective) rational reconstruction of our scientific knowledge within a constitutional system, a system which is understood to replace our "evolved" conceptual system. The second stage is associated with Carnap's adoption of a principle of logical tolerance in LSL and amounts, according to Carus, to the implicit recognition of a dialectical relation between our evolved and constructed conceptual systems (x-xi).
Importantly, it was this first stage of rational reconstruction, which centers on the question of how "to decide – from some overall viewpoint resting at any moment, of course, partly on intuitions – what intuitions we want; which ones to keep and which to supersede", that Carus describes as an "engineering task" (17).42

42 Carus is here talking about our intuitions concerning which features of a logical language we find preferable to others.

Here Carus turns to a distinction Carnap makes in 1950 between internal and external questions – where internal questions are questions framed within a language system and external questions are practical questions about which language system we are willing to adopt – to explain this dialectical relationship. In some places, for example, Carus also adopts the vocabulary of "hard" and "soft" concepts to distinguish between constructed logical systems intended to replace "evolved" language and the decision to adopt such logical systems made from the standpoint of natural, "evolved" language, respectively. Indeed, for Carus, this is the standpoint of a "context of action, which overlaps to some degree with the Lebenswelt in which the participants articulate the values and preferences that guide their choices" (279-80). Carus here points to work by Howard Stein (e.g. Stein, 1992; 2004) in order to articulate a certain dialectical relationship between these "soft" and "hard", or evolved and constructed, languages:

[t]he explicative interaction between evolved and constructed systems takes the form not of wholesale replacement or superimposition, for Carnap, but of piece-meal exchange within the context of a dynamic mutual feedback relation.
(Carus 2007, 278)

Unlike Carnap's early method of rational reconstruction in his Aufbau, which replaces our evolved language with constructed language, explication, as Carus understands it, involves a feedback relation between the evolved and constructed concepts. This transition from rational reconstruction to explication signifies, according to Carus, the second stage. Here we are tasked with an engineering question concerned with whether the results of the above feedback relation are satisfactory for our practical ends. It is important to Carus that, when talking about explications, we distinguish between the task of clarification, which amounts to a sort of initial analysis of a vague concept, and the practical task of constructing and choosing between alternative explicata as (partial) replacements of the vague concept (2007, 20-1; 265-272; 278-9). Carus sees both tasks as crucial to an ideal of explication as a method suitable for reviving earlier Enlightenment projects: clarification concerns the tallying of preferences – as "desiderata" of different disagreeing parties (at least in the context of scientific communities) concerning which languages they prefer – and the choice of a language concerns the provision of a framework which the respective parties may each modify until a satisfactory language is found (30-1). As Carus clarifies,

[t]his is not simply a mechanical task of pasting together two incompatible languages; it obviously requires creative ingenuity – this is conceptual engineering. The outcome depends on the quality of this engineering. Occasionally a perfect synthesis can be found, but usually the solution in such cases is something of a compromise, which in practice fails to satisfy at least a few disputants on the fringes. These can then go on arguing, demanding that the compromise be reviewed, or they can walk out and start a new discipline.
Such an engineering failure can always be attributed to the impossibility of the task, but it can never be known for certain whether better engineering might not after all have done the trick in the end. (Carus 2007, 31)

Carnapian conceptual engineering, according to Carus, is a piece-meal, dialectical process for which there is no guarantee of success. In the last two chapters of his book, Carus attempts to flesh out Carnapian explication as an ideal of explication – an ideal because Carnap, according to Carus, himself never saw these implications for his project clearly – continuous with Enlightenment ideals which would function as a conceptual resource helpful (say) for resolving disputes in political theory (like the debates between Rawls and Habermas) by allowing us to use the above conventional framework to "engineer" concepts, for example, "to serve as tools for social and political interaction" (305). Thus we have a picture of how the circularity of Wissenschaftslogik can be explained: there is a dialectical relationship between (1) appealing to our "evolved" languages in order to clarify concepts and (2) replacing these "evolved" concepts with logically engineered concepts modeled loosely on the clarification of the "evolved" concepts. Another way of making sense of the circularity of Wissenschaftslogik is by drawing attention to the fact that Carnap's talk of treating languages as tools seems to coincide with Carnap's early work on empirical concept formation (e.g. in Carnap, 1926) and Carnap's early interests in the study of measuring instruments, or Instrumentenkunde, as practiced in the 19th century German-speaking world (see Richardson, 2013, 61-5).43 Just as metrologists use instruments as tools to define concepts of measurement, Carnap uses his metalogic, analogized as an instrument, as a tool to define, and so make explicit, certain scientific and philosophical concepts.
As Richardson points out, Carnap himself says as much in the preface to Carnap (1943); it is there that Carnap tells us explicitly that he regards his semantics as "a tool, as one among the logical instruments needed for the task of getting and systematizing knowledge" (Carnap 1943, viii-ix). Logic for Carnap, then, is a tool which can be used to enhance our mental capacities. The engineering analogy, for Richardson, is not an analogy (or metaphor) at all. Instead, "Carnap's considered view," says Richardson, "was that as a philosopher he engaged in the development of conceptual technologies for science and the science of science. This is Carnap the conceptual engineer" (2013, 65).44

43 Carnap's explications of prescientific concepts mirror, to a certain extent, the process scientific concepts undergo over time of becoming more exact or precise (e.g. see Chang, 2004).

44 Importantly, besides criticizing Carus's reading of Carnapian explication as belonging to the tradition of the Enlightenment, Richardson also raises various worries about the received importance of Carnap's "technical" conception of philosophy (2013, 71).

2.4 Carnap and the State of Inductive Logic at mid-Twentieth Century

Starting with this section, for the rest of this dissertation we will focus less on Carnap's views on logic and mathematics, including his conception of logical syntax and semantics, and more on how he uses these conceptual resources as instruments for clarifying the foundations of probability and induction.
By chapters 4 and 5, we will see how Carnap's work on inductive logic, which includes both his work on pure inductive logic and how inductive logic can be applied to decision theory and theoretical statistics, can be seen as a conceptual engineering project: Carnap shows us how his work in inductive logic could possibly be of use in the empirical sciences, especially when those sciences depend on a logical meaning of probability.45 Moreover, it is important to keep in mind that Carnap's later work on inductive logic is an example of how his earlier Wissenschaftslogik can be extended: in the 1930s, after all, notions like "confirmability" and "testability" were understood to be pragmatic concepts which were treated as primitive notions in the syntax and semantics of science – but by the mid-1940s, Carnap was attempting to explicate qualitative, comparative and quantitative concepts of degree of confirmation as semantic concepts defined in terms of L-concepts.46 By the time Carnap first published on logical probability in 1945, the year which marks the end of the Second World War, other members of the scientific philosophy movement – including both members of the so-called Berlin Circle, like Hans Reichenbach and Richard von Mises, and the Vienna Circle, like Herbert Feigl – had been working on philosophical issues on probability for the better part of three decades.47 But even Reichenbach, von Mises and Feigl were investigating a mathematical theory which was by the 1920s already quite mature: Reichenbach's own contributions to the axiomatization of the probability calculus aside, the early twentieth century saw the rigorous axiomatization of the classical theory of probability originally formulated in the seventeenth and eighteenth centuries by scientists like Laplace, Leibniz and members of the Bernoulli family, and a new measure-theoretic understanding of probability was deeply intertwined conceptually with the rise of probabilistic thinking in the empirical sciences, especially statistical mechanics and sociology.48 However, for members of the Vienna Circle, the central philosophical issues concerning probability and induction didn't so much concern the mathematical details of the probability calculus but rather how to understand the meaning of these probabilistic and inductive concepts – specifically, how to reconcile their empiricism with scientific endeavors which purport to produce scientific knowledge which trades in chances and uncertainties rather than certain knowledge and truth.49

It will come in handy to first provide the reader with a simplified version of the probability axioms. Without discussing too much mathematical detail, a continuous, finitely additive, probability function is characterized by the tuple (Ω, F, P) satisfying the following axioms:

1. P(Ω) = 1 (and, in virtue of P being a measure, P(∅) = 0).
2. P(A) ≥ 0, for all A in F.
3. (Finite Additivity) For any pairwise disjoint sequence of subsets of Ω, A1, ..., An: P(A1 ∪ ··· ∪ An) = P(A1) + ··· + P(An) = Σi P(Ai).

Here Ω is called the outcome space and is a set of the most basic, exclusive and independent, events; F is the event space, which is a set of subsets of Ω whose elements characterize all possible events; and, lastly, P is a measure function defined over the elements of F which satisfies axioms 1-3.50 Conditional probabilities are then typically introduced by definition: the probability of event A given event B, or P(A|B), is defined as the following ratio,51

P(A|B) =def P(A ∩ B) / P(B).

45 Nowadays it is customary to talk about the philosophical problem of how to interpret probabilities and to speak of different interpretations of probability. However, to avoid confusing the interpretation of a logical calculus with the interpretation of probabilities, I instead adopt the nomenclature of talking about the meaning of probabilities.

46 Although Carnap talks about all three concepts, I focus exclusively on classes of quantitative concepts.

47 Influenced by scientists like Henri Poincaré and Johannes von Kries, Reichenbach writes (incidentally, during the middle of the First World War) his 1916 dissertation on the concept of probability, which is aptly named "The Concept of Probability in the Mathematical Representation of Reality" (published in English as Reichenbach, 2008). In the 1920s and 1930s, Reichenbach then writes a number of papers in which he articulates a notion of probabilistic implication in a multi-valued logic; a summary and extension of his views can be found in Reichenbach (1935), which is in 1949 expanded and translated into English as Reichenbach (1949). See chapter 5 for more information regarding both Reichenbach's and Feigl's views on probability.

48 For systematic historical accounts of these developments see Gillies (2000); Hacking (2006); Keynes (1921); Porter (1986); Stigler (1986; 2002); Todhunter (1865); Von Plato (1998). Reichenbach and von Mises did attempt to provide their own axiomatizations for probability theory; however, most modern textbooks on the mathematical theory of probability, e.g., Billingsley (1995); Durrett (2005); Feller (1968), more or less follow the approach found in Andrey Kolmogorov's 1933 axiomatization of probability theory using measure theory, published in German as Grundbegriffe der Wahrscheinlichkeitsrechnung; other important early axiomatizations include Koopman (1940) and the work of the Polish, and Jewish, mathematician Janina Hosiasson-Lindenbaum, like Hosiasson-Lindenbaum (1940), before she and her husband, Adolf Lindenbaum, were murdered by the Nazis in 1942 (see Pakszys, 1998).

49 I say this only to begin to characterize the philosophical problems of probability that Carnap would have been familiar with in the 1940s (e.g., as discussed in Nagel, 1939). Of course, mathematical questions about the foundations of probability theory have been, and continue to be, of interest, especially questions concerning how to handle infinite sequences of events, which give rise to problems like the St. Petersburg paradox, or how conditional probability functions should be defined; e.g., see Bartha et al. (2014); Easwaran (2014); Hájek (2003; 2012). However, because Carnap either did not concentrate on these problems or was not aware of them (and as important as they are to contemporary formal epistemologists and philosophers of probability) I will not discuss them in the dissertation.
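To make the tuple (Ω, F, P) concrete, here is a minimal sketch of my own in Python (nothing like it appears in Carnap's text): a four-element outcome space on which the axioms above and the ratio definition of conditional probability can be checked directly.

```python
from fractions import Fraction

# Toy outcome space: the four results of two coin flips; 'H' heads, 'T' tails.
omega = [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

def P(event):
    """Probability of an event (a subset of omega) under the uniform measure."""
    return Fraction(len(event), len(omega))

def conditional(A, B):
    """P(A|B) = P(A ∩ B) / P(B), defined only when P(B) > 0."""
    assert P(B) > 0
    return P(A & B) / P(B)

# Axioms 1 and 2: P(omega) = 1, P(∅) = 0, and P is nonnegative by construction.
assert P(set(omega)) == 1 and P(set()) == 0

A = {o for o in omega if o[0] == 'H'}   # first flip lands heads
B = {o for o in omega if 'H' in o}      # at least one head
# Finite additivity, in inclusion–exclusion form ('|' below is set union):
assert P(A | B) + P(A & B) == P(A) + P(B)

print(P(A), conditional(A, B))  # 1/2 2/3
```

Exact `Fraction` arithmetic is used so that the checks are arithmetic identities rather than floating-point approximations.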
For self-proclaimed empiricists like von Mises, Reichenbach and Feigl, probabilities are defined over sets of hypothetical, but physically possible, sequences of events, viz., as the hypothetical limit of an observed relative frequency of some property which holds, or does not hold, of each event.52 For example, letting the possible results of flipping a coin infinitely many times be characterized by countably many sequences of random variables, viz. functions Si : Ω → {0, 1} where '1' denotes heads and '0' tails, if after a coin has been flipped n many times and m many heads in this sequence have so far been observed, then the relative frequency of heads in the observed sequence up to the n-th flip of the coin is the ratio m/n, which can also be expressed as the average S̄n = (S1 + ··· + Sn)/n. The trouble frequentists like von Mises and Reichenbach have, however, is that it isn't clear how – if at all – probabilities should be assigned to singular events, i.e., events about which no relative frequencies have so far been observed. For example, consider one of the central results from classical probability theory: the (weak) law of large numbers (LLN). This law states that, for any infinite sequence of independent and identically distributed binomial random variables X1, ..., Xi, ... with range {0, 1}, and assuming that each trial Xi has the same mathematical expectation, or mean, μ (i.e., E[Xi] = μ for all i = 1, 2, ...), then for all positive real numbers ε,

lim_{n→∞} P(|X̄n − μ| > ε) = 0,

where X̄n = (X1 + ··· + Xn)/n is the average of the first n trials. In plainer English, the result says that for any error term with a value in the positive real number line, as the number of trials approaches infinity, the probability that the absolute difference between the observed average and the expectation of the trials is greater than the error term is equal

50 More specifically, F is a sigma-algebra defined over Ω.
This means not only that ∅, Ω ∈ F, but that F is closed under complements and under (countable) unions and intersections: if Xi, Xj ∈ F, then the complement of Xi is in F, Xi ∪ Xj ∈ F and Xi ∩ Xj ∈ F.

51 Assuming, of course, that P(B) ≠ 0.

52 Actually, von Mises defines probabilities relative to a kollektiv, which is an infinite sequence of independent trials, like tosses of a coin. However, the technical differences between von Mises's and, say, Reichenbach's work on probability theory are not essential to the present point.

to zero.53 The frequentist has basically two problems if they want to apply this mathematical result from probability theory to any actual empirical sequence of events; for example, as a way to infer the value of the expectation that the same coin will land heads up when flipped based on the current relative frequency of heads for a large number of trials. The first problem is making sense of the mathematical assumption that, if the random variables S1, ..., Sm record the observed results of the first m many flips of the coin, then for some value μ*, E[S1] = E[S2] = ··· = E[Sm] = μ*. It is not at all clear how a frequentist can justify the claim that E[S1] = μ* (after all, the relative frequency of heads for only one flip which lands heads is 1/1 – but surely that doesn't license us to claim that μ* = 1). Obviously the frequentist can just assume that the coin does, in fact, exhibit a particular statistical distribution and then study different variations of the LLN based on what kind of distribution the average S̄m has for large m.54 The second problem is conceptually related to the first: how does the frequentist know, on the basis of their observed relative frequencies, that as m reaches infinity, the limit of S̄m exists?
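The convergence the weak LLN describes is easy to see in a short simulation (a sketch of my own, not part of any frequentist's apparatus). Note that the simulation builds in the very assumption the frequentist must justify: every flip is stipulated to have the same fixed expectation mu.

```python
import random

def relative_frequency(n, mu=0.5, seed=0):
    """Relative frequency of heads in n simulated flips of a coin with bias mu.

    Each flip is an independent Bernoulli trial with expectation mu, which is
    exactly the identical-expectation assumption discussed in the text.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    heads = sum(1 for _ in range(n) if rng.random() < mu)
    return heads / n

# Observed relative frequencies settle toward mu = 0.5 as n grows.
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))
```

No finite run of such a simulation, of course, settles whether the limiting frequency of an actual physical coin exists, which is precisely the second problem raised above.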
Even if after a million flips of the same coin in the same kind of way the observed relative frequency is, with an acceptable amount of error, very close to the value 1/2, couldn't it still be physically possible for "Nature" to all of a sudden "decide" to switch course and cause the coin to consistently land heads or tails, at least for the foreseeable future? The frequentist can provide no guarantee that this would be implausible: the physical structure of the coin may cause it to exhibit radically different statistical distributions in the long run.55 Alternatively, we could explain what probabilities mean not in terms of observed frequencies: instead we could represent the event space F in terms of the sentences (or propositions) contained in a logical system L and then define a probability function over these linguistic events. With a few modifications to the above probability axioms, the basic idea is that we should be able to define a quantitative logical probability function, Pr, over the basic sentences in L and then define a conditional logical probability function similarly as above, for any two sentences A, B in L (where B is not L-false):

Pr(A|B) =def Pr(A ∧ B) / Pr(B).

53 Note that just because an event A has probability zero, that does not mean that A is impossible. If the probability function μ is a Lebesgue measure defined over the unit interval, then for any finite set B ⊂ [0, 1], even if B ≠ ∅, it is the case that μ(B) = 0.

54 This is essentially what Reichenbach does; see §§49-51 of Reichenbach (1949).

55 For Reichenbach's attempt at a solution, which we will return to again in chapter 5, see his chapter 9, called "The Problem of Application," in Reichenbach (1949).
Conditional probabilities are of particular interest to proponents of the logical concept of probability as Pr(A|B) can be thought of as a generalization of logical implication: if B logically implies A, then Pr(A|B) = 1 and, otherwise, Pr(A|B) can be thought of as a relation of support or confirmation expressing how much the sentence B supports or confirms the sentence A. The suggestion that a logical notion of conditional probability captures, in some sense, some liberalized notion of logical implication is not my own invention; the idea can be found, for example, in the writings of a member of the Vienna Circle, Friedrich Waismann.56 Following Johannes von Kries and Ludwig Wittgenstein, Waismann defines a logical probability function by assigning equal probability values to each sentential description of each basic event – in other words, equal prior probabilities are assigned to the most basic, exclusive and collectively exhaustive, events.57 Probability values for more complex sentences are consequently fixed by the choice of these prior probabilities and can be found by using the probability calculus.58 Once provided with a probability function Pr which is well-defined over the sentences of L, it is simple enough (if not cumbersome) to make sense of the weak LLN. For example, we could codify sequences of coin flips by interpreting individual constants 'a1, a2, a3, ...' as denoting instances of a coin flip and then interpret the descriptive predicate 'H(ai)' to mean that the coin flip denoted by the constant 'ai' landed heads up. Then we simply need to supply the prior probabilities for the coin landing heads, e.g., as a single fixed value μ, Pr(H(ai)) = μ, for all individual constants indexed by i. Finally we could let the sentence Em describe the observed results of flipping a coin m many times and then define the corresponding relative frequency as the number of times the predicate H(x) holds among the first m many individual constants, indexed with the natural numbers 1, 2, ..., m, divided by m.
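The recipe of equal priors over basic descriptions can be sketched for a toy language (my own construction, not Waismann's: one predicate H and three individual constants a1, a2, a3). A "state description" fixes the truth value of every H(ai); each state description receives the same prior, and Pr of any sentence is the proportion of state descriptions in which it comes out true.

```python
from fractions import Fraction
from itertools import product

# Each state description assigns True/False to H(a1), H(a2), H(a3).
individuals = ['a1', 'a2', 'a3']
state_descriptions = list(product([True, False], repeat=len(individuals)))

def Pr(sentence):
    """Logical probability: fraction of state descriptions making it true."""
    holds = sum(1 for s in state_descriptions if sentence(s))
    return Fraction(holds, len(state_descriptions))

def Pr_cond(A, B):
    """Pr(A|B) = Pr(A ∧ B) / Pr(B), for B not L-false."""
    return Pr(lambda s: A(s) and B(s)) / Pr(B)

def H(i):
    """The sentence H(a_{i+1}), as a test on state descriptions."""
    return lambda s: s[i]

print(Pr(H(0)))             # 1/2
print(Pr_cond(H(1), H(0)))  # 1/2
```

One notable feature this tiny example exhibits: with equal priors over state descriptions, Pr(H(a2)|H(a1)) equals the prior Pr(H(a2)), so conditioning on observed flips leaves the probability of future flips unchanged, i.e., this particular measure never "learns from experience."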
There is the additional problem of how to handle infinite sequences in L (L has to contain infinitely many individual constants), but stating a version of the weak LLN for logical probabilities is now just a mathematical exercise (which I won't attempt to demonstrate here). What is important is that proponents of logical probabilities can use the expressive power of a language L to characterize the entirety of the basic possible events as the atomic sentences in L and then express the more complex events as those sentences in L which are formed by carrying out logical operations, like logical conjunction or negation, on the atomic sentences. While the onus on the frequentist is to explain how the mathematical probability calculus can be applied to observed regularities (e.g., whether the limit of a hypothetical relative frequency exists, or how to assign probabilities to singular physical events), the onus on the proponent of a logical meaning of probability is to provide some reason or justification for a method or procedure which assigns probability values to all the sentences in L.
56 See Waismann (1930). 57 For the details, including Waismann's connections with von Kries and Wittgenstein, see Heidelberger (2001). 58 Two useful consequences of the probability calculus are Bayes's rule and the principle of total probability. Bayes's rule says that, for any tuple (Ω, F, P) as defined above, P(A|B) = P(A)P(B|A)/P(B), and the principle of total probability says that, if {Ai} is a partition of Ω, then for any B ∈ F, P(B) = Σi P(Ai)P(B|Ai). Combining these two rules, we can reformulate Bayes's rule as P(A|B) = P(A)P(B|A) / Σi P(Ai)P(B|Ai), where {Ai} is a partition of Ω.
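The two rules stated in footnote 58 are easy to check with concrete numbers; the sketch below (with made-up probability values of my own choosing) verifies that the combined form of Bayes's rule agrees with computing P(B) by total probability first.

```python
from fractions import Fraction as F

# Hypothetical values for a two-cell partition {A1, A2} of Omega:
P_A1, P_A2 = F(1, 4), F(3, 4)        # priors, P(A1) + P(A2) = 1
P_B_A1, P_B_A2 = F(2, 3), F(1, 5)    # likelihoods P(B|A1), P(B|A2)

# Principle of total probability: P(B) = sum_i P(Ai) * P(B|Ai)
P_B = P_A1 * P_B_A1 + P_A2 * P_B_A2

# Bayes's rule, plain form and combined form:
plain = P_A1 * P_B_A1 / P_B
combined = P_A1 * P_B_A1 / (P_A1 * P_B_A1 + P_A2 * P_B_A2)

print(P_B, plain, combined)          # 19/60 10/19 10/19
```

Exact rational arithmetic makes it transparent that the two formulations of Bayes's rule coincide whenever P(B) is expanded by total probability.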
This is the central concern with the logical meaning of probability: how do we assign prior probabilities to all the sentences in L and, if we decide to assign probabilities on the basis of some syntactical or semantical properties of L, how could logical probabilities possibly be the basis for guiding our expectations about empirical events in the world; how could logical probability possibly be a guide in life?59 The idea that the same probability values should be assigned to similar events is a persistent theme of classical probability theory and is usually justified by reference either to the principle of insufficient reason or to a principle of indifference (arguably itself a consequence of the principle of insufficient reason), viz. that equal probabilities should be assigned to those events which are equally possible – where "equally possible" can be explained in terms of the state of ignorance of a reasoner or some physical symmetry, e.g., assigning the probability that a coin will land heads to be equal to the probability that it will land tails because of the physical symmetries 59 In the late 1950s and 1960s, there is resistance to the idea of induction being somehow dependent on a mathematically-constructed language. First, Wesley C. Salmon, after corresponding with Carnap in the late 1950s, argues that confirmation functions should satisfy the criterion of linguistic invariance (e.g. see Salmon, 1963). Second, Nelson Goodman's "new" riddle of induction, a problem first introduced (albeit under a different name) in Goodman (1946) and later clarified in Goodman (1955), was widely influential; that problem, in a nutshell, suggests that there is a substantial epistemological problem concerning which predicates are, a priori, the "correct" predicates we should use to formulate inductive claims.
As it turns out, however, in a handwritten note on the backside of a letter to Carnap, Goodman admits that he formulated the kernel of his worry only for Hempel, Helmer and Oppenheim's purely syntactic concept of confirmation and before Carnap's own work on inductive logic was published in 1945 (Goodman to Carnap, February 17, 1947; RC 084-19-09). of the coin.60 Nevertheless, an important scientific event in the late nineteenth century for the foundations of probability theory was the discovery, e.g., by the French mathematician Joseph Bertrand, that multiple applications of a principle of indifference which assign probability values to the possible outcomes of a physical set-up based on different symmetrical or invariant properties of that set-up may result in the assignment of different probability values to the same outcomes.61 Thus even grand appeals to metaphysical principles, like the principle of insufficient reason, do not afford the scientist a univocal method for assigning probabilities to basic events. But even if this weren't the case, the very idea that probabilities – let alone a scientific method of induction – can be grounded or justified on the basis of metaphysical principles like the principle of insufficient reason or the uniformity of nature is anathema to empiricist strictures. Indeed, the sentiment that inductive methods are of little help to scientific reasoning is voiced by influential scientists like Ernst Mach and Karl Pearson, and these inductive suspicions were shared by many members of the Vienna Circle (in contrast to the Berlin Circle, of which Reichenbach and von Mises were both members).
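Bertrand's observation can be reproduced by simulation. Three equally natural ways of picking a "random chord" of the unit circle, each licensed by a different application of the principle of indifference, assign three different probabilities to the same outcome (the chord exceeding the side of the inscribed equilateral triangle). The sketch below is illustrative only; the method names are mine.

```python
import math
import random

random.seed(0)
SIDE = math.sqrt(3)    # side of the equilateral triangle inscribed in a unit circle
TRIALS = 200_000

def endpoints():
    # Indifference over endpoints: fix one at angle 0, pick the other uniformly.
    t = random.uniform(0, 2 * math.pi)
    return 2 * math.sin(t / 2) > SIDE

def radial():
    # Indifference over the chord's distance from the centre along a radius.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d) > SIDE

def midpoint():
    # Indifference over the chord's midpoint, uniform in the disk (rejection sampling).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - x * x - y * y) > SIDE

for method in (endpoints, radial, midpoint):
    p = sum(method() for _ in range(TRIALS)) / TRIALS
    print(method.__name__, round(p, 2))   # approximately 1/3, 1/2 and 1/4
```

Each estimate converges to a different value (1/3, 1/2, 1/4), which is exactly Bertrand's point: "equally possible" underdetermines the probability assignment.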
In LSL, for example, although Carnap is quite amenable to the idea that the P-rules for a syntax for the language of physics may include probabilistic laws62 – presumably, Carnap has in mind here the kind of frequentist meaning of probability used in physics, especially statistical mechanics – and even though in the history of science there are plenty of examples of scientists appealing to some notion of induction to explain how new laws can be introduced into our scientific theories on the basis of the current evidence, or protocol sentences, Carnap remarks that63 this designation [viz., the method of induction – CFF] may be retained so long as it is clearly seen that it is not a matter of a regular method but only one of a practical procedure which can be investigated solely in relation to expedience and fruitfulness. That there can be no rules of induction is shown by the fact that the L-content of a law, by reason of its unrestricted universality, always goes beyond the L-content of every finite class of protocol sentences. (my emphasis; LSL, 317) Technicalities aside, like what Carnap means by "L-content", the point is simple. Granted that scientific laws hold universally and without restriction in the sense, presumably, that if a law is about some kind of physical set-up then that law holds for all instances of that kind of physical 60 For more on the classical theory of probability see Hacking (2006) and Todhunter (1865). Gillies (2000), in particular, has a nice summary of the problems with the principle of indifference. 61 Keynes (1921) is one of the first to clearly distinguish the principle of insufficient reason from the principle of indifference. Earlier critics of the principle include John Venn, Leslie Ellis and George Boole. 62 See LSL, p. 314. 63 Similar sentiments can be found in section 1 of Carnap (1926).
set-up, and, moreover, that no (non-trivial) inductive inference from a finite number of protocol sentences to a scientific law implies that that law holds for all possible future protocol sentences, then no such inductive inference allows us to infer the existence of a universally unrestricted scientific law. Thus, if all scientific laws are universally unrestricted, there can be no inductive rules which govern the introduction of scientific laws into our physical language based on a finite number of observational statements. Even if we could amass a collection of recorded observations about whether the sun has risen every day since the invention of cuneiform writing, there is no logical implication from the sum of this solar evidence to the law that the sun will always rise (even though this sum of evidence would, given most frequentist and logical meanings of probability, provide probabilistic support for the claim that the sun will rise tomorrow). Induction, for Carnap in the 1930s, is an activity scientists engage in which resists formalization into the logical syntax of the language.64 What distinguishes Carnap's earlier discussions of testability and confirmability from his later work on inductive logic is that his later work treats of a semantic rather than a pragmatic concept of degree of confirmation.65 It is such a semantic concept of confirmation which Carnap claims provides a possible explication of the logical concept of probability, and on the basis of this semantic concept Carnap then goes on in the 1950s to illustrate how it could be possible to construct an entire network of semantically defined inductive concepts where, so to speak, this semantic concept of degree of confirmation is the semantic knot which bundles the conceptual network together.
We will have a chance to untangle this remark in the following chapters and, in the next chapter, I provide a short narrative about the history and philosophy of engineering which I will then use as an interpretive framework for characterizing Carnapian inductive logic. Before ending this chapter, however, I will, first, quickly point to the relevant scientists and mathematicians most influential on Carnap's work on inductive logic and, second, highlight the relevant passages from Carnap's own writings in which he himself provides synopses of the aim of inductive logic. There are four scientists and philosophers working in the 1920s and 1930s on logical meanings 64 As Carnap poses the question to Reichenbach in 1929: "Could we, with the help of some inference process, infer from what we know to something "new," something not already contained in what we know? Such an inference process would clearly be magic. I think we must reject it" (quoted in Coffa 1991, p. 329). 65 In the 1962 preface to Carnap (1962b), Carnap states that in Carnap (1939) and earlier he had a pragmatic and not a semantic concept of confirmation in mind. of probability, including the closely related subjective meaning of probability, who are most influential for Carnap when he starts working on probability and induction around 1941.66 For the remainder of this chapter, I discuss this earlier work on logical probabilities and explain how Carnap distinguishes his own work on inductive logic from these earlier views. When the topic of the University of Cambridge occurs in a conversation about Carnap's philosophical views, the intended context typically concerns the emergence of analytical philosophy through the work of philosophical actors like G. E. Moore, Ludwig Wittgenstein and Bertrand Russell.
After all, it was Moore and Russell who, in their own separate ways, demonstrated how to philosophize using logical analysis and it was Russell (along with his co-author and teacher, Alfred N. Whitehead) who provided in the 1910s an axiomatization of logical type theory in Principia Mathematica. What is perhaps less well-known is the work on probability and induction underway at Cambridge in the 1910s and 1920s, especially by the Cambridge logician W. E. Johnson, who articulated a logical meaning of probability as a logical relation between propositions, and two of his more famous students: John Maynard Keynes and Frank P. Ramsey.67 In his 1921 book, A Treatise on Probability, for example, Keynes not only provided a detailed philosophical and historical summary of the foundations of probability but he also took the concept of knowledge as primitive and defined probability as a relation between propositions H and E which corresponds to the quantitative degree of certainty, or partial knowledge, of the hypothesis H given the evidence E. Keynes embeds his understanding of the probability relation within a particular epistemological context which he borrows from both Russell and W. E. Johnson. Logical probability relations, for Keynes, are objective and real relations which 66 Carnap first started to seriously think about problems of probability and induction at least as early as 1941, when Carnap was visiting at Harvard (Carnap, 1963a, 36). Feigl, however, dates Carnap's involvement with problems of probability earlier, to 1938. It was at an APA meeting in Urbana, Illinois that Feigl reports that he "urged Carnap to apply his enormous analytic powers to the problems of induction and probability [. . . ]. Carnap immediately began sketching in many hours of intensive discussion of what later became his great and influential work in Inductive Logic" (Hintikka, 1975, xvii). There is some evidence for Feigl's claim.
The Western division of the APA did meet in Urbana from April 14-16 in 1938 and there are documents in Carnap's archive in Pittsburgh concerning a Gewichtslogik written in the summer of 1938 (RC 079-20-02). As will become clear in chapter 5, I think there is reason to believe that Carnap was at this time also thinking about Reichenbach's notion of "weight" from Reichenbach (1938). In a document titled "Weight (degree of confirmation)" from 1941, Carnap defines an absolute notion of weight, 'aWt', and a relative notion of weight, 'rWt', and then defines, with a slight change to Carnap's notation on my part, rWt(a, b) as aWt(a+b)/aWt(b), where a, b represent, arguably, state-descriptions, but I can't decipher the German short-hand (RC 079-20-01, p. 1, December 2, 1941). 67 See Galavotti (2005; 2011b), Howie (2002) and, for more on Keynes in general, including G. E. Moore's influence on Keynes, see Skidelsky (2003). can be located in the logic of scientific theories. An example from Keynes' book illustrates this point quite nicely: When we argue that Darwin gives valid grounds for our accepting his theory of natural selection, we do not simply mean that we are psychologically inclined to agree with him; it is certain that we also intend to convey our belief that we are acting rationally in regarding his theory as probable. We believe that there is some real objective relation between Darwin's evidence and his conclusions, which is independent of the mere fact of our belief, and which is just as real and objective, though of a different degree, as that which would exist if the argument were as demonstrative as a syllogism.
We are claiming, in fact, to cognize correctly a logical connection between one set of propositions which we call our evidence and which we suppose ourselves to know, and another set which we call our conclusions, and to which we attach more or less weight according to the grounds supplied by the first. (Keynes 1921, 5-6) We will see in chapter 5 that Ramsey, in his 1926 article "Truth and Probability," also conceives of probability as a logical relation but is nevertheless critical of Keynes's claim that probability relations are real and objective. Rather than suggesting that the degree of belief or certainty attached to the Darwinian theory of evolution by natural selection is a consequence of the existence of a real and objective logical relation between that theory and a multifarious collection of empirical evidence (where both theory and evidence are expressed as sets of propositions), Ramsey argues that the degree to which a person is certain of Darwin's theory given their current evidence can be measured as their degree of belief in that hypothesis, a function of the betting quotients they would be willing to accept defined over the possible states of the biological world.
Nevertheless, for both Keynes and Ramsey a logical conception of probability, broadly understood, is closely tied to human psychology and behavior.68 In the 1920s and 1930s, Harold Jeffreys, a Cambridge-based astronomer, geophysicist and mathematician, co-authored with the mathematician Dorothy Wrinch a series of articles on the nature of the scientific method which provided a conceptual platform for his own conception of probability as an inductive logic, viz., a probabilistic logic based on a system of axioms in a way similar to how, according to Jeffreys, Russell and Whitehead based their own work on deductive logic on a number of primitive postulates characterizing deductive reasoning (Jeffreys 1939, 7-8, 16).69 Indeed, for Jeffreys, logical probability is central to the very idea of a scientific method: there is no way to reduce scientific method to merely deductive logic without, says Jeffreys, "rejecting its chief feature, induction" (1939, 2). Borrowing an idea from Karl Pearson's The Grammar of Science, Jeffreys argues that although the "materials" of scientific reasoning will change across scientific disciplines and fields, the scientific method remains invariant: "[t]here must be a uniform standard of validity for all hypotheses, irrespective of the subject" (7). Probability theory is, for Jeffreys, such a uniform standard. Moreover, Jeffreys was not one to shy away from appealing to the restricted use of invariance principles – like the principle of indifference – to assign similar probability values to physically similar or symmetrical events. However, although Jeffreys's reasons for appealing to the symmetries of physical systems to assign logical prior probabilities were couched in metaphysical language, his arguments for appealing to such principles stemmed less from metaphysical conviction than from methodological necessity. This is the strength of the logical meaning of probability: in the face of uncertainty, or a partial lack of empirical evidence, there are well-defined procedures for assigning similar events the same prior probability values.70 The final actor I wish to mention is the Italian mathematician Bruno de Finetti. First, it was de Finetti who articulated a purely subjective meaning of probability closer in kind to Ramsey's conception of probability than Keynes's or Jeffreys's. Second, from the 1930s to the 1980s, de Finetti made numerous contributions to the mathematical theory of probability and subjective decision theory, including defining probability functions over a certain kind of mathematical object called exchangeable sequences – a mathematical result which generalizes Carnap's work on a continuum of inductive methods.71 Lastly, de Finetti's work on a subjective conception of probability directly influenced the statistician L. J. Savage to attempt to lay the foundations of theoretical statistics on the basis of decision making under uncertainty (see chapter 5). Indeed, de Finetti was instrumental in providing the mathematical and conceptual framework for Bayesian approaches to statistical and scientific reasoning.
68 See the first two chapters of Keynes (1921), including Keynes's notion of "secondary" propositions which he borrows from W. E. Johnson. 69 Jeffreys is perhaps most well-known for his work on geophysics and as a vocal critic of the continental drift theory of the formation of Earth's surface as proposed by Alfred Wegener. It was with Wrinch that Jeffreys, partially as a reaction to Broad (1918), put together a general, probabilistic theory of scientific inference based on inverse probability which is based on the idea that simpler laws are to be assigned higher prior probabilities and more complex laws lower prior probabilities (Howie 2002, 106). These earlier papers inform Jeffreys's two later books, Jeffreys (1931) and Jeffreys (1939). Wrinch, who was a lecturer in mathematics at University College London when she collaborated with Jeffreys from 1919 to 1923, also attended lectures by both Bertrand Russell and W. E. Johnson, was an admirer of Wittgenstein and was something of a personal assistant for Russell until 1921 (Howie 2002, 109). Interestingly, it may have been Wrinch who first introduced Jeffreys to the logical work of Russell and Whitehead (2002, 90). 70 In this respect, Jeffreys also influenced the statistician and physicist E. T. Jaynes who, in the 1970s, published a series of papers in which he suggested that probability theory can be understood as an extension of deductive logic for which probability values can be assigned on the basis of empirically-informed invariances of physical systems. He then makes use of this logical meaning of probability in his work on information and entropy; see Jaynes (1957a;b; 2003). 71 See Good (1965); Skyrms (2012); Zabell (2005).
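Carnap's continuum of inductive methods, which de Finetti's work on exchangeable sequences generalizes, can be illustrated concretely. For a family of k mutually exclusive predicates, the degree of confirmation that the next individual falls in a cell already observed s times among n individuals is (s + λ/k)/(n + λ); the parameter λ weighs the logical factor (the cell's a priori share 1/k) against the empirical factor (the observed relative frequency s/n). The sketch below, with made-up numbers, is my own illustration of that formula.

```python
from fractions import Fraction as F

def c(s, n, k=2, lam=F(2)):
    """Carnap's lambda-continuum: confirmation that the next individual falls
    in a cell observed s times among n individuals, with k cells in total."""
    return (s + lam / k) / (n + lam)

# After observing 8 heads in 10 flips (k = 2 cells: heads/tails):
print(c(8, 10, lam=F(2)))               # lambda = k gives Laplace's rule: (8+1)/(10+2) = 3/4
print(c(8, 10, lam=F(0)))               # lambda = 0 is the "straight rule": s/n = 4/5
print(round(float(c(8, 10, lam=F(10**6))), 3))  # huge lambda stays near the a priori 1/2
```

Varying λ thus interpolates between purely empirical and purely logical inductive methods, which is exactly the sense in which there is a continuum, rather than a single canonical system, of inductive logics.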
Nevertheless, because delving into the complexities of both de Finetti's mathematical work and his philosophical background would require us to discuss Henri Poincaré and the early twentieth-century Italian pragmatists, I say relatively little, save for a few sections in chapter 5, about de Finetti or his various influences, latent or direct, on Carnap in this dissertation.72 These four different ways of articulating a logical, and also a subjective, meaning of probability provide the immediate historical context for how Carnap structured his own project of providing the foundations for a logical notion of probability, a project which is really tasked with showing how to construct many possible inductive logics based on one of many semantic concepts of degree of confirmation.73 For example, when Keynes and Jeffreys write full-length textbooks on probability and induction they do not try to re-invent the already extant field of probability theory but rather attempt to show how their new conceptions of probability can be used to reproduce already well-known mathematical results, like the weak LLN. Carnap follows suit in his own textbook on probability and induction, his 1950 book LFP: although he leaves much of the mathematical detail to an unpublished second volume of LFP, the main desideratum for any pure inductive logic is that it can reproduce and recover most of the central results in classical probability theory, statistical inference and estimation theory.74 Moreover, unlike Keynes and Ramsey, Carnap is at pains to separate the psychological and epistemological aspects of inductive reasoning from a purely logical meaning of probability.
It is for this reason that when Carnap begins to articulate how to construct an inductive logic he distinguishes between a pure and an applied inductive logic, a distinction which is parallel to the distinction between mathematical and physical geometry discussed at the beginning of this 72 That isn't to say that the topic of de Finetti's influence on Carnap, and vice versa, isn't of interest. For example, Richard C. Jeffrey points out that de Finetti cites Carnap's Aufbau as an important influence in one of the former's earliest published works; see Jeffrey (1989). 73 As we will see in chapter 4, Carnap is in fact a pluralist about the meaning of probability: the frequentist concept is useful in the empirical sciences, especially physics, whereas the logical concept is more useful in decision making and statistical inference; see Carnap (1945b). 74 I say "most" because Carnap, unlike Leibniz, for example, cannot appeal to any unrestricted use of a principle of indifference to assign logical probability values to sentences in a language (although see Hacking, 1971). Carnap, however, does acknowledge that he must appeal to a restricted version of a principle of indifference to assign logical probability values (Carnap, 1963a, 73). chapter.
It is for Carnap a pure inductive logic which can be applied, as we will see in chapters 5 and 6, to either a normative or an empirical decision theory just as a mathematical axiom system can be applied to physical geometry: specifically, a semantic concept of degree of confirmation can be applied insofar as it is coordinated, or interpreted, as a credence (or credibility) function representing the conditional (or absolute) degree of belief of an actual or ideal agent.75 When reading Carnap's later work on decision theory it is easy to collapse this distinction between pure and applied logic; but Carnap is not one to reify logical concepts: pure inductive logic has no direct implications for how rational agents should believe or act. Only by showing how his work in pure inductive logic could possibly be applied to the empirical sciences does Carnap explain how a logical concept of probability could ever be used to help guide our expectations about empirical happenings. Because Carnap, as a matter of practical decision, commits himself to some principle of empiricism, he cannot follow Jeffreys in relying on any metaphysical principle (regardless of whether its invocation is purely pragmatic), like a principle of indifference, to assign probability values over the sentences, or propositions, of a logical system. Or rather, of all the possible logical rules and procedures one could construct in the metalanguage for assigning logical probability values to each sentence contained in the object language, Carnap can at best state proposals or conventions which restrict the admissible rules or procedures which may be employed – but there is no metaphysical or epistemological justification for adopting these proposals or conventions: it is a matter of practical decision.
Exactly here the engineering analogy finds its niche: as an interpretive framework, the engineering analogy helps us to explain why Carnap's application of a pure inductive logic could possibly answer philosophical questions about the foundations of probability and induction without adopting any justificatory or more traditional (normative) epistemological vocabulary. Lastly, one should keep in mind that just as certain members of the Cambridge milieu, like Keynes and Jeffreys, were concerned with providing a logic of probability, Carnap's own ambition for his work on inductive logic is that it will eventually be possible "to construct a system of inductive logic that can take its rightful place beside the modern, exact systems of deductive logic" (1962b, iii). This language is not metaphorical: as Carnap later explains, 75 See Carnap (1962a; 1971b). inductive logic does not propose new ways of thinking, but merely to explicate old ways. It tries to make explicit certain forms of reasoning which implicitly or instinctively have always been applied both in everyday life and in science. (1953, 189; italics in original) It is here in the early 1950s that Carnap describes deductive logic as a theory of deductive reasoning continuously developed, from Aristotle on through to Frege, which allows us to replace deductive "common sense" with "exact rules" (189). Similarly, by "inductive reasoning," Carnap means "all forms of reasoning or inference where the conclusion goes beyond the content of the premises, and therefore cannot be stated with certainty" (1953, 189). Carnap, however, is clear that the point of inductive logic is not to eliminate any "non-rational factors" present in inductive reasoning resembling a "scientific instinct or hunch" (1953, 195).
Rather the "function" of inductive logic, says Carnap, is merely to give to the scientist a clearer picture of the situation by demonstrating to what degree the various hypotheses considered are confirmed by the evidence. This logical picture supplied by inductive logic will (or should) influence the scientist, but it does not uniquely determine his decision of the choice of a hypothesis. He will be helped in this decision in the same way a tourist is helped by a good map. If he uses inductive logic, the decision still remains his; it will, however, be an enlightened decision rather than a more or less blind one. (1953, 195-6) The imagery Carnap employs in this quotation assimilates inductive logic to a kind of map or guide and as such highlights the instrumental nature of his work on inductive logic: just as Carnap in the 1930s, as a consequence of his attitude of logical tolerance, treated deductive logic as an instrument, in the 1950s he likewise understands inductive logic as a kind of tool or instrument which may be used, either effectively or poorly, by scientists to make "enlightened," reasoned, decisions as opposed to "blind," arbitrary, decisions. 2.5 Conclusion Carnap tells us that "[t]he history of the theory of probability is the history of attempts to find an explication for the prescientific concept of probability" (1962b, 23). In 1949, Carnap writes to a young Kenneth Arrow saying76 76 The phrase "theory of behavior under uncertainty" occurs only in one place in the published version of Arrow's dissertation, which was funded by the Cowles commission, on page 88 (N.B. Carnap was reading a type-written draft of Arrow's thesis); see Arrow (1951). Arrow uses this phrase in reference to the lack of a theory of behavior under uncertainty required for investigating optimal economic systems, viz., optimal systems required for centralized planning. You speak of a lack of a well-developed theory of behavior under uncertainty (p. 111).
I think that for such a theory not only psychology but also inductive logic would be necessary, and that the lack of such a theory at the present time is due to the lack of the development of a satisfactory inductive logic. I hope to develop at least the foundations of such an inductive logic in my book. (Carnap to Arrow, June 29, 1949; ASP RC 084-04-02) Unlike in LSL, where Carnap was primarily worried about the foundations of mathematics and logic, Carnap's primary motivation for engaging in the technical project of constructing an inductive logic is to show how, once a satisfactory pure inductive logic is found, it can be applied in the empirical sciences. This is a historical example of how a scientific philosopher may attempt to use their logical machinery to help clarify the foundations of science. Inductive logic is, for Carnap, an explication of inductive reasoning based on a logical concept of probability; but unlike deductive logic, in the 1940s the field of inductive logic is still in its infancy. To invoke Carnap's own ocean metaphor, there is a vast ocean of inductive logics which have yet to be explored, and in the 1940s only partial methodological guidance exists for Carnap from the statistical sciences regarding which seas are more likely barren than not. In a letter to Hans Reichenbach, Carnap says: As you will see from my book, my objections are not directed against your theory itself. However, I believe, that in order to be applicable to the procedures of science your theory must be supplemented by genuinely inductive concepts. Some parts of your theory, for instance, the rule of induction, inductive inferences, and the concept of posit, contain implicitely [sic] and in a hidden way inductive concepts. Genuinely inductive concepts which I regard as necessary, cannot be reached from your basis, because you want to base everything on the frequency conception. The hidden inductive concepts must be made explicit and be systematized.
This, in my view, is the task of inductive logic. (Carnap to Reichenbach, November 18, 1949; ASP HR 032–17–15) According to Carnap, Reichenbach's work on a frequentist notion of probability implicitly contains – if it is to be applicable to the procedures of science – "hidden" inductive concepts which Carnapian inductive logic attempts to make both explicit and systematic. This chapter began with the claim that Carnap attempted to resolve foundational questions in science by proposing that the adoption of a linguistic framework is, in part, practical. More specifically, insofar as foundational questions in science can be formalized and codified within a logical system, the decision to adopt that logical system in contrast to any number of other systems is, in part, a practical decision: it is analogous to choosing an instrument (like a hammer) rather than any number of other instruments (like other carpentry tools) to achieve some task (like pulling a rusty nail from a solid piece of aged timber). The obvious philosophical objection, however, is that surely there is some theoretical, or objective, sense in which this formalization or codification is "correct" or "justified". Carnap argues otherwise: there are numerous ways to construct purely deductive or inductive logics which can then be applied to the sciences in the same kind of way that mathematical geometry can be applied in the empirical sciences. The process of application, moreover, is a methodological process: it concerns how a scientist may choose to coordinate the logical and non-logical, or descriptive, terms in a logical system with their empirical observations, experiments and measurement devices.
There is, for Carnap, no privileged and antecedent notion of the a priori or conceptual reason according to which we can "get things right": this, too, would be a matter of adopting a proposal: it is a practical matter.77 As we will see in the later chapters, Carnap attempts in his work on inductive logic to do explicitly, by a repetition of many purely volitional and creative acts, what over the course of the history of probability and induction several generations of scientists and mathematicians have failed to produce implicitly through first-order scientific inquiry: an adequate inductive logic applicable to the empirical sciences. Or at least this is how Carnap sees things; and it is his engagement in this task of making explicit and systematizing inductive concepts – for example, by constructing a pure inductive logic and then showing how it could possibly be applied to the empirical sciences – that I suggest is best understood as a kind of conceptual engineering. But these engineered concepts are, contrary to what Cassirer's epigraph at the beginning of this chapter may have led us to believe, neither wholly intellectual phantoms nor the subliminal fundament of scientific knowledge: for Carnap they are instead self-made concepts which have been designed by us and only imperfectly mirror the jumble of concepts already in use in the sciences and daily life. For Carnap in the 1950s, I argue, there is no deeper philosophical task concerning the foundations of science that we could be engaged in other than the task I would call conceptual engineering.78 77 This historically accurate picture of Carnap stands in stark contrast, I would suggest, to the caricature of Carnap as a foundationalist epistemologist recently popularized, for example, in Chalmers (2012). 78 Of course, there exist deeper first-order mathematical and scientific tasks.
The issue of how to characterize Carnap's earlier views (e.g., when he rejects empiricism in Carnap, 1923) and how best to historically narrate the winding path Carnap takes from his pre-Aufbau work to his later work on, say, normative decision theory is a highly complex, historical task which I do not take up in this dissertation – excellent starting places include Carus (2007), Richardson (1998), Uebel (2007) and Frost-Arnold (2013). Chapter 3 Philosophical Method as Conceptual Engineering Philosophically, Carnap was a social democrat; his ideals were those of the enlightenment. His persistent, central idea was: "It's high time we took charge of our own mental lives" - time to engineer our own conceptual scheme (language, theories) as best we can to serve our own purposes; time to take it back from tradition, time to dismiss Descartes's God as a distracting myth, time to accept the fact that there's nobody out there but us, to choose our purposes and concepts to serve those purposes, if indeed we are to choose those things and not simply suffer them. - Richard C. Jeffrey, "Carnap's Voluntarism" (1994) The plaintive call for a new engineering morality expresses a yearning to return to a time when engineers fancied themselves, in words which have already been quoted, "redeemers of mankind" and "priests of the new epoch." With the religion of Progress lying in ruins about us, we engineers will have to relinquish, once and for all, the dream of priesthood, and seek to define our lives in other terms. - Samuel C. Florman, The Existential Pleasures of Engineering, 2nd ed., (1994) Section 2.3 of the last chapter was dedicated to explaining why certain Carnap scholars, like Michael Friedman, Richard Creath, Alan Richardson, André Carus and Samuel Hillier, have attempted to explain how Carnap understood the philosophical significance of his technical projects in logical syntax and semantics by framing those projects as a sort of engineering activity.
In this chapter I draw on contemporary work on the history of professional engineering – specifically, on the activity of engineering design in contrast to engineering fabrication, production and maintenance – to help inform what I have in mind by the phrase "conceptual engineering," a conception of engineering which I suggest is more complicated and subtle than a mere implementation of means-end reasoning.79 79 Although software engineering or the history of computer languages is perhaps more closely related to what I call "conceptual" engineering than the automotive and aeronautical case studies I discuss later in this chapter, a detailed examination of these very technical subjects would require more space than I have available in this dissertation, and I will return to this topic in future work. Moreover, the historiography of software engineering is still in its infancy; but see Mahoney (2004). Another avenue of interest is how Wittgenstein's earlier work on aeronautics influenced his views in his Tractatus; see Sterrett (2002). In the next section I motivate this more subtle conception of engineering in a roundabout way. I first point out that if we choose to adopt both a more mainstream philosophical methodology resembling something like logical or conceptual analysis and a conception of engineering understood as the implementation of means-end reasoning, then philosophical activity would seem to be completely orthogonal to engineering activity. Second, I provide an alternative, more subtle, conception of engineering design that, I suggest, when used as an interpretive framework, provides us with a more systematic account of how Carnapian logic of science is a kind of conceptual engineering. The argument for this last claim, however, is an argument from illustration which spans chapters 4 through 6 of this dissertation. The proof of the pudding, so to speak, is in the eating.
3.1 Engineering as Means-End Reasoning In a series of autobiographical remarks from his posthumously published 2004 book, Subjective Probability: The Real Thing, the philosopher Richard C. Jeffrey says that instead of pursuing a doctoral degree in philosophy after receiving a master's in philosophy from the University of Chicago in 1952 under Carnap's supervision he immediately left to work at MIT's Digital Computer and Lincoln Laboratories.80 The reason for his flight from philosophy, Jeffrey tells us, is that he had "observed that the rulers of the Chicago philosophy department regarded Carnap not as a philosopher but as - well, an engineer" (2004, preface).81 Unlike Jeffrey's use of "engineering" in the epigraph at the beginning of this chapter, his use of the word "engineering" to describe the attitude of Carnap's peers at Chicago toward Carnap's technical work has a pejorative connotation. 80 In 1955, Jeffrey returns to philosophy as a PhD student at Princeton University co-supervised by Carl Hempel and Hilary Putnam – it is in his dissertation, finished in 1957, that Jeffrey invents his "probability kinematics" (see Jeffrey, 1957). Before Jeffrey returns to the Socratic fold, however, he works on a classified project on the design of a digital computer, codenamed "Whirlwind II." The original Whirlwind computer was first designed in 1947 in what was then MIT's Servomechanisms Laboratory, but that lab, facilitated by funds from the Office of Naval Research, was soon merged with the Digital Computer Laboratory (DCL) in 1951. Soon afterwards the DCL was incorporated into the much larger Lincoln Laboratories – which was then composed of five divisions – as a new "Digital Computer" division, or Division 6. Jeffrey was part of Group 62 of Division 6, led by one Norman Taylor, which was tasked with the "logical design" of a new prototype, "Whirlwind II" or, using the military designation, AN/FSQ-7. Numerous archival materials are now available online through MIT's Dome archives testifying to this fact. For example, while at DCL, between 1953 and 1955, Jeffrey wrote at least five internal memorandums on logical networks and their algebra, and according to one internal report for Division 6, there is a now unclassified memorandum 6M-3268 titled "Crosstell Input Element Specifications" written by Jeffrey (and other authors) dated January 6th, 1955 (MIT Dome, 6D-52-1, CASE 06-1104). For more on the history of Division 6 and their later contribution to the Semi-Automatic Ground Environment (SAGE) air defense system, see Redmond and Smith (2000). 81 Apparently, in a 1938 letter to Richard McKeon – who was then the head of the philosophy department – Morris Cohen describes Carnap as a "technician" (personal communication with Alan Richardson). Jeffrey doesn't spell out exactly why labeling a scientific or technical philosopher an "engineer" is an intellectual slight, but the basic idea seems simple enough to grasp. If the point of philosophical discourse is to arrive at, for example, representations of what the world is really like, what we ought to value unconditionally or how we ought to act in ethical dilemmas, then – seeing as how engineers are only concerned with getting us from the state of affairs A to the state of affairs B – no amount of engineering technique or know-how could possibly afford us answers to either (i) metaphysical or epistemological questions like what states of affairs really are or how we can have knowledge of them, or (ii) axiological questions like why we should value certain states of affairs over others. Regarding the first set of questions, philosophers qua engineers must borrow their methodology and language from the empirical sciences and thus must already be committed to a metaphysics and epistemology consistent with a scientific worldview.
Regarding the second set, philosophers qua engineers cannot explain why we should value engineering solutions to philosophical problems which promote human well-being and flourishing, reveal truths about the world and ourselves, or are simple and elegant any more than professional engineers qua engineers (rather than, say, members of a democratic community) can explain why their projects should reduce harm, be economical or be aesthetically pleasing.82 Another way of explaining the point is as follows. As I cannot pretend to know how to begin to accurately capture all the key features of contemporary, analytical philosophical methodology within a single framework, I ask for the reader's patience as I outline the framework for an idealization of a philosophical method resembling Socrates's elenchus or Moorean logical analysis. Simply put, these are methods which search for the "truth."83 There are three steps required to answer a philosophical question of the form "What is the nature of X?" The first step is the provision of a well-defined space of possibilities for what X could possibly be; for example, a space of reasons, possible worlds, concepts or sets of (true) propositions. The second step is the provision of empirical, conceptual, or logical plausibility constraints which are used to evaluate, in some way, segments of this space of possibilities; here I have in mind not only conceptual notions like a priori or conceptual truths, notions of rational agency or accounts of mental representation but also additional sources of information or knowledge, like commonsense, sense data or descriptions of phenomenological experience. 82 Engineers, after all, can function quite well in totalitarian or communist societies. For general introductions to the nature of engineering, especially the importance of engineering failure and the relationship between engineering and the humanities, see Florman (1996); Petroski (1992; 2012); Vincenti (1990). 83 For a more sympathetic rendering of contemporary logical or conceptual analysis, see Glymour and Kelly (1992); Soames (2003); Williamson (2007). Third, there is some method of search which, either literally or figuratively, searches through this space of possibilities in an attempt to locate, or at least get closer to, the correct, or target, possibility in the possibility space; most common, for example, is the method of putting forward arguments for the meaning of X until either a counter-example is found relative to the current plausibility constraints, upon which the argument is then modified, or an entirely new argument is put forward and analyzed.84 According to this idealized model of logical or conceptual analysis, philosophical activity is likened to the formation of a kind of optimization problem. Figure 3.1: Means-end Model of Engineering. Engineering, as defined in an introductory engineering textbook, is "in its most general sense, turning an idea into a reality - creating and using tools to accomplish a task or fulfil a purpose" (Blockley 2012, 1). Characterizations of professional engineering like this one tend to convey it simply as the activity of physically implementing a plan or design which resulted from instrumental, or means-end, reasoning. According to this view, what I call the "means-end model of engineering," the activities of engineering itself (represented by the lined box in Fig. 3.1) are sharply separated from the activities, products and decisions of non-engineers, including whomever hired the engineers and practicing scientists.
In order to do their jobs, engineers only need to know about the information from two "inputs" outside of their discipline (represented by the directional arrows in Fig. 3.1): first, the "inputs" from the employer, including the engineering problem itself, design specifications and safety/economic/resource/time-sensitive constraints; and, second, the "inputs" from the mathematical and empirical sciences, including empirical theories, predictions and models which can be adapted to specific engineering problems and tasks. Thus, as technically complicated and interesting as the problems of engineering may be, the job of engineers is nevertheless essentially instrumental: they use their scientific expertise to design and construct physical artifacts – the "output" – which they expect to satisfy, at least to the best of their ability, the values, needs and constraints specified by their employer (and professional codes of conduct, industry regulations and so on). 84 For example, an algorithm for this method of search may look something like this: (S1) Construct a new valid argument Γ for the correct meaning of X which is located in the space of possibilities; (S2) Check the soundness of each premise in Γ against the plausibility constraints; (S3) If the meaning of X given by Γ does not pick out any concept in the space of possibilities given the current plausibility constraints then stop the current search and repeat step (S1); (S4) If a counter-example is found for a premise in Γ then modify the faulty premise and replace it with a new class of premises, call the new argument Γ∗, and repeat step (S2) for Γ∗; (S5) Otherwise, output Γ as the correct characterization of X.
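The search loop sketched in footnote 84 can be rendered as a toy program. This is my own illustration, not anything Carnap or the dissertation provides: the candidate "meanings" and the plausibility constraints below are invented stand-ins, and actual conceptual analysis is of course not mechanical in this way.

```python
# Toy rendering of the (S1)-(S5) search loop from footnote 84 (my own
# sketch, for illustration only): propose a candidate analysis of X, test
# it against the plausibility constraints, and revise on counter-example.

def analyze(possibility_space, constraints):
    """Return the first candidate that survives every constraint (S5),
    or None if the space of possibilities is exhausted (S3)."""
    for candidate in possibility_space:         # (S1) propose a candidate
        failures = [c for c in constraints
                    if not c(candidate)]        # (S2) check each "premise"
        if not failures:                        # no counter-example found
            return candidate                    # (S5) output the analysis
        # (S4) a counter-example was found: move on to a revised candidate
    return None

# Hypothetical mini-example: "analyzing" the concept of an even number.
# Candidates are predicates on integers; constraints encode intuitive verdicts.
space = [lambda n: n > 0,             # candidate 1: "positive number"
         lambda n: n % 2 == 0]        # candidate 2: "divisible by two"
constraints = [lambda cand: cand(4),        # 4 must count as even
               lambda cand: not cand(7)]    # 7 must not count as even
result = analyze(space, constraints)
```

Here the first candidate is eliminated by the counter-example 7 and the second survives, which is the point of the idealization: once the space of possibilities, the plausibility constraints and the search method are all fixed, running the loop is a purely technical matter.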
It seems to be a consequence of both this means-end conception of engineering and the conception of philosophical method as a search for truth that engineering is only relevant to philosophical activity after philosophers have formed some consensus as to how to formulate their philosophical investigation, in the sense that they agree on which space of possibilities, plausibility constraints and method of search should be used to solve the philosophical problem. All the engineer has to do is then use their expertise and technical know-how (e.g., by writing up a computer program or drawing up complicated flowchart diagrams) to search for the best possibility from the space of possibilities given the plausibility constraints. But then there seems to be little overlap between engineering and philosophical activities: on the unlikely occasion that a group of philosophers actually does come to agreement as to how to formulate a philosophical question then – paradoxically enough – philosophical inquiry itself seems to come to an end. What is left is merely the working out of a technical answer to a technical question. All that is left is engineering. 3.2 Engineering Design There is a growing consensus amongst historians of engineering that a simple means-end conception of professional engineering is misguided; specifically, there is active resistance to the idea that engineering is best explained as an applied science.85 Rather than viewing engineers as technicians who apply ready-made products from the empirical sciences to construct artifacts, engineering is instead viewed as an activity which must occasionally produce new and original scientific knowledge independently from working scientists in order to adequately design and construct artifacts. 85 For more nuanced discussions of the nature of engineering, including engineering design, see Dym and Brown (2012); Johnson (2009); Vincenti (1990).
Engineering design, in particular, is one part technique and one part science: it is the process of gradually transforming a vague and abstract design problem – like building a cheap personal computer – into a well-defined hierarchy of more manageable sub-problems, problems which may require engineers to produce new knowledge and technical know-how in order to be solved.86 And as new technologies emerge, failures occur or aesthetic tastes or risks change, engineers will have to modify and even replace the components of this hierarchy in order to produce a satisfactory engineering product. The very notion that an engineering design could be "correct" is illusory – "correctness" is a moving target which requires a continual and piece-meal process of finding better ways of solving a protean problem. At the end of this chapter I argue that it is this piece-meal and hierarchical conception of engineering that provides the appropriate interpretive framework for understanding the revolutionary features of Carnapian Wissenschaftslogik. But before I say any more about either this conception of engineering or Carnap, I first discuss in each of the next two sections case studies from the history of engineering design. 3.3 Satisficing Wings and Propellers How should one design an aircraft? Engineers, of course, don't typically start off with such basic questions: they get hired to design particular components of aircraft which are intended to fulfill very specific tasks while satisfying any number of economic, environmental, safety or legal considerations. But employers themselves typically do not tell engineers how this imagined aircraft can be turned into reality using the resources and technologies already available to engineers.
A company like Boeing, for example, may decide to build a new line of jetliners that will have a large wing-span, fit X many passengers, have Y cubic meters of cargo space, and be cheaper, more fuel-efficient and easier to maintain compared to the kind of planes already in operation at most major airlines. From an engineering point of view, the task of building this new line of jetliners is an "ill-posed" or "ill-structured" problem: the problem itself offers little guidance regarding how one should make any number of important design and production decisions, like what overall design or archetype of the plane should be used, how many engines it should have, or how to solve any number of more specific design decisions, like how to design the wing airfoils and the fuselage for the body of the plane.87 86 The idea that engineering design problems are hierarchical comes from Vincenti (1990); see below. Figure 3.2: Hierarchical Model of Engineering Design and Knowledge. The figure is my own but the engineering terminology is taken directly from Vincenti (1990); specifically, the two lists on the right-hand side of the figure are cited verbatim from Vincenti. For the details about each level of the engineering design hierarchy see p. 9, for the list of engineering kinds, or "categories", see pp. 208 ff. and for the list of knowledge-generation activity see pp. 229 ff.
There are even further questions regarding how to design, construct and manufacture the components of the aircraft, like what kind of materials should be used to construct the wings or even more "mundane" questions regarding what kind of rivet should be used.88 The result is a hierarchy of problems for which any given solution to one problem may have consequences, both practical and theoretical, for the other problems (e.g., adding a more powerful engine will not only raise the production costs of the jetliner but engineers will also have to re-examine the structural supports for the wings and fuselage). 87 Vincenti (1990) uses the terminology of ill-structured versus well-structured problems, which he in turn takes from Simon (1973). According to Herbert Simon this terminology was first used by W. R. Reitman in the 1960s. 88 Vincenti (1990), for example, spends an entire chapter talking about the difficulties of finding an appropriate method to install rivets that are flush with the body of an aircraft. Vincenti calls this a design hierarchy, a pictorial representation of which is given on the left-hand side of Fig. 3.2. Not only do changes to the "top" of the design hierarchy, like the project definition, reverberate down to the design questions at the lower levels, but as engineers fail to find satisfactory solutions to the questions at the "lower" levels, or new knowledge is produced or technical tools discovered, engineers may find it more convenient or efficient to alter the project definition itself. With a nod toward Thomas S. Kuhn's distinction between normal and revolutionary science, Vincenti distinguishes between normal and radical design (Vincenti, 1990, 7-9; see Kuhn 1962).
While instances of radical design changes include decisions to switch the design of an aircraft based on Boeing's 747 "Jumbo Jet" to an airship like Deutsche Luftschiffahrts-Aktiengesellschaft's Graf Zeppelin, normal design changes are more modest: the overall archetype of the plane remains fixed but the components or sub-components of the aircraft design are altered. Most everyday engineering is concerned with such "normal" design problems and, because Carnap, in his work on inductive logic, is arguably worried more about "normal" design problems than radical ones, I focus exclusively on "normal" design.89 But what does it mean to say that the archetype of a design remains fixed? Vincenti borrows Michael Polanyi's notion of an operational principle as a way to codify the requirements, purposes and goals that an engineering design, and ultimately the physical objects modeled on that design, are intended to satisfy.90 Thus the operational principle for an airplane, or some other kind of object, provides an operational or functional definition of what kind of airplane the object should be. According to Vincenti, it is the operational principle that provides the criterion by which success or failure is judged in the purely technical sense. If a device works according to its operational principle, it is counted as being a success; if something breaks or otherwise goes wrong so that the operational principle is not achieved, the device is a failure.
(209) Airships and jetliners have different operational principles: they are designed to be successful or to fail in a number of different ways and their respective operational principles set the standard, so to speak, against which engineers and their employers can measure the success and failure of the final engineered product. 89 Here I would suggest that examples of normal design change for inductive logic would be changing how one defines semantic confirmation functions while a radical change would be to replace the semantic concept of degree of confirmation with another concept entirely, like providing a semantics for how scientists use the word "confirmation" in natural language. 90 See Polanyi (1958, 176 and 208). Consequently, normal design, for Vincenti, concerns engineers working with both a similar operational principle and also a tacitly agreed upon "normal configuration" of the artifact - i.e. "the general shape and arrangement that are commonly agreed to best embody the operative principle" (209). In other words, the operational principle fixes the task of designing the object while a tacit "normal configuration" operates in the background to regulate how the actual physical objects are manufactured and produced. Below we will see that practical and theoretical problems arise when an operational principle and a "normal configuration" of an artifact conflict; the way we design objects in the drawing room does not always smoothly transfer over to the machine shop, and vice versa: there is always a certain dynamic or interplay between the design and construction of the engineering artifact. One example of how the design of different sub-components of an aircraft may have conflicting practical and theoretical considerations is nicely illustrated in chapter three of Vincenti (1990). How should the flying qualities of an aircraft – i.e. "those qualities [. . .
] that govern the ease and precision with which a pilot is able to perform the task of controlling the vehicle" – influence the design of the aircraft?91 These qualities, says Vincenti, "are thus a property of the aircraft, though their identification depends on the perceptions of the pilot" (53). In particular, Vincenti is interested in two kinds of flying qualities.92 The first quality is the physical control the pilot has over the aircraft using, typically, a stick and pedals to mechanically move both the flaps on the wings and the horizontal and vertical tails of the aircraft. The design of these controls determines both how well the pilot can control the aircraft in order to achieve their plans and objectives and how much control the pilot perceives they have over the aircraft; in Vincenti's words, "[t]he effort required by these tasks gives the pilot a feeling of confidence or apprehension about the airplane" (53). The second quality concerns the inherent stability of the aircraft which93 has to do with the ability of an airplane, by aerodynamic action alone and without any corrective response by the pilot, to return to an equilibrium flight condition after a transitory disturbance, as might arise, for example, from a gust. (54; emphasis in original) 91 Vincenti 1990, 53. 92 To make this discussion of flying qualities manageable, Vincenti only focuses on longitudinal flying qualities. 93 Stability is defined in terms of an equilibrium of particular measurable properties of an airplane (Vincenti 1990, 59). The more stable the aircraft, the less likely it is to deviate from a flight path due to external disturbances. But that means the pilot must put in more effort to perform aerial maneuvers which deviate from the current flight path of the aircraft and thus the pilot may feel like they have little control over the behavior of the aircraft.
According to conventional engineering wisdom, "[i]nherent stability," says Vincenti, is important to flying qualities because the stable airplane resists initiation of a change in flight condition to more or less the same degree as it does a transitory disturbance. The unstable airplane, by contrast, responds readily, even perhaps excessively, to movement of the controls. Stability and control thus work at cross purposes, and the ease and precision with which a pilot can control an airplane depend as much on its stability characteristics as on the action of aerodynamic control surfaces. As they relate to flying qualities, stability and control are different sides of the same coin. (54) The notions of stability and control of an aircraft, then, are fairly straightforward theoretical notions which can be studied using a variety of mathematical and physical methods. More difficult to quantify, however, is the pairing of the subjective experiences of pilots with specific combinations of the flying control qualities: this is in part a practical problem which is sensitive to the preferences and expectations of different kinds of pilots. What does it mean, for example, in the technical vocabulary of control and stability, when a pilot says that a plane feels "sluggish" when making tight turns? This is a problem for engineers: given that engineers are tasked with designing military aircraft which will allow pilots to effectively and efficiently perform combat operations and maneuvers, how should they quantitatively measure the qualitative judgments of pilots and then use these qualitative reports to coordinate specific stability/control qualities of the aircraft with the expectations of experienced pilots? Once these questions are addressed the design engineer can then better tackle questions about how much stability is too much stability.
Crucially, this trade-off between control and stability did not arise because of some economic or theoretical constraint but rather, says Vincenti, [...] it came into being because of the practical needs and limitations of the human pilot. The balance therefore could not have been achieved on purely intellectual grounds and without extensive flight experience. It summarized a practical design judgment (based in this case on subjective opinion) of a sort that cannot be avoided in engineering. (107) Starting in 1918, a group of engineers working at a new laboratory at Langley Field in Virginia for the National Advisory Committee for Aeronautics first started to measure the subjective experiences of pilots.94 Using new measuring technologies (like an altimeter, tachometer and airspeed meter, p. 70), the engineers at Langley worked closely with test pilots to try to quantify the subjective experiences of pilots while simultaneously recording the quantitative results of these measuring instruments. 94 For those interested in the historical details, see chapter three, where Vincenti discusses how, before 1918, certain engineers on both sides of the Atlantic emphasized control or stability, or vice versa, until the early 1920s, when engineers realized that both stability and control were required, especially for military aircraft; also see Bloor (2011); Gibbs-Smith (1960; 1966). The result was the creation of a quantifiable method for measuring the control of an aircraft as a function of both stick-fixed stability (stability when the control joystick is held fixed by a pilot) and stick-free stability (the stability when a pilot releases the stick) (68 ff.). In 1936 Edward Warner, who was then an engineer working for the Douglas Aircraft Company, wrote a report which, Vincenti says, "embodied for the first time the notion that desired subjective perceptions of pilots could be attained through objective specifications for designers" (81). Here is how Vincenti summarizes the results of these historical events: The road from the recognized but ill-defined problem of 1918 had been a long and complicated one. The idea that subjective pilot preferences could be embodied in objective design requirements, itself the product of a decade and a half of learning, had been validated by producing a set of requirements that accomplished that job. From here on, the problem of flying qualities was conceptually a different ball game. Research engineers could now devote themselves to refining and extending the requirements with confidence that the idea was useful. Designers at the same time had a greatly improved understanding of what was wanted in flying qualities and explicit specifications at which to aim. They didn't always succeed, of course; knowledge of how to design a given requirement still left much to be desired. [...] Their problem now, however, was mainly one of designing (i.e., proportioning) the airplane rather than deciding at the same time what to design for. (97) This quote, I suggest, is indicative of a distinction between the practical and theoretical that is central to the activity of engineering design. For any piece of machinery or technology there is of course a wide array of theoretical facts detailing what will, can and should happen to that artifact under many different kinds of conditions: the stability of an aircraft with a particular fuselage, for example, is a property we can infer from our knowledge of aeronautics. Practical considerations, nevertheless, play an important role in deciding which fuselage to use: here the subjective preferences of pilots will play a nontrivial part, along with economic or strategic factors, in deciding which fuselage to use in the design of the aircraft. There is, in a sense, a dynamic interplay, or feedback loop, between the practical needs of pilots and military organizations and the engineering knowledge generated by aeronautical engineers at places like Langley Field. Once theoretical results were found which could satisfy most of the practical demands of pilots, the original ill-structured problem of balancing control and stability becomes more tractable. Of course, whether these practical demands were met, viz. whether the trade-off between control and stability used in various designs of military aircraft was ultimately successful, is a question that can only be answered when active military pilots actually use the aircraft: Though conformity with the quantitative flying-quality specifications can also be measured in flight, the final test there remains the pilot's subjective reactions. The flying-quality specifications retain their function as means – a design guide – and resist becoming an end. [...] Thus, for the designer, the quantities set down in performance specifications are themselves objective ends; the quantities prescribed in specifications of flying qualities are objective means to an associated subjective end. (100) Only after much engineering trial and error was it possible for design engineers to transform the practical considerations of pilots into theoretical constructs which could be written down in blueprints required to manufacture and produce aircraft that balance control and stability. But this process was not a simple piece of means-end reasoning: engineers had to re-think, on different occasions, how to define what it meant to quantify the subjective experiences of having "control" over an aircraft. Next I discuss another example from Vincenti (1990) about the work by two aeronautical engineers, William F. Durand and Everett P.
Lesley, who designed and empirically tested aircraft propellers in the 1910s and 1920s. From a certain perspective, the problem of designing propellers is a simple optimization problem: after constructing models of different kinds of propellers, one simply has to find some way to quantify the relevant properties of these propellers and then test them, e.g., in a wind tunnel, until an optimal propeller design is found. In other words, each propeller kind belongs to some point in a state space S and all we have to do is evaluate each point in S in terms of some utility function U and then use linear programming, or some other optimization method, to find that point s in S such that U(s) is the maximum value of U (restricted to S). In practice, however, this optimization process is not so clear cut: sometimes what matters most to an engineer is finding "good enough" states in S which come "close enough" to a maximum value of U. As it turns out, it is both expensive and difficult, if not impossible, to build and test all possible propellers characterized by a point in S. Instead, engineers must test only a finite sample of points in S and they must make these tests using miniaturized and scaled propellers. Specifically, engineers must appeal to what Vincenti calls the method of parameter variation, which is95 the procedure of repeatedly determining the performance of some material, process, or device while systematically varying the parameters that define the object of interest or its conditions of operation.
(139)

Engineers also need to depend on some theoretical law of similitude in order to extrapolate the performance of a full-scale propeller from the performance of a smaller scale-model propeller, where the measure of performance for the scale model is a special kind of quantitative magnitude called a dimensionless group.96 Using the method of parameter variation, engineers can classify together similar propellers as a single propeller design, the performance of which can then be measured, in virtue of the law of similitude, as a function of distinct quantities measured experimentally using a wind tunnel. Some function of these quantities can then be shown to form a dimensionless group, providing a measure of the performance of propeller designs. Then the engineer can try to maximize the value of this measure over all possible propeller designs. However, the question of what exactly should be optimized is not trivial. Propellers work, basically, by transferring the rotative power of the engine into propulsive power for forward movement, and so the "success" of a propeller can be understood in terms of how mechanically efficient a certain propeller is at transferring rotative into propulsive power (141). Thus, as Vincenti clarifies, the question of whether a certain propeller design is successful or not depends on the prior choices which have been made concerning the engine and the aerodynamical features of the wings and fuselage of the aircraft, as these are the sort of properties which would causally affect the forward movement of the aircraft (141).
Moreover, even though engineers knew how to design propellers in terms of a finite number of parameters, like the mean pitch ratio of a propeller, in the 1910s there was no systematic collection of empirical data about the efficiency of propeller designs, nor was there any systematic theory or mathematical model for how the efficiency of different propeller designs was related to each other.97 Thus there was no prior theoretical basis to which engineers could appeal in order to claim that certain kinds of propeller designs would in fact be more or less efficient with different kinds of aircraft designs; indeed, the initial theorizing about air propellers was done by analogy with the work by naval engineers on marine propellers.98 In 1916, the NACA funded a preliminary study to empirically test a limited number of propeller designs by building a wind tunnel at Stanford University.99 The design of a propeller was understood in terms of five shape parameters, whose values I denote by r1, r2, r3, r4, r5, one of which is the mean pitch ratio; I here represent these values as a vector r (= ⟨r1, r2, r3, r4, r5⟩).

95 For a brief history of this method, see Vincenti, 1990, pp. 138-141.
96 Vincenti defines a dimensionless group as "a mathematical product of two or more quantities arranged such that their dimensions (length, mass, and time, or combinations thereof) cancel, leaving a "pure number," that is, a number without a dimension" (140).
97 The mean pitch ratio of a propeller is defined by Vincenti as "a measure of the angular orientation, relative to the plane of propeller rotation, of the blade section at some standard representative radius" (148). In other words, it is a measure of how much the blades of a propeller are "twisted" relative to the vertical axis parallel with the front of the aircraft.
For the 1916-17 measurements, only forty-eight propellers were tested (3 different values of the mean pitch ratio and two values for each of the other four parameters yields 3 × 2 × 2 × 2 × 2 = 48 distinct possible propeller designs). Three other parameters were also included in the mathematical model for the efficiency, or performance, of a propeller used by the engineers Durand and Lesley: V is the forward speed of the aircraft while D is the diameter and n the revolutions per unit time of the propeller (146). The result is a model of the performance of a propeller in terms of a function F of V, n, D and r. Several different empirical measurements were then made for each of the specially-constructed three-foot model propellers in the wind tunnel while the values of V and n were simultaneously varied. A law of similitude, based on the earlier work of the Parisian structural engineer Gustave Eiffel,100 was then used to measure the efficiency of each propeller, η, as a dimensionless group based on the ratio V/nD.

98 See Vincenti, 1990, pp. 141-2.
99 For more details see Vincenti, 1990, pp. 142-159.
100 See Vincenti, 1990, pp. 142, 151.

The resulting equation for the performance of a scale model is this (150):

η = F(V/nD, r).

The advantage of this simplified equation, according to Vincenti, is that, for any particular kind of propeller represented by some vector r, we only need to plot the η values against the values of the dimensionless group V/nD, or what Vincenti calls efficiency curves, in order to figure out where η is maximized. Simplifying the problem a bit, for any full-scale propeller with shape and diameter parameters r∗ and D∗, all we need to do in order to calculate the efficiency of the propeller with the parameters r∗ and D∗ is to find, for any values n and V, the value of V/nD (for the scale-model propellers) that maximizes η – call it Δ(V/nD).
Then the most efficient values of V∗ and n∗ for the full-scale propeller design r∗ and D∗ are all those empirically feasible values of V∗ and n∗ (that is, feasible in terms of the specifications of the Stanford wind tunnel) such that the following equation holds:

V∗/n∗ = Δ(V/nD) × D∗.

Notice first that the resulting mathematical model is an example of the kind of empirical knowledge engineers produce on their own, independently of collaboration with scientists; this is an example of why engineering cannot be easily assimilated to mere means-end reasoning.101 Second, notice that this example offers philosophers a glimpse at how difficult engineering can actually be: even for reasonable values of r where each ri only has X possible values, there are exactly X^5 many possible propeller designs which would need to be constructed (as scale models) and empirically tested (with co-varying values of V and n) in order to explore the entire "space" of propeller designs; if X = 8, for example, then engineers would have to perform over thirty thousand tests – one for each different kind of propeller – in the wind tunnel. Notwithstanding their resolve, few engineers would have the time, funds or resources available to them required to test every single possible propeller design for large values of X.102 For very large, multidimensional state-spaces, engineers have to find some theoretical crutch, like a law of similitude, which would allow them to reduce the number of possible solutions that need to be tested.
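To make the combinatorics and the similitude step concrete, here is a minimal Python sketch. It is my own illustration, not anything found in Vincenti: the functions, the sample advance ratio of 0.75 and the diameter of 4.0 are invented values used only to show the arithmetic.

```python
from math import prod

def design_space_size(values_per_parameter):
    """Number of distinct propeller designs when each shape
    parameter can take the given number of discrete values."""
    return prod(values_per_parameter)

def full_scale_speed_ratio(best_advance_ratio, full_scale_diameter):
    """Law-of-similitude step: the model-test advance ratio V/nD
    that maximizes efficiency fixes the full-scale ratio
    V*/n* = (V/nD) x D*."""
    return best_advance_ratio * full_scale_diameter

# Durand and Lesley's 1916-17 study: 3 pitch-ratio values and
# 2 values for each of the other four shape parameters.
print(design_space_size([3, 2, 2, 2, 2]))   # 48 designs

# With X = 8 values for each of the 5 parameters the space explodes:
print(design_space_size([8, 8, 8, 8, 8]))   # 32768 designs

# An invented model-test optimum V/nD = 0.75, scaled to a
# full-scale propeller of diameter 4.0 (same units throughout):
print(full_scale_speed_ratio(0.75, 4.0))    # 3.0
```

The point of the sketch is only that exhaustive testing grows exponentially in the number of parameters, while the similitude relation lets a single family of model tests stand in for the full-scale design.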
Thus even though in later tests Durand and Lesley extended the number of shape parameters, including three values of the mean pitch ratio, yielding nearly eight thousand possible propeller designs, they did not build and test the required eight thousand scale models; instead, as "judicious sampling became necessary," they only constructed an additional fifty scale propellers which they then added to their study (151-152).103

101 Or rather, engineering is no more simply understood as means-end reasoning than most of the empirical sciences; also see Vincenti (1990, 160-6).
102 Moreover, even for analytic or computational approaches to the problem, depending on how large the problem space is there is no guarantee that an algorithm exists which will solve the problem (if ever) in a reasonable amount of time (e.g. using linear programming algorithms to traverse the problem space). Instead one would need to turn to certain "sub-optimal" algorithms which will differ in their respective benefits and costs (e.g. see Simon, 1996).
103 Of course, similar problems of simulation and optimization crop up in the empirical sciences, especially physics and biology. Relevant here is the discovery of Monte-Carlo methods to "randomly" search through state spaces in physics; e.g., see Galison (1997).

After they amalgamated their various reports into a single, more comprehensive report, the result was the discovery of the following empirical generalization:

To optimize propeller performance at a fixed flight condition, one value of pitch ratio will suffice. The designer need only calculate the value of V/nD for that condition and select from the data the pitch ratio giving maximum efficiency at that value (interpolating between curves if necessary). (152-3; see figure 5-5 in Vincenti 1990, p. 153)

The result is a certain trade-off between propeller designs: although an aircraft with a certain propeller will be more efficient at higher speeds, i.e.
for high values of V, the same aircraft may be less efficient at lower speeds because the propeller is less efficient at lower values of V. Thus, because the values of n and V vary all the time during normal flight conditions, the most efficient propeller is not a propeller fully specified by some value of r at all: instead it would be a propeller designed so that the mean pitch ratio of the propeller could be modified in flight. Although, as Vincenti notes, Durand and Lesley did provide a model in 1918 for such a propeller, the technology required to construct so-called variable-pitch propellers only became available in the 1930s (153). But even with the discovery of the variable-pitch propeller solution, the aeronautical engineer can still provide no guarantee that a more efficient propeller design doesn't exist: with the emergence of new technologies and the growth of engineering knowledge there are infinitely many possible ways in which the design of the propeller could be changed, both radically and otherwise, to help maximize performance – especially as the design of airplanes themselves changes.

3.4 Changing Designs and Braking Barriers

I next turn to a case study from the history of automotive engineering which offers a better illustration of how engineering design problems can change over time; specifically, how design problems can transition from vague operational principles to well-formed technical and mechanical problems. This section elaborates on this point through the history of the development of anti-lock braking systems for automobiles as found in Johnson (2009).
Specifically, Johnson argues that engineering knowledge is developed co-extensively with the modifications of engineering communities, communities which are in turn formed around a volatile "attractor," or a "communally defined problem" (5).104 In particular, she discusses how anti-lock braking systems (ABSs) were developed to reduce the problem of vehicular skidding from the 1950s to the 1980s. After the Second World War the initial problem was to figure out how to mitigate the rising number of accidents and deaths due to the increase in the number of automobiles on North American roads (2009, 26). As Johnson points out, unlike possible socio-political interventions like preventing drinking and driving through better driving education, preventing skidding was a distinctively mechanical solution to the problem of mitigating accidents which could be tackled head-on by engineers (26). Thus the original problem of curtailing accidents quickly morphed into the more tractable problem of increasing the safety of vehicles by designing (preferably, profitable) automobiles that are less likely to skid. As Johnson illustrates in her book, however, providing a solution for how to reduce skidding turns out to be, from both a technical and conceptual standpoint, a very complicated problem.

First, there is the issue of how to define the problem of skidding: is it just a problem about the change of the coefficient of friction between a tire and the road, or is it a more holistic "interaction problem" between the car, the driver, tires and road?105 Without a well-defined statement of the problem, it is difficult to articulate a space of possible designs from which engineers can entertain which designs best satisfy the constraints of the problem.106 In other words, the engineers lacked an operational principle: only once a clear statement of the problem, and definition, of vehicular skidding was given could any sort of technical solution be proposed and implemented. Second, there is the issue of knowing what kind of instruments and tools can or should be used to help solve the problem. Johnson stresses, for example, that automotive engineers had the genuinely difficult problem of measuring, in real time, the deceleration of a tire (and, moreover, the simultaneous measurement or calculation of deceleration for all four tires) in order to provide a real-time measurement of skidding. It wasn't until the 1980s that digital sensors were able to provide the reliable, precise and, most importantly, real-time measurements required to estimate exactly when tires are skidding. Third, even if one could adequately measure the deceleration of a tire in real time, there is still the problem of designing a mechanical system

104 Importantly, Johnson's "attractor" framework is not a theory about how engineering communities form in the first place, but about how disparate engineers break off and cluster around a problem, an attractor, and how these engineers form a new sub-community engaged in finding solutions to the attractor (4-6). Immigration from other engineering communities is especially important because "[n]ew ideas and tools move into the community in part because participants move between communities, and ideas require human vectors" (6).
105 See Johnson, 2009, 42–4.
106 See Johnson, 2009, 105–7.
which can, in real time, modulate the brakes (which turns out to require many modulations per second) in order to change the friction coefficient of a tire and thus, ultimately, prevent skidding. Moreover, such mechanical systems have to modulate the brakes relative to real-time measurements of the other tires: any such system has to not only perform simultaneous measurements on each tire, but it also has to compare and perform calculations on those measurements in real time in order to help determine how to modulate the brakes for each tire. The upshot of this for understanding engineering, according to Johnson, is that "[a]t its core, ABS is a system for measuring, comparing, and responding" (80). Moreover, the story of how the field moved from the state of affairs at the end of the Second World War, when there were no engineers who specialized in automotive anti-skidding technology, to the state of affairs in the 1980s, when there were engineers who specialized entirely on ABSs, is a complicated one. Indeed, Johnson argues that in order to understand how the engineering knowledge concerning ABSs was developed, we have to look at how the skidding problem - along with skidding measuring instruments and technology - changed within separate communities of engineers and how various engineers from other communities became professionalized into a community of engineers with various kinds of expertise which focused on the problem of skidding. For example, Johnson emphasizes that, in several instances, aeronautical engineers initially had to provide their expertise concerning the braking systems of aircraft to those working on automotive skidding; in fact, sometimes these engineers migrated entirely from the aeronautical to the automotive engineering communities.
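Johnson's slogan that ABS is "a system for measuring, comparing, and responding" can be illustrated with a toy sketch. This is entirely my own construction, not a reconstruction of any historical controller: the slip threshold and the estimate of vehicle speed from the fastest wheel are invented simplifications.

```python
def abs_commands(wheel_speeds, slip_threshold=0.2):
    """One cycle of a toy measure-compare-respond loop.

    Measures each wheel, compares it against an estimated vehicle
    speed (here, crudely, the fastest wheel), and responds with
    'release' for any wheel slipping past the threshold."""
    reference = max(wheel_speeds)
    if reference == 0:
        return ["hold"] * len(wheel_speeds)
    commands = []
    for speed in wheel_speeds:
        slip = (reference - speed) / reference
        commands.append("release" if slip > slip_threshold else "hold")
    return commands

# Three wheels rolling at 30 units, one locking up at 20
# (slip of 1/3 on the last wheel, past the 0.2 threshold):
print(abs_commands([30, 30, 30, 20]))  # ['hold', 'hold', 'hold', 'release']
```

Even this caricature makes vivid why the real problem was so hard: the loop must run many times per second, on all four wheels at once, with sensors reliable enough to trust.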
More specifically, Johnson explains how anti-skidding devices had already been developed for the disc brakes of aircraft after the Second World War, and it was in Great Britain that these devices were hastily modified to work for automobiles. Unfortunately, not only were these early devices unreliable but they were also very expensive and thus were not mass-marketable.107 The problem of how to measure skidding was first tackled in Britain during the 1950s. The story Johnson tells is complicated, but the basic point is that there was, in general, disagreement on how to design the instruments for measuring the torque or angular velocity of tires, called dynamometers, as a means to calculate changes in deceleration in tires. These disagreements were generated because of differences of opinion concerning how automobiles work in the first place. Initially there were disagreements about whether laboratories or road tests should be used to test different braking designs (79; also see chapter five in Johnson 2009). It was during this time that some British engineers made, according to Johnson, the often "fatal" assumption that "all the wheels decelerated at the same rate" (66). It was not until the advent of electronics that skidding could be reliably measured, but even this was at first done using analog electronics which depended on vacuum power (76-77).108 Whereas British engineers tended to treat anti-lock braking systems as a modification to an already extant braking system, it was American engineers, working at companies like Ford and Chrysler, who designed braking systems to include ABSs.

107 Only expensive cars, like Rolls-Royces and Jaguars, used disc brakes in the 1950s, whereas cheaper cars used drum brakes; disc brakes only became less expensive in the 1960s (Johnson, 2009, 49-53, 70).
Specifically, Johnson talks about the kinds of competing design decisions engineers had to make concerning the design of a particular ABS, called "Sure-Brake," at Chrysler and Bendix.109 The sort of design questions raised about the "Sure-Brake" system are best understood as being posed relative to the design hierarchy for an automobile and not just a design hierarchy for a braking system. For example, consider the question about whether there should be sensors on just two or all four wheels. Two-wheel designs are cheaper, but are less effective. Four-wheel designs, on the other hand, are more expensive but offer better performance. There is also the choice engineers have to make about whether the sensors for measuring angular velocity should be mechanical or electric (using analog controls).110 Thus, the skidding problem - which was originally a problem about braking systems - is now a problem about how to design automobiles with an ABS. These ABS designs, however, could, and did, fail because of the unexpected physical limitations of the technology involved. For example, the analog sensors used for the Sure-Brake ABS turned out to be unreliable: salt on the roads corroded wires and radio/TV towers interfered with the analog sensors.111 Yet such failures are not always debilitating; in fact, failure is an essential part of engineering design – it weeds out those designs which are impracticable

108 And so after laws were passed in the United States stating that all cars should have catalytic converters, there was a reduction in the amount of vacuum power that could be allocated to analog electronics and thus engineers had to re-think how braking hydraulics should be powered (Johnson, 2009, 117).
109 See Johnson, 2009, 111 ff.
110 See Johnson, 2009, pp. 112-13 for a list of requirements engineers decided the Sure-Brake ABS should meet.
111 The sensors, Johnson reports, were discovered to fail because of "nighttime test drivers [playing] the radio while driving" (2009, 114).

or impossible.112 There are no a priori guarantees regarding which designs won't fail: instead, engineers have to try out particular designs in order to acquire the expertise and know-how needed to come up with new designs that will have better expectations of success. Despite the various attempts to better design ABSs like the Sure-Brake ABS in the 1970s, these systems were never an economic success. The cars were just too expensive; in fact, the first inexpensive and integrated ABS was only introduced into North American mass markets with the 1983 Ford Scorpio (133). The major breakthrough which allowed for the proliferation of inexpensive ABSs happened on the other side of the Atlantic in places like Sweden, France and West Germany: by the 1970s digital electronics had become much less expensive and many European engineers were gaining expertise in computer programming languages, like FORTRAN, which were necessary to implement the algorithms for performing the calculations required to compare the measurements from electronic dynamometers (98). Moreover, new technologies (again, from aeronautical engineering) made their way into automotive engineering. In this case, high-speed valves used for airplane instruments were borrowed by automotive engineers to quickly modulate brakes; these new valves could modulate brakes around 60 pulses per second – quite a dramatic increase over the 4-6 pulses per second used by earlier ABSs (122-3). The upshot is that each tire could have its own high-speed brake-modulating device, devices which could then be controlled using digital circuits. Moreover, these new technologies opened up new design possibilities: expectations were adjusted concerning what was theoretically and practically possible.
For example, Johnson claims that these engineers argued their "system was derived from theories of tire friction" and as such "[t]heir design goals were aimed at realizing what was theoretically possible according to theories of vehicle and tire dynamics, rather than seeking simple improvements over existing braking system technology" (124). The problem of skidding had shifted: due to new technologies, it was possible to measure wheel slippage directly from changes in tire deceleration while simultaneously modulating the brakes at very high speeds to prevent, and not just correct for, skidding (2009, 125-135). This was a technical achievement. The result was a piece of engineering knowledge. The relevant point of this history of ABSs is this: Johnson's historical work and analysis allows us to see how a specific engineering problem, an "attractor," has changed over time as engineers create new technologies and reformulate the current problem into a new problem made tractable by the new technologies. The original engineering problem of making cars safer by trying to prevent them from skidding morphed into a more well-defined problem: namely, the problem of figuring out how automobiles should be designed from the ground up so that they have a very specific and cheap kind of anti-lock braking system, i.e. an ABS built using a host of new technologies like quick-pulsating valves and electronic sensors. Moreover, these changes constitute a continual redefinition of the central operational principle at the heart of braking designs: the conditions of success and failure for braking systems changed as the technologies and practical limitations (like economic success) changed.

112 See Petroski (1992).
3.5 Herbert Simon and Satisficing

Engineering problems, like those we just saw from the history of aeronautical and automotive engineering, are rarely first articulated in the form of what the computer and social scientist Herbert Simon called "well-structured problems" (WSPs); namely, as problems sufficiently specified so as to make the finding of their solution obvious using some general method. Instead, most problems begin their life as "ill-structured" problems (ISPs) – as the "residual concept" of "a problem whose structure lacks definition in some respect" (1973, 181). Simon explains the transition (which he admits is a relation of degree rather than kind) from an "ill" to a "well" structured problem as relative to a procedure for solving problems, whether it be the cognition of humans or an algorithm: a WSP is an ISP which has been reformulated, codified and altered so that it is now a well-defined problem for a specific problem solver (186).113 What is crucial to recognize is that there are at least as many ways of transforming an ISP into a WSP as there are ways to solve problems and that each such way provides us with a different perspective for how to visualize, so to speak, the potential layouts for the internal logic of an ISP. Humans and computer programs can be trained (at least in the case of machine learning algorithms) to use heuristic reasoning to play chess but they will rarely formulate and implement the same heuristics in the same kind of way. The method we use to solve a problem changes how we conceptualize and reformulate that problem. Consequently the task of figuring out how best to optimize an ISP is not always obvious – but most of the time we just need to find some way of transforming the ISP into a WSP whose solution is "good enough" for the task at hand. Instead of finding the globally optimal solution, e.g., in terms of finding the maximum and minimum values of some expected utility function over a complete space of possible states of the world, we instead "satisfice" by scaling back the requirements for what it would mean to find an acceptable solution, like calculating one's expected utilities over a limited set of plausible states of the world.114 "An earmark of all these situations," Simon later says,

where we satisfice for inability to optimize is that, although the set of alternatives is given in a certain abstract sense (we can define a generator guaranteed to generate all of them eventually), it is not given in the only sense that is practically relevant. We cannot within practicable computational limits generate all the admissible alternatives and compare their respective merits. Nor can we recognize the best alternative, even if we are fortunate enough to generate it early, until we have seen all of them. We satisfice by looking for alternatives in such a way that we can generally find an acceptable one after only moderate search. (1996, 120)

113 Simon was motivated in this article to explain how his General Problem Solver could possibly solve ISPs. Simon's work on artificial intelligence, however, is conceptually linked with his notions of "satisficing" and bounded rationality from his work on decision theory: his earliest work with Allen Newell on a program that finds logical proofs for Whitehead and Russell's Principia Mathematica, for example, found proofs by using "heuristic" rules to find "good enough" but not necessarily optimal proof solutions; e.g., see Newell and Simon (1956). For more on Simon's notion of "bounded rationality," see Simon (1957; 1996).

We have already seen several examples of how engineers can, firstly, transform a design problem into a hierarchy of more tractable and technically feasible engineering problems – this is the transition from an ISP to a WSP.
Secondly, we have seen how engineers, as with the case of propeller design, must settle for a "good enough" solution – they satisfice rather than impractically search for globally optimal solutions. Of course, I do not mean to suggest that satisficing or even the problem of design are unique to engineering – scientists trade in these concepts and issues too.115 But whereas the computer scientist runs up against the mathematical limitations of computability and complexity theory, and the behavioral economist the limited reasoning capabilities of decision makers, the design engineer has to make do with the limitations imposed by the results of the empirical sciences, contemporaneous engineering knowledge and the practical needs of their firms and companies.

114 Also see chapter 14 of Simon 1957.
115 The distinction between scientist and engineer is frequently blurred with the advent of large-scale, cooperative projects like the construction and operation of the Large Hadron Collider (also see Petroski 1992, esp. ch. 4). Vincenti (1990) also cites the example of one Irving Langmuir, who received a Nobel Prize in chemistry for his work at General Electric's Research Laboratory (227). Moreover, it seems reasonable to suggest that engineers and scientists routinely coordinate and transfer between them the knowledge required to produce the technological discoveries and products associated with R&D labs and companies like Bell Labs, RAND, Xerox, Hewlett-Packard, Microsoft and Google.
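Simon's generate-and-test picture of satisficing can be sketched as follows. This is a toy illustration of searching until an aspiration level is met, not a reconstruction of any of Simon's programs; the utility function, aspiration level and search budget are all invented for the example.

```python
import random

def satisfice(generate, utility, aspiration, max_tries=1000, seed=0):
    """Generate-and-test in Simon's sense: return the first
    alternative whose utility meets the aspiration level, rather
    than enumerating the whole space for a guaranteed optimum."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = generate(rng)
        if utility(candidate) >= aspiration:
            return candidate
    return None  # no acceptable alternative found within budget

# Toy problem: find any x in [0, 1] whose utility 1 - (x - 0.6)**2
# is at least 0.95 -- "good enough", not the true optimum at x = 0.6.
result = satisfice(
    generate=lambda rng: rng.random(),
    utility=lambda x: 1 - (x - 0.6) ** 2,
    aspiration=0.95,
)
print(result is not None)  # an acceptable alternative is found quickly
```

The design choice worth noticing is that the stopping rule lives in the aspiration level, not in the generator: lowering the aspiration is exactly Simon's "scaling back the requirements" for an acceptable solution.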
Nevertheless, while scientists are primarily concerned with getting at the truth, explanation, prediction or cultivating any number of scientific virtues, design engineers have an impressive degree of creative freedom afforded to them by the initial vagueness of engineering problems to find effective, but not necessarily optimal, ways to make something happen in the world according to some plan or schema.116 Or as Vincenti puts the point:

In general, all knowledge for engineering design (as well as for the engineering aspects of production and operation) can be seen as contributing in one way or another to implementation of how things ought to be. That, in fact, is the criterion for its usefulness and validity. (1990, 237)

Engineers and their clients are not bound by how things are but only by how things could possibly be: they are free to change the measure – by modifying the operational principle, the "ought" – by which engineering designs are evaluated as better or worse. In this sense, design engineering is as much a practical as it is a theoretical activity.

3.6 Carnap as Conceptual Engineer

The last section provides an interpretive framework, or a working analogy, which I use throughout the rest of the dissertation to describe Carnap's work on inductive logic.117 The language of explication, for example, parallels Simon's language of "ill-structured" and "well-structured" problems: the philosophical problem of clarifying and systematizing an explicandum like the logical concept of probability is analogous to an "ill-structured" problem, whereas the use of both syntax and semantics as tools to construct an explicatum like a quantitative concept of degree of confirmation is analogous to the use of problem-solving tools and methods to formulate a "well-structured" problem.
When a pure inductive logic is applied for use in the empirical sciences, like theoretical statistics or information theory, an inductive logic qua logic may have to be redesigned and extended by the logician to better meet the demands of these sciences: this 116 Of course, the same could be said for many experimental scientists, especially those actively engaged in the designing of experiments and scientific instruments, see Baird (2004); Radder (2003). 117 Of course, the engineering analogy is just an analogy: there will always be dissimilarities between explication and engineering projects. Nevertheless, I talk of analogies instead of metaphors in part due to the work of Mary Hesse who argues, in Hesse (1966), that at least for analogical reasoning using models in scientific contexts, an analogy between X and Y includes three kinds of components: how X is similar to Y, how X is dissimilar to Y and, most importantly, there are open (typically empirical) questions about whether specific parts of X are similar or dissimilar to different parts of Y. According to Hesse, it is this third component which is most important for analogical reasoning in the sciences: as we learn more about the "open questions" about either X or Y, we come to learn, by analogy, more about the other. For a more detailed treatment of analogical reasoning, see, e.g., Bartha (2010). is analogous to how the operational principle of a design hierarchy of problems can change over time, especially as new technologies emerge or with the increase of our scientific or engineering knowledge. Lastly, there is for Carnap no "correct" logic – there is no "correct" explication of a concept of logical probability, or inductive reasoning more generally, but only better or worse explications: this is analogous to engineers who satisfice rather than optimize.
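The contrast between satisficing and optimizing can be made concrete with a minimal sketch. This illustration is my own, not drawn from Simon or the engineering literature cited in this chapter; the toy "design space" of propeller pitch settings, the scoring function, and the aspiration level are all hypothetical choices made for the example.

```python
# Illustrative sketch: Simon-style satisficing versus exhaustive optimization
# over a space of candidate "designs". All names and numbers are hypothetical.

def satisfice(candidates, score, aspiration):
    """Return the first candidate whose score meets the aspiration level."""
    for c in candidates:
        if score(c) >= aspiration:
            return c
    return None  # no good-enough design found

def optimize(candidates, score):
    """Return the globally best candidate (requires scanning the whole space)."""
    return max(candidates, key=score)

# A toy design space: propeller pitch settings scored for efficiency.
pitches = range(10, 41)
efficiency = lambda p: 1.0 - abs(p - 27) / 30.0  # efficiency peaks at pitch 27

good_enough = satisfice(pitches, efficiency, aspiration=0.8)  # stops early
best = optimize(pitches, efficiency)                          # exhaustive
```

The satisficer halts at the first design that clears the aspiration level, while the optimizer must survey the entire space; when the space is large or the scoring is expensive, only the former is practically feasible – which is the point of the analogy above.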
Explication is for Carnap the ongoing, gradual process of improving a system of concepts designed specifically for clarifying the logical structure of scientific theories and concepts.118 This is a kind of conceptual engineering. But conceptual engineering differs from most professional engineering insofar as concepts need not be "tested" empirically. Indeed, for Carnap pure inductive logic is not tested directly but instead we must stipulate our own requirements and restrictions – our own operational principle – for what it means for an applied inductive logic to be successful. A logic can then be designed from the ground up to serve any number of scientific purposes just as an aircraft can be designed to serve any number of industrial or military purposes. The guiding idea for why this hierarchical account of engineering design is an appropriate interpretive framework for explaining and clarifying the philosophical significance Carnap himself assigned to his technical projects is that, as we saw in the previous chapter, he conceives of logical syntax and semantics as tools or instruments chosen not because of their correctness or due to some highly theoretical process of justification but rather because of their expected capacity to satisfy our intellectual ends. In the previous chapter we discussed a number of Carnap scholars who have already explained Carnap's mature philosophical method, with an emphasis on Wissenschaftslogik as spelled out in LSL, through the lens of conceptual or linguistic engineering. In this section I explain how this hierarchical account of engineering can accommodate many of the insights made by these scholars while providing an original take on how Carnap could be understood as a conceptual engineer. Before we move on, however, I want to make it perfectly clear that I am not claiming that Carnap understood himself as a conceptual engineer in the above hierarchical sense of engineering.
I also do not endorse in this dissertation any historical account of how Carnap himself understood the activity of engineering. Richardson (2013) has suggested, for example, that Carnap's talk of treating logic as an instrument or tool can be traced back to his interest in 118 Reck (2012) also emphasizes the point that, for Carnap, explication is a process. nineteenth century German-speaking metrology, Instrumentenkunde. This may very well be the case, but the engineering case studies I draw on are, for the most part, from the mid-twentieth century, and I make no claim whatsoever that the conception of engineering design as practiced by twentieth century professional engineers is at all similar (or dissimilar) to nineteenth century conceptions of design engineering or the making of scientific instruments. My treatment of Carnap as conceptual engineer is purely an interpretive gloss on his technical work, one meant to better explain his philosophical projects for contemporary philosophers of science. As we saw in the last chapter, Richard Creath has employed an engineering analogy to help explain why Carnap does not appeal to traditional philosophical notions of justification or intuition in his technical projects. More recently, he puts the point like this: philosophers can devise, refine, and explore a variety of conceptual or linguistic frameworks and test their suitability for various practical purposes. These frameworks are tools, so we do not have to prove that they are correct. Nor do we have to agree on which ones to use. We just have to be clear enough to see what follows from what. Then a new result, whether it is a newly clarified concept or a new theorem is a new and permanent and positive addition to our stock of tools. And Carnap can offer the preceding three decades and more in logic as an example of the sort of continuing progress that he is describing.
Logicians often disagreed about which systems to use, but they almost never disagreed about what were the results of another's systems. (Creath, 2009, 211) In place of philosophical arguments we instead can show how a system of concepts can be defined within a conceptual framework. The debate between Quine and Carnap, from Carnap's point of view, is a debate about logical preference and fruitfulness: after each of them has shown how to define a notion of analyticity in their own separate frameworks, they can then reformulate their disagreement as a disagreement about whether they prefer differing logical consequences of their logical frameworks or about how poorly one framework can be applied in the empirical sciences when compared to the other. As admirable as Creath's contribution is, however, it relies on a means-end notion of engineering: the job of the logician is merely to study an endless array of logical frameworks – to add to the current stock of tools – and it is up to the scientist to choose that framework from the current stock which seems to them most useful. However, as we will see in chapter 4, Carnap designs and constructs pure inductive logics with the aim of clarifying and systematizing the inductive concepts scientists already use. This means that there has to be some interplay, some communication, between logicians and scientists. It is for this reason, however minor a point it seems to be, that the hierarchical view of engineering design provides us with, I argue, a more apt analogy for Carnap's mature method than a means-end conception of engineering. For André Carus, however, this point is not so minor. Indeed, Carus – who in turn draws on the work of Howard Stein – emphasizes that explications exhibit a certain 'dialectical' property or feedback relation between ordinary and artificial, logical language. There is a give and take between the theoretical and practical.
Here is how Stein puts the point in the context of choosing a linguistic framework appropriate for theoretical physics: It may very well be – I am inclined to think it is – that the possibilities to be contemplated in a framework for a theoretical physics as we know it today or as it is likely to develop have to be restricted by the general principles of the theory itself – principles that one would be loth to call 'analytic'. This is a serious modification of Carnap's view. It locates fundamental theory change in change of framework, and therefore outside the scope of the sort of inductive logic Carnap was trying to construct – which itself would, of course, be internal to a framework. That, it seems to me, entails a development of Carnap's views in a direction that I should characterize as 'dialectical'; for it entails a certain blurring of the distinction, dear to Carnap, between the purely cognitive, or theoretical, and the practical. (Stein, 1992, 291; emphasis mine) One way of reading Carnap's distinction between the practical and theoretical in works like LSL is that the separation between the practical and theoretical must be sharp: there are practical decisions we need to make concerning how to set up a logical system and then there are theoretical questions we can formulate within that system. Alternatively, to adopt Carnap's language from his 1950 paper "Empiricism, Semantics and Ontology,"119 there are external, pragmatic questions about the choice of a framework and theoretical questions expressible using a single framework: there is no mixture of the practical and theoretical relative to the same framework.
Stein's point seems to be this: if no inductive logic, itself a part of a language framework, can fully characterize when theory change should occur – here understood as revisions to that framework – then such changes can only be made within a broader, more comprehensive framework: namely, the "framework" used by practicing physicists as a highly specialized combination of ordinary language and mathematics. But now practical questions about the choice of a language framework for a language of physics may in fact be influenced by the answers to theoretical questions formulated in a different framework, the framework used by physicists. Granted that this physical language framework will eventually influence which concepts physicists adopt when 119 See chapter 4, 112 ff., this dissertation. talking about physics, we then have a 'dialectical' relationship between ordinary and artificial language.120 In contrast to Carus and Stein I would argue that this mixture of the practical and theoretical does not constitute a major change to Carnap's mature position. In fact, as we saw in section 2.2 from the last chapter, Carnap, in LSL, never attempted to formalize inductive processes as P-rules in a logical system and in later chapters we will encounter numerous examples of how the practical and theoretical can "mix" in the transition from pure to applied inductive logic. Indeed, even during the heyday of Carnap's work on inductive logic he was never in the business of defining confirmation functions over entire scientific theories, like Einstein's general theory of relativity.121 My hierarchical conception of engineering design can help illustrate this aspect of Carnapian logic of science. I suggest that we can make sense of how theoretical assertions, especially from sciences like theoretical statistics, can influence the decision to design an inductive logic in a particular way through the lens of hierarchical engineering.
Lastly, I want to turn to my criticism of Hillier's notion of linguistic engineering from the last chapter. I argued against Hillier that Carnap does not, at least as a matter of principle, appeal to a notion of "fit" between the world and a linguistic model. But this raises an important question. In cases of engineering design it seems fairly obvious when engineering projects fail: planes can turn out to be slow and clumsy, budgets go from black into the red and bridges fall apart. Engineering success, it seems, is ultimately tied to empirical success. So if we take seriously the idea that Carnap's work on inductive logic can be fruitfully understood as an engineering activity, surely we need to measure the success of an inductive logic by its empirical success – probability theory, after all, is only as good as the successful empirical predictions it makes (just ask insurance adjusters and casino managers). Here we can reformulate Hillier's claim as follows: the success of a linguistic framework, understood as an engineering design, must ultimately be measured by some measure of empirical success. In chapter 5, I argue that this is not the case: for the case of how inductive logic can be applied to decision theory, Carnap uses rational decision theory as a kind of conceptual space by which inductive logics can be outfitted, so to 120 "The practical realm kicks back. Ordinary language is still to be overcome and improved, but is also [. . . ] the medium of practical reflection, the medium within which we choose among theoretical frameworks" (Carus, 2007, 21). 121 See Carnap, 1962b, 243–244. speak, to conceptually test ideal agents in hypothetical empirical situations – no actual empirical tests need ever be carried out. For Carnap, the success of a language framework need not be tied to its empirical success.
Instead, we have an example of how something like an operational principle is characterized methodologically – as the requirements an adequate inductive logic must satisfy – and this principle is then tested "conceptually," i.e., tested in the possible states of the universe according to a logical language. The result is a matter of finding an adequate enough inductive logic: it is a matter of satisficing.122 3.7 Conclusion In conclusion, the caricature of a philosophical method based on search – a method that seeks the correct answer to a philosophical question – outlined at the beginning of this chapter differs from philosophy as conceptual engineering in a way similar to how satisficing differs from optimization: the aim is to find a good enough solution relative to a practical set of criteria rather than guaranteeing that the "correct" solution will eventually be found. However, the kind of conceptual questions Carnap is concerned with, like how to clarify analyticity or a logical concept of probability, are inherently vague: not only do such questions fail to distinguish between different explicanda but they tend to unclearly mix psychology and logic. How exactly such concepts should be clarified is, for Carnap, more an expression of one's philosophical or scientific preferences than a search for some timeless truth. Carnap was not attempting to do the history of science or to do psychology: he only provided us with a method, a method which I suggest we can apply ourselves while incorporating norms and methods beyond that of Carnapian logic of science, including, perhaps, from the history of science or feminist critiques of science.
It is a method one can use, for example, to detach oneself from foundational worries about the epistemology and metaphysics of a logical concept of probability and examine logically constructed inductive concepts within some inductive logic: this is not a method to once and for all settle foundational questions but instead it is a method to help systematize and clarify some of the possible ways of thinking about probability and induction. Philosophers, for Carnap, are not priests: it is not our job to tell scientists or the layman 122 This point is similar, I think, to William C. Wimsatt's work on articulating a conception of philosophy fit for epistemically limited beings such as ourselves; see Wimsatt (2007). what they should or shouldn't do, what they should or shouldn't think or what they should or shouldn't value. Rather, our job is simply to pinpoint misunderstandings and to help facilitate useful and clear dialogue between interested parties. Conceptual engineering best captures, I suggest, the final evolution of Carnap's attitude of logical tolerance – an attitude which extends to any technical machinery: for Carnap, there is no a priori restriction on which conceptual resources may be modified – the very resources central to contemporary philosophy, like concepts of mental representation, propositional facts, semantic content and modal reasoning, are no more sacred for the Carnapian conceptual engineer than concepts of analyticity and logical probability. And I suppose it is in this sense that Carnap's mature philosophical attitude is more pluralistic, democratic and tolerant than those philosophical attitudes characterized by methods like conceptual analysis which would have us search until we found some truth about ourselves or the world.123 123 Compare, for example, Jeffrey's epigraph at the beginning of this chapter.
For more on whether explications could be an alternative to conceptual analysis, especially in collaboration with the X-phi literature, see Justus (2012); Shepard and Justus (2014). Also see Kitcher (2010); Kuipers (2007) and, hot off the press, Dutilh Novaes and Reck (2015). Chapter 4 Designing Inductive Logic It is conceivable that "you" could design a language so as to make Carnap's theory consistent with the one presented in the present work. All probability judgment would be pushed back into the construction of the language. Something like Carnap's theory would be required if an electronic reasoning machine is ever built. - I. J. Good, Probability and the Weighing of Evidence (1950) Carnap suggested that in epistemology we must, in effect, design a robot who will transform data into an accurate probabilified description of his environment. That is a telling image for modern ambitions; it also carries precisely the old presuppositions which we have found wanting. We can already design pretty good robots, so the project looks initially plausible. But the project's pretensions to universality and self-justification send us out of the feasible back into the logically impossible – back into the Illusions of Reason, to use the Kantian phrase. - Bas van Fraassen, "The False Hopes of Traditional Epistemology" (2000) When the mathematician and statistician I. J. Good says in 1950 that for a Carnapian inductive logic "all probability judgment would be pushed back into the construction of a language" he is attributing to Carnap what the contemporary philosopher of science Bas van Fraassen would later call a robot epistemology: The probabilistic judgments made by human reasoners are to be logically reconstructed in terms of the logical probability values cranked out by an adequate confirmation function – a function which is, as a matter of logical stipulation, completely well-defined for any possible experience an epistemic agent could ever encounter.
This account of Carnap's inductive logic isn't fanciful; Carnap did, after all, actively search for such a function. Nevertheless, I claim in this chapter that by the time Carnap wrote his monograph The Continuum of Inductive Methods in 1952, he guaranteed neither that such a fully adequate function exists nor, assuming it does, that we would ever find it.124 This situation is analogous to a hierarchical engineering design problem – a problem best approached by satisficing. Indeed, in his work on inductive logic Carnap only specifies the scaffolding, so to speak, of a 124 For further retrospective discussion of Carnap's inductive logic, see French (2015a); Jeffrey (1970; 1973; 1974; 1990; 1992a); Kuipers (1978). semantic system L with a fixed, but not fully specified, interpretation within which measure and confirmation functions can be defined using the semantic resources of L. Using this scaffolding, different inductive logics can be constructed and the requirements and restrictions placed on the semantics of L altered to satisfy certain practical needs. The construction of an inductive logic intended to explicate inductive reasoning is, for Carnap, as much a theoretical as it is a practical matter.125 The basic trajectory of this chapter is as follows. After a brief historical discussion of his work on inductive logic, I introduce the terminology and technical issues required to explain how Carnap constructs a single quantitative confirmation function called c∗. I then go on to recap how Carnap distinguishes a "pure" from an "applied" inductive logic in a way analogous to the distinction between mathematical and physical geometry. Afterwards I concentrate on Carnap's construction of a pure inductive logic and explain how he sees the role of inductive logic in possibly clarifying the problem of estimation, a problem which concerns the very foundations of theoretical statistics.
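For readers who want a preview of the technical material, the confirmation function c∗ mentioned above belongs, in the standard presentation found in the secondary literature, to a one-parameter family of confirmation functions. For a language with one family of k mutually exclusive predicates, given s observed individuals of which s_i fall under predicate Q_i, the degree of confirmation that the next individual is also Q_i is (s_i + λ/k)/(s + λ); c∗ is the special case λ = k. The sketch below follows that textbook presentation (the function names are mine, not Carnap's notation):

```python
# A minimal sketch of the λ-continuum of inductive methods, as standardly
# presented: one family of k mutually exclusive predicates, s observations,
# s_i of which fall under predicate Q_i.

def c_lambda(s_i, s, k, lam):
    """Confirmation that the next individual falls under Q_i: (s_i + λ/k)/(s + λ)."""
    return (s_i + lam / k) / (s + lam)

def c_star(s_i, s, k):
    """Carnap's c*: the special case λ = k, i.e. (s_i + 1)/(s + k)."""
    return c_lambda(s_i, s, k, lam=k)

# Example: 7 of 10 observed balls were blue (k = 2 colors in the family).
# λ = 0 recovers the "straight rule" (the observed relative frequency),
# while larger λ pulls the estimate toward the a priori value 1/k.
print(c_lambda(7, 10, 2, lam=0))   # 0.7
print(c_star(7, 10, 2))            # 8/12, roughly 0.667
print(c_lambda(7, 10, 2, lam=1e9)) # approaches 0.5
```

The parameter λ thus indexes how strongly the a priori "logical width" of a predicate outweighs the observed frequencies – the design choice at issue when Carnap, as discussed below, treats the selection of an inductive method as a practical matter.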
In particular, I discuss how Carnap investigates a particular continuum of inductive methods by constructing a parameterization of confirmation (and estimation) functions called the λ-system. I then discuss how Carnap attempted to use this λ-system to locate "optimal" estimation functions which are nonetheless biased (in the statistical sense of the term). I then argue that Carnap's attempt to find "optimal" estimation functions using the λ-system can be understood as a kind of engineering problem: it is an example of how an "ill-structured" problem can be transformed into a "well-structured" problem and this transformation process is better understood, I suggest, in terms of satisficing rather than searching for the truth. 4.1 Historical Background It will be useful to split Carnap's work on inductive logic into four periods. The first period coincides with the last half of his time at the University of Chicago from roughly 1941, when he first becomes interested in problems about probability and induction, until he leaves Chicago in 1952 to take up a visiting fellowship at the Institute for Advanced Study (IAS) at Princeton 125 That Carnap embraces a kind of pragmatism or voluntarism with his λ-system (explained below), however, is not lost on van Fraassen; see van Fraassen (1989, 176) and, for his own views, van Fraassen (1984).
University.126 It is during this time, from 1940 to 1941, that Carnap visited Harvard and, from 1942 to 1944, used a Rockefeller grant to temporarily relocate to Santa Fe, New Mexico where he could work without distraction on pure semantics, modal logic and inductive logic.127 This was a productive time for Carnap: in the 1940s he not only published three books on semantics (Carnap 1942; 1943; 1956, first published in 1947) and numerous papers on semantics, modal logic and inductive logic from 1945 to 1947, he also published what is arguably one of his best-known and most anthologized papers, "Empiricism, Semantics and Ontology." Finally, he also published both his probability book Logical Foundations of Probability (LFP) in 1950 and, in 1952, his monograph The Continuum of Inductive Methods (CIM). Although LFP was originally planned as a two-volume book called "Probability and Induction,"128 Carnap never managed to complete the second volume; nevertheless, a majority of the planned content for volume two ended up in Carnap's 1952 monograph, CIM. By and large, the technical developments I discuss in this chapter come from both LFP and CIM.129 What is also of note during this initial time period is that Carnap, prompted by criticism of his two papers on inductive logic in 1945,130 engaged in serious philosophical discussions about probability and inductive logic with his peers. For example, Carnap had extended discussions with Carl Hempel and Nelson Goodman in 1946 sparked by their initial conversations at the annual meeting of the American Association for the Advancement of Science (AAAS) held in St. Louis.131 Also of relevance to the historical context for this chapter is that, while at Chicago in the late 1940s, Carnap influenced a new generation of philosophers of science in North America to work on probability and inductive logic.
There he taught inductive logic from his apartment in 1948 to a group of students, including Abner 126 Carnap moved to the University of Chicago from Prague in the fall of 1935; for more details see Creath (1990a). 127 See Carnap, 1963a, 34–5, 41–2. For more on Carnap's visit at Harvard, see Frost-Arnold (2013). 128 LFP, vii. All references to LFP, including this one, are to the second, 1962, edition of the book. 129 It is important to point out that despite the fact that LFP is published before CIM, one shouldn't necessarily treat CIM as a modification of Carnap's views in LFP – this is because Carnap had a draft of CIM already in the summer of 1949; Carnap to Quine, Nov. 26, 1950; reproduced in Creath (1990a, 420-422). Apparently, Carnap had also worked on the idea of the lambda parameter as early as 1947 (Carnap, 1980, 93). 130 Carnap (1945a,b). 131 Carnap (1945b) is part of a three-volume symposium on probability and induction in Philosophy and Phenomenological Research (Vol. 5 (4) Jun. 1945; Vol. 6 (4) Sept. 1945; and Vol. 6 (4) Jun. 1946). The other contributors are Hans Reichenbach, Henry Margenau, Gustav Bergmann, Felix Kaufmann, Richard von Mises, Ernest Nagel, and Donald Williams. There is extensive correspondence resulting from the meeting between Carnap, Goodman and Hempel in St. Louis from 1946 to 1947 and the publication of Goodman (1946) at Carnap's archives at Pittsburgh; box 084, folders 14 and 19. Shimony, Howard Stein, John W. Lenz and Richard C. Jeffrey.132 And as we will see in the next chapter, Carnap's influence led to some important results in inductive logic and rational decision theory (most notably, results by John G. Kemeny, Shimony and Jeffrey).
The second time period is from 1952 to 1954 when Carnap was a visiting scholar at Princeton's IAS.133 The third time period is from 1954 until 1962, which spans from the moment he moved to UCLA until both his publication of "The Aim of Inductive Logic" in 1962 – the published version of the talk which he was personally invited by Patrick Suppes to give at the 1960 International Congress for Logic, Methodology and Philosophy of Science134 – and his retirement from academia.135 The fourth and last time period spans the rest of Carnap's life until his death in 1970, during which he routinely works on and amends his manuscript "A Basic System of Inductive Logic" – this manuscript is subsequently published, posthumously, in two parts, each in one of the two volumes of the periodical Carnap co-edited and planned with Jeffrey, Studies in Inductive Logic and Probability.136 In this chapter we will focus on the first two time periods, between 1941 and 1954 (the next chapter focuses on the third time period).137 In particular, we will be interested in how Carnap's work on pure inductive logic is informed by how it could be applied to help clarify the foundations of the empirical sciences. For example, Carnap spends part of his time at Princeton's IAS working on a semantic concept of information based on concepts from an inductive logic. However, much of this work was already completed from 1949 to 1951 and although he is invited to participate in a Cybernetics conference at Princeton in 1952, Carnap, for the most part, leaves it to Yehoshua Bar-Hillel (who previously held a research position at Chicago in 1950) to disseminate this work to information scientists.138 Instead, Carnap spends the majority 132 See Shimony (1992). 133 Apparently, Carnap's time at IAS was not better than his time at Chicago; as Carnap would later tell Quine in 1955, these two years were "somewhat difficult years for me" (Carnap to Quine, Sept. 22, 1955; In Creath 1990a, p. 440-1). 
Also of historical interest is a letter by Carnap's second wife, Ina, to John Kemeny in which she discusses her and Carnap's disillusionment with Chicago, their inability to stay at Princeton and the uncertainty of moving to UCLA (Ina to Kemenys, Feb. 28, 1954, RC 083-18-12). 134 April 25, 1960, Ina to Kemeny, RC 083-15-03. 135 Carnap occupied the same chair at UCLA previously held by Reichenbach, who had died from a heart attack in 1953. 136 See Jeffrey (1980); Jeffrey and Carnap (1971). 137 Although his later work is interesting and is in need of examination, I say relatively little about Carnap's final work on inductive logic in this dissertation; although see Skyrms (2012) and Zabell (2005). 138 Carnap tells Kemeny in 1952 that he intended this definition "(in January 1949) as an analogue to the statistical concept "amount of information" (see e.g. Wiener's Cybernetics, pp. 75 ff), replacing statistical probability by inductive probability" (Carnap is referring to Wiener, 1948). However, in the same letter, of his time in 1952 working out the mathematical details of his work on inductive logic with the mathematician John G. Kemeny.139 In particular, they worked on (i) simplifications of Carnap's continuum of inductive methods, the λ-system, (ii) less idealized inductive logics based on Kemeny's work on semantics and set-theoretic models and (iii) extensions of Carnap's work on inductive logic to analogical reasoning, including what Carnap and Kemeny called the "two"- and "many"-family problems.140 Although there is no attempt in this dissertation to discuss these results, it would be difficult to downplay the importance of Kemeny's mathematical contributions to Carnap's work on inductive logic. In 1959, for example, Carnap tells Kemeny that, Our meeting in Princeton was pretty much a miracle and revelation to me. In addition, it came just at a time when I had need and use for miracles!
(Carnap to Kemeny, May 5, 1959, RC 083-15-12) Tellingly, in the same letter Carnap admits that of all his peers working on probability and induction only Kemeny understood more mathematics than himself. Indeed, it is this collaborative work with Kemeny on inductive logic in 1952 that informs the vast majority of Carnap's later work on inductive logic, including Carnap's adoption of mathematical measure theory to define measure and confirmation functions in the "Basic System" manuscript. Nevertheless, in the 1950s, unless you were lucky enough to be included within a tight-knit community working on inductive logic – a community which included Jeffrey, Hempel, Kemeny and Feigl – you would have been unlikely to have had access to Carnap and Kemeny's most recent technical results in inductive logic. A solution to their two-family problem, for example, Carnap says that he "found no time" to work out the theory so instead Carnap "dictated [his] notes on six half-hour wire-spools and sent them to Bar-Hillel" (April 29, 1952; RC 083-18-20); the result is Carnap and Bar-Hillel (1952). Bar-Hillel was actively engaged for a while in presenting Carnap's work to a Cybernetics group at MIT (Bar-Hillel to Carnap, March 15, 1952; RC 102-02-102). Apparently, the reason that Carnap couldn't attend the Cybernetics conference at Princeton was because of problems with his back (Carnap to L. J. Savage, April 11, 1953; RC 084-52-22). Presumably, Carnap is referring to the 10th, and last, Macy's Conference held at Princeton in 1953; Bar-Hillel, however, did give a presentation on a semantic concept of information at this conference (Bar-Hillel, 1964, 11); for more on the Macy's conferences and the history of cybernetics, see Heims (1991). 139 Interestingly, before Kemeny and Carnap first meet at Princeton, through correspondence Hempel introduces Kemeny's work to Carnap and Carnap realizes that Kemeny's "index of caution" is actually the same as his λ parameter. 
(see Carnap to Kemeny, December 3, 1951, RC 083-18-30; Kemeny to Carnap, Dec. 10, 1951, RC 083-18-27). 140 Supposing the λ-system helps us represent the "one-family" problem of specifying the probability that the next ball pulled from an urn is blue, the two-family problem concerns how the λ-system should be modified to calculate probabilities concerning two modalities, or "families", like color and whether the ball is translucent or opaque. The "many-family" problem, then, concerns how the λ-system can be generalized to n-many families. Kemeny worked with Carnap to figure out how to quantify an inference by similarity or analogy in order to calculate these probabilities. was only first published, in German, in section eight of the B appendix to Carnap and Stegmüller (1959) and, in English, in Carnap's Schilpp volume.141 Moreover, the "Basic System" manuscript was nearly ten years in the making as Carnap first started sending out sections of the manuscript as he wrote them to a few select peers in 1959. Indeed, on one of the few occasions that Carnap actually discussed his recent technical work in person – a two-day workshop organized by Hempel at Princeton in 1965 made to coincide with Carnap's journey from Germany back to California – only a very limited number of scholars and graduate students were invited.142,143 But even at this meeting, Carnap was only interested in technical improvements to his recent work in inductive logic: as Hempel reports in a letter from 1965 to the participants, "Carnap told me that he would not want to discuss broader philosophical questions concerning inductive logic, but certain technical problems related to his more recent axiomatic work in this field."144 But before Carnap became solely focused on technical questions about pure inductive logic he was also concerned with showing how his work on pure inductive logic could be applied to the empirical sciences.
For example, while Kemeny spent the academic year 1953-1954 in England (and before Kemeny was hired away by Dartmouth's mathematics department in 1954), Carnap spent the majority of his time figuring out how his work on inductive logic could be used to construct an adequate explication of a semantic concept of entropy.145 Carnap finished a two-part manuscript on entropy in January 1954.146 Part one – instead of using the semantic 141 Carnap apparently had completed most of the "Replies" for that volume as early as 1958, but for a variety of reasons the volume was ultimately delayed until 1963, one year after Carnap's retirement from UCLA. Carnap actually "officially" retired from UCLA in 1958 at the age of 67 (Carnap was born in 1891), but was reappointed after that on a year-by-year basis (Creath 1990a, 445). 142 Sadly, aside from issues with Carnap's health, there is another reason for the decrease in his academic output. Carnap's wife Ina, who had been suffering from depression for some time, committed suicide on May 26, 1964 (Creath, 1990a, 39). Afterwards, Carnap went to Germany to visit his daughter from a previous marriage and his grandchildren. It is a testament to the friendship between Jeffrey and Carnap that Carnap arranged for a telegram to be sent to Jeffrey informing him of what happened the day after Ina's death (RCJ box 9, folder 9); indeed, the correspondence between Carnap and Jeffrey (including their wives) is quite regular from June 1957 until Carnap's death in 1970. 143 The full list of those invited is: Peter Achinstein, Paul Benacerraf, Herbert Bohnert, Herbert Feigl, Richard Jeffrey, David Kaplan, John Kemeny, Henry E. Kyburg, Hughes Leblanc, Richard M. Martin, Sidney Morgenbesser, Ernest Nagel, Robert Nozick, Hilary Putnam, Wesley C. Salmon, L. J. Savage, Abner Shimony and Wolfgang Stegmüller. 144 June 24, 1965; Hempel's Archives at Pittsburgh ASP.
145 According to Bar-Hillel (1964), Carnap first started worrying about entropy while at Princeton in 1952 after Carnap and Bar-Hillel discussed conceptual problems with John von Neumann's AAAS talk in St. Louis, which they both attended. In the talk von Neumann, according to Bar-Hillel, had apparently suggested "a triple identity between logic, information theory and thermodynamics" (Bar-Hillel, 1964, 11-12). For an extended discussion of these issues, including the differences between Carnap, Pauli and von Neumann's views on entropy and information, see Köhler (2001). 146 The manuscript consisted of two parts: Part I as "A Critical Examination of the Concept of Entropy in Classical Physics" and Part II as "An Abstract Concept of Entropy & its Use in Inductive Logic" (Carnap to Kemeny, Jan. 22, 1954; RC 083-18-13, pp. 2-3). notions of state-description and range – started out with the notions of the description of a micro-physical state, D, and the number, z(D), of descriptions "similar" to D, which are then used to define a concept of degree of order, a concept of disorder and then finally several different versions of a concept of entropy, S. He then used these concepts to characterize the different concepts of entropy introduced by two physicists, Boltzmann and Gibbs, in order to articulate why he thought the concepts found in physics textbooks unsatisfactory. Part two of the entropy manuscript is more theoretical: basically, Carnap generalizes Boltzmann's concept of entropy to talk about how a density function can be defined for the "volumes" of abstract "environments" in order to define an abstract concept of entropy, S∗∗. However, what is most interesting for us, as we will discuss in the last section of this chapter, is that Carnap then goes on to use this concept to define concepts of degree of order and disorder, from which particular measure and confirmation functions, m∗∗ and c∗∗, can then be defined (see Figure 4.6 on page 127). We already know from Carnap's autobiography that he discussed the first part of the entropy manuscript with physicists at Princeton – in particular, with Wolfgang Pauli, Leon van Hove and John von Neumann. Apparently, however, this meeting did not go too well. Although all three disagreed with Carnap, Carnap was frustrated that their criticisms, taken together, were not consistent. Nevertheless, in a letter to Kemeny, Carnap suggested that: my criticism is perhaps not valid with respect to what physicists actually do, in distinction to what they write in the books. I still believe that many of the customary formulations are quite questionable; but this fact in itself would not make my lengthy discussions worthwhile. (Carnap to Kemeny, May 29, 1954, RC 083-18-14) Carnap, however, didn't give up on the entropy manuscript; he sent copies of it to Abner Shimony in 1955 and Howard Stein in 1957, asking both of them for their advice.147 Interestingly, Carnap not only tells Stein that he "had to go back to studying statistical mechanics more closely than I had done in the time of my studying physics way back," but also that: Since the physicists did not understand my logical language, and since I was not completely sure of my physics, the ms. was laid to rest. (Carnap to Stein, August 29, 1957; RC 090-13-24) 147 Shimony to Carnap, July 9, 1955; RC 084-56-01 and Carnap to Howard Stein, August 29, 1957; RC 090-13-24. The reception of his work on entropy was not as successful as he would have liked and so Carnap, when he wasn't working on his autobiography and replies for his Schilpp volume, returned to working on inductive logic after relocating to UCLA.148 However, Carnap did not give up on the idea of applying his work on inductive logic to the empirical sciences.
But now instead of theoretical physics, Carnap became interested in applying his work on inductive logic to rational and empirical decision theory. Indeed, 1955 was a good year for the field of inductive logic as it saw the publication of results showing how to define a notion of rationality in terms of the "consistency" or "coherency" of beliefs (relative to a betting system) and to establish when one's degrees of belief must obey the probability axioms.149 Basically, as Carnap understood the situation after 1955, it was Ramsey and de Finetti who had shown, in Ramsey (1926) and de Finetti (1937), respectively, that a belief function is coherent only if it satisfies certain axioms of the probability calculus.150 More importantly, however, it was Kemeny who showed the converse (although Kemeny and Carnap would later learn that de Finetti had shown this much earlier): if a belief function satisfies the probability axioms, that function is coherent. Details aside, these results made up the theoretical backbone of subjective Bayesianism and "probabilism," or roughly the idea that rational degrees of belief should be cashed out in terms of probabilities. And this was all going on simultaneously with the rise of Bayesian approaches to probability and statistical inference prompted, for example, by the publication of Savage (1954). But this is a story best left for chapter 5.151 The primary reason for providing this short history before discussing Carnap's inductive logic is to illustrate that Carnap's work on inductive logic is not a single technical project; rather, it is a research program: based on the discovery of an adequate inductive logic, an 148 While at UCLA, besides Haim Gaifman who was around in the late 1950s, Gordon Matthews and John L. Kuhns worked with Carnap as research assistants from 1955 to 1962 (Carnap to York, April 28, 1965; RC 082-23-07).
Most interesting is that Carnap worked with Matthews and Kuhns on a computer program to calculate different confirmation functions, for different values of λ, and the printed outputs of this program can be found in Carnap's archive in Pittsburgh. 149 See Kemeny (1955), Shimony (1955) and Lehman (1955). 150 de Finetti's paper is translated into English in Kyburg and Smokler (1964). 151 Understanding the relationship between "necessitarians" like Carnap and "subjectivists" like Savage or de Finetti is a complicated question, not only from a contemporary point of view, but for Carnap and his peers as well (e.g. Carnap tries to suggest that there really isn't much difference between the views of de Finetti and himself in a letter to de Finetti, July 30, 1963, RC 084-16-01; also see de Finetti to Carnap, October 27, 1961; RC 084-16-02). For example, in 1952, Savage even points out to Carnap the similarity between Carnap's λ-system and work by Bruno de Finetti on "equivalent" (i.e., what de Finetti later calls "exchangeable") events (L. J. Savage to Carnap; Feb. 24, 1952; RC 084-52-25). For a more precise treatment of these similarities, see Zabell (2005) and Good (1965). entire conceptual edifice is to be constructed, an edifice conceptually tied to the foundations of information theory, statistics and physics.152 I argue that such a research program should properly be seen as a kind of conceptual engineering: it is the construction of an inter-connected and hierarchical logical system which can be redesigned and interpreted to be of use to scientists, especially a new breed of social scientists studying decision-making under uncertainty. Before we can begin to understand the inner structure of this program – a journey that will take us the entirety of the rest of the dissertation – we have to start, so to speak, from the ground up: with semantic measure and confirmation functions.
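The coherence results of 1955 mentioned above turn on the Dutch-book idea, which can be illustrated with a toy case. The following sketch is my own illustration, not drawn from the 1955 papers: an agent whose betting quotients for A and ¬A do not sum to one can be booked into a sure loss, whichever way A turns out.

```python
def sure_payoff(b_A, b_not_A):
    """Agent's guaranteed net payoff when a bookie trades unit bets on A and
    on not-A at the agent's own betting quotients, choosing whether to buy
    or sell so as to exploit any incoherence.  0 means no Dutch book exists
    for this pair of quotients."""
    total = b_A + b_not_A
    if total > 1:
        # Bookie sells both bets: agent pays `total` now and collects
        # exactly 1 whether A is true (bet on A pays) or false (bet on not-A pays).
        return 1 - total
    elif total < 1:
        # Bookie buys both bets: agent collects `total` now but must pay
        # out exactly 1 in either outcome.
        return total - 1
    return 0.0

print(sure_payoff(0.6, 0.6))   # ≈ -0.2: incoherent quotients, sure loss
print(sure_payoff(0.5, 0.5))   # 0.0: the axiom b(A) + b(¬A) = 1 is satisfied
```

The Ramsey/de Finetti direction says violating the axioms guarantees such a loss; Kemeny's converse says satisfying them rules it out.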
4.2 Carnap's Confirmation Function c∗ Although Carnap delayed a more detailed discussion of c∗ until the second, never completed, volume of LFP, he explains the definition of this function in both the appendix to LFP and in Carnap (1945a). It is there that Carnap claims that c∗ is an especially good candidate to explicate the logical concept of probability. Nevertheless, Carnap is also quite clear that c∗, even if it is an adequate explicatum, may not be the only such explicatum (LFP 563). The reason why Carnap would think it an adequate confirmation function is that it has an interesting logical property: the definition for this function characterizes a single function – for all sentences h, e in the logical system on which the inductive logic is based, c∗(h, e) always has a unique and well-defined quantitative probability value.153 By contrast, most of the time when Carnap defines a measure or confirmation function the definition picks out a class of functions, a class which can then be made smaller by imposing stricter restrictions and requirements on the definition of a measure or confirmation function. The task of an adequate inductive logic, then, is to figure out what kind of requirements and constraints to impose on measure and confirmation functions so that we end up characterizing a single function. We next discuss how Carnap constructs c∗ using the semantic resources of a logical system. First we have to discuss the logical system itself. When it comes to a quantitative inductive logic, Carnap defines confirmation and measure functions as semantic functions (or really, "functors") 152 For a very different vision of Carnap's research program, but as a research program nonetheless, see Lakatos (1968). 153 Provided that e is not logically false, in order to avoid division by zero.
defined in the metalanguage (typically English plus a few Fraktur symbols) which give numerical values to pairs of sentences in the object language (54).154 Carnap defines the object language, L, as including the following logical systems (see LFP 55-60): • The infinite system L∞; viz. a first-order logic with identity and individual variables which contains both (i) an infinite sequence of individual constants, 'a1', 'a2', ..., and (ii) a finite number of primitive predicates of any degree (represented by capital letters, 'P1') designating properties.155 • For all positive integers N , the finite systems LN ; viz. those logical systems with the same finite stock of predicates as L∞ but containing only the first N individual constants from L∞. Crucially, L by itself is just a logical calculus: the named individual constants and the finite number of predicates and relations are so far left uninterpreted. For Carnap, however, an inductive logic is built, so to speak, on the back of a semantic interpretation for this calculus. Although the technical details can be found in Carnap (1939; 1942), the basic idea is that, in the metalanguage, a recursive definition of 'true in L' is given over the primitive terms of L (see LFP §17).156 However, Carnap does not demand that we provide a complete semantic interpretation for L at the outset; instead, only what he calls a "skeleton" of an interpretation is to be given initially (59). A complete inductive logic can then be constructed by filling in the details of this interpretation. For example, Carnap first assumes that whichever interpretation of L we adopt, it must satisfy what he calls the requirements of independence and completeness (which we will discuss below).
Second, technically speaking, the definitions of the measure and confirmation functions, just like the semantic rules of truth, are to be explicitly defined in the semantics of L.157 Lastly, the interpretation will specify what the individual constants and properties of L 154 See chapter VII of LFP for Carnap's work on a comparative inductive logic. 155 For the quantifiers, I use the symbols, '∀', '∃', and for the connectives, '¬', '∧', '∨', '→'. Note that I use the symbols '¬' and '∧' instead of '∼' and ''. 156 As an example of an interpretation of primitive axiomatic terms, Carnap mentions, for example, Reichenbach's Zuordnungsdefinitionen (LFP 16). Moreover, there can of course be several interpretations for the same axiomatic system: for Peano's axioms, for example, Carnap remarks that "[t]here is an infinite number of true interpretations for this system, that is, of sets of entities fulfilling the axioms, or, as one usually says, of models for the system" (LFP 17). Also see Nagel, 1939, 38-43. 157 Interestingly, Carnap points out that we could follow Keynes and Jeffreys in (implicitly) representing probability functions in terms of an operator in an intensional modal logic (LFP 280-1). will represent, e.g., space-time points and physical properties of objects, or perhaps organisms in a population and the individual fitness values for these individuals. Whatever the case, however, it is important to clarify that Carnap distinguishes practical questions about the construction of L and a semantics for L from methodological questions about which interpretation will be most useful for empirical investigations (see LFP §44).158 Before we can discuss these requirements of independence and completeness that the interpretation of L must satisfy, we need to introduce a bit of Carnap's technical terminology; viz.
the semantic concept of a state-description, a concept meant to explicate the notion of "possible cases or states-of-affairs" (LFP 71). Roughly speaking, the atomic sentences of L belong to the smallest set of sentences formed, for every predicate Pn in L of degree n, by applying Pn to any n-tuple of individual constants in L. A state description of L, then, is simply a conjunction which contains, for every atomic sentence of L, either that sentence or its negation, but not both (71-2; see D18-1).159 The set of all state descriptions in L, according to Carnap, then describes all the possible cases the "universe" could be in; relative, of course, to those atomic sentences in L representing the "basic events" of that "universe." The requirements of independence and completeness ensure that this is the case. The requirement of independence concerns the interpretation of the non-logical signs of L: simply speaking, this requirement states that all atomic sentences are pair-wise logically independent (72).160 The second requirement, the requirement of completeness, states that primitive predicates of L are "sufficient for expressing every qualitative attribute of the individuals in the universe of L, that is, every respect in which two positions in this universe may be found by observation to differ qualitatively" (74); or as Carnap alternatively expresses this requirement: if a system L is given and a universe, real or imaginary, is to be chosen as an illustration or model for L for the purposes of inductive logic, then this universe must be neither richer 158 For example, Carnap has an extended discussion of the construction of a deductive system L′ whose constants are interpreted as temporal series of events.
However, although such a system is far less idealized than the logic Carnap actually constructs, the drawback of such a language is that it is too complex to be of much use (at least when we are forced to work out the computational details for such a logic by hand) (LFP 62-5). 159 Although Carnap, at least in the 1950s, cashes out "possibility"-talk in terms of the sentences in a logical language, he reports that one could instead talk about propositions "provided it is done in a cautious way, that is to say, in a way which carefully abstains from any reification or hypostatization of propositions [...]" (LFP 71). 160 More specifically, it includes two clauses: first, that the individual constants in L "designate different and separate individuals" and, second, that the primitive predicates of L likewise represent pair-wise independent relations and properties (LFP 73). nor poorer in qualitative attributes than L indicates. (LFP 75) These two requirements are not metaphysical assumptions. They are instead empirically informed methodological restrictions the logician places on any adequate interpretation suitable for L.161 With these restrictions on the interpretation of L in place, we can now see how Carnap defines the semantic concepts of measure and confirmation functions. However, we first need to introduce another important semantic concept: the concept of the range of a sentence. For any sentence i in L, the range of i, call it R(i), is the class of state descriptions such that i "holds in" those state descriptions.162 It is with this semantic concept of the range of a sentence that Carnap, for example, defines the semantic L-concepts central to deductive logic, viz.
the semantic concepts of L-truth and L-entailment, which are understood by Carnap to be explications of analytical or logical truth and logical entailment, respectively (83).163 Likewise, the semantic concept of the range of a sentence in L is used to define the semantic concept of a measure function. Specifically, Carnap defines the regular measure functions m over the state descriptions in LN as those functions that satisfy the following two conditions: (i) for any state description ki in LN , the value of m(ki) is a positive real number and (ii) if γ indexes all state descriptions in LN , the sum of all m(kγ) is equal to one (D55-1; LFP 295).164 Carnap then extends this definition to define regular measure functions over the sentences in LN using the following two definitions: (iii) for any logically false sentence h in LN , m(h) is by definition equal to zero and (iv) for any non-logically false sentence h in LN , the value m(h) is by definition equal to the quantity ∑β m(kβ), where β indexes the state descriptions in the range of h, R(h) (D55-2; LFP 295). Finally, for any functions m and c defined over the sentences and pairs of sentences of LN , respectively, we say that c is based on m if the following holds: for any 161 Carnap will relax these restrictions later in the early 1950s; see Carnap (1951; 1952) and Kemeny (1953; 1956a;b). Carnap admits he always felt uneasy about the completeness requirement in Carnap (1963b). 162 The basic idea is that the range of the sentence i is the class of all those state descriptions consistent with the truth of i; "holds in" is defined recursively but we should not read "holds in" as being synonymous with true in some set-theoretic model; see D18-4 in LFP, 78-9, for the details. 163 See §19-20 of LFP for the details; roughly speaking, the sentence i in L is L-true if it holds in all state-descriptions (83). Likewise, for the sentences i, j in L, i L-implies j just in case R(i) ⊆ R(j) (83).
164 Note that, for the sake of readability and continuity between the notations in LFP and CIM, I will not always acknowledge explicitly the distinction between the metalanguage and L, e.g., instead of k Carnap uses the Fraktur symbol S to designate a definition in the metalanguage. The interested reader is invited to consult LFP for the technical details. sentences h and e in LN , if m(e) = 0 then c(h, e) is not defined and otherwise (D55-3, 295): c(h, e) = m(e ∧ h)/m(e). (4.1) Finally, c is a regular confirmation function if it is based on a regular measure function (D55-4). In order to finish our construction of c∗ we next need to introduce two more semantic notions: symmetrical measure functions and structure-descriptions. Carnap motivates the definition of a symmetrical measure function with an analogy to deductive logic: "we require that logic," says Carnap, "should not discriminate between the individuals but treat them all on a par; although we know that individuals are not alike, they ought to be given equal rights before the tribunal of logic" (485). He captures this idea of the "non-discrimination" of individuals in terms of the concept of isomorphic state descriptions.165 A symmetrical measure function m is defined as a regular measure function which assigns the same value to isomorphic state descriptions.166 A symmetrical confirmation function is then simply defined as a function based on a symmetrical measure function. Next, paraphrasing Carnap's technical definition, the structure-description corresponding to a state description ki in LN is the disjunction of all state descriptions isomorphic to ki (116). Then a structure description K in LN is simply defined such that there is a state description ki in LN for which K is the structure-description corresponding to ki (116).
The measure function m∗ is then defined as that function fulfilling both the condition that m∗ is symmetrical and the condition that m∗ gives the same numerical value to all structure descriptions in L (LFP 563). The function c∗ is then that confirmation function based on m∗. What is so nice about c∗ is that the unique numerical values of c∗(h, e) for any sentences h, e (where e is not logically false) can be directly calculated using a number of logical theorems. Nevertheless, it took a lot of work to get here. Not only did we have to make choices about the logical syntax of L, we also had to make assumptions about the interpretation of L and then place further restrictions on our definition of measure functions, viz. that it is regular, symmetric and assigns the same value to structure descriptions, before we could define an adequate confirmation function. These are all practical choices: we could have chosen to use 165 Simply put, two state descriptions ki and kj are isomorphic if there is a one-to-one correspondence R mapping the individual constants of LN onto themselves such that ki is the result of applying R to all the individual constants in the sentence kj ; for a more explicit definition see (LFP, §26) and (Carnap 1945a, 79-80). 166 See §90, especially D90-1,2. different definitions for measure and confirmation functions (i.e., by using different semantic resources of L) or to place different restrictions on the interpretation of L. Moreover, these choices are not merely choices about "linguistic frameworks," e.g., between whether to choose the logical syntax and semantics of a system like L against the possibility of dealing with, perhaps, higher-order logical systems.
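The whole construction can be checked computationally for a toy language. The sketch below is my own illustration (not Carnap's code): it takes an LN with a single one-place primitive predicate P, represents state-descriptions as tuples of truth values, defines m∗ by dividing measure equally among the N+1 structure-descriptions and then equally among their isomorphic members, and computes c∗ via equation (4.1).

```python
from itertools import product
from math import comb

def m_star(state):
    """m* for a one-predicate L_N: each of the N+1 structure-descriptions
    (grouped by how many individuals bear P) gets measure 1/(N+1), divided
    equally among its isomorphic state-descriptions."""
    N = len(state)
    j = sum(state)                     # number of individuals bearing P
    return (1 / (N + 1)) / comb(N, j)

def m_of(sentence, N):
    """Measure of a sentence = sum of m* over the state-descriptions in its
    range, i.e. those state-descriptions in which the sentence holds."""
    return sum(m_star(s) for s in product([True, False], repeat=N) if sentence(s))

def c_star(h, e, N):
    """c*(h, e) = m*(e & h) / m*(e), per (D55-3); defined only if m*(e) > 0."""
    return m_of(lambda s: e(s) and h(s), N) / m_of(e, N)

# L_3: individuals a1, a2, a3; state s = (P(a1), P(a2), P(a3)).
# Evidence e: a1 and a2 are both P.  Hypothesis h: a3 is P.
e = lambda s: s[0] and s[1]
h = lambda s: s[2]
print(round(c_star(h, e, N=3), 6))   # 0.75
```

The result agrees with the closed form for a one-predicate language, (s + 1)/(n + 2) = (2 + 1)/(2 + 2): different choices at each step (regularity, symmetry, equal values for structure-descriptions) would have yielded a different function.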
Instead, the inductive logician has to make very specific decisions about how measure and confirmation functions should be assigned their quantitative values, like whether a restricted principle of indifference should be adopted to assign probability values to the sentences of L. Moreover, the making of such decisions is in no way epistemological or metaphysical. As far as Carnap is concerned, he is just constructing a logical system L and suggesting possible semantic interpretations for this system. It is a purely logical activity. Now, according to Carnap, questions about which interpretations are useful or how to apply such a system to tackle some empirical problem using inductive reasoning are, indeed, methodological questions. It will be useful at this point to distinguish, to use Carnap's terminology, between two "problems" for inductive logic, i.e., "pure" and "applied" inductive logic. The relationship between pure and applied inductive logic, Carnap points out, "is somewhat similar to that between pure (mathematical) and empirical (physical) geometry" (1971b, 69). For the case of mathematical geometry, according to Carnap, "we speak abstractly about certain numerical magnitudes of geometrical entities" and then prove theorems about those entities (69). However, according to Carnap, no "procedure of measuring these magnitudes" is provided; instead, such questions belong to physical geometry, the task of which is "to lay down rules for various procedures of measuring length, rules based partly on experience and partly on conventions" (69). Likewise for pure and applied inductive logic. In pure inductive logic, all we do is provide a logical system, together with rules for defining measure and confirmation functions, but without an interpretation of the non-logical constants.
Applied inductive logic, on the other hand, is concerned with providing an interpretation of this logical system, i.e., we provide rules for interpreting the individual constants and primitive predicates of the logical system as, for example, a system of gas particles for which relations like density can be defined over collections of individual gas particles in this system. Moreover, we may also wish to interpret the measure and confirmation functions themselves. Indeed, in the next chapter, we will discuss in detail how Carnap gave what he sometimes calls a "quasi-psychological" interpretation to measure and confirmation functions as credibility and credence functions (Carnap, 1962a, 303; 1971b). As Carnap later puts the point, In applied IL, the theorems [from pure inductive logic, or IL CFF] are used for practical purposes, e.g., for the determination of the credibility of a hypothesis under consideration in a given knowledge situation, or for the choice of a rational decision. Justifying an inductive method or, specifically, offering reasons for the acceptance of a proposed axiom, is a kind of reasoning that lies outside pure IL and takes into consideration the application of C-functions. What is relevant in this context is not merely the consideration of actual situations, but rather that of all possible situations. (1971b, 105) Notice here the distinction between providing reasons for or justifying an applied but not a pure inductive logic. Indeed, Carnap suggests the case is similar to deductive logic.167 The analogy is this: on the one hand, there are the problems involving inductive logic and "methodological problems and, more specifically, problems of the methodology of induction" and on the other hand, there is a pair of problems involving deductive logic qua a field of pure mathematics and the activity of carrying out the "procedures of deductive logic and mathematics" (LFP 202-3).
More explicitly, methodological rules include not only useful rules of thumb or hints for using an interpreted logic, including theories of approximation and the like (203), but also rules laying down requirements for an adequate interpretation. The requirements of completeness and independence, for example, are such methodological rules (73). This also includes rules detailing how inductive logic may be used; for example, Carnap's principle of total evidence, i.e., that "the total evidence must be taken as a basis for determining the degree of confirmation," is itself a methodological rule (see §45B, especially p. 211). Moreover, in light of this distinction between the problems of a pure and applied inductive logic, Carnap is well aware of the fact that what he calls the "application" of logic, including inductive logic, to the activities of scientists "involves a certain simplification and schematization of inductive procedures" (209). More specifically, Carnap says that the application of inductive logic involves what he calls an "abstraction"; namely, that we abstract away from the actual vague or inexact concepts found in scientific practice and instead assume that "we deal only with clear-cut entities without vagueness" (209). Carnap's language here is reminiscent of Herbert Simon's distinction between "ill-structured" and "well-structured" problems we encountered in 167 Carnap employs a similar analogy between logical syntax and geometry in section 25 of LSL. the last chapter. Just as the formulation of "well-structured" problems frequently must impose some kind of extra structure on the original problem, Carnap is also cognizant that such abstraction comes at a price: In any construction of a system of logic or, in other words, of a language system with exact rules, something is sacrificed, is not grasped, because of the abstraction or schematization involved. (LFP 210)
But Carnap is not arguing that there is some quantity of the physical world that cannot be faithfully captured by logical abstraction; "it is not true," continues Carnap, that there is anything that cannot be grasped by a language system and hence escapes logic. For any single fact in the world, a language system can be constructed which is capable of representing that fact while others are not covered. (LFP 210) Instead, the main restriction on the method of logical abstraction, according to Carnap, is that no single logical system can ever be expected to capture faithfully all facets of the world.168 However, logical abstraction can only get us so far; after all, the point of logic, Carnap tells us, is not merely to clearly express facts but rather to help inform practical decisions:169 The final aim of the whole enterprise of logic as of any other cognitive endeavor is to supply methods for guiding our decisions in practical situations. (LFP 217) Nevertheless, as the theory of inductive logic itself is in its infancy, Carnap argues we must start with an inductive logic based on simple languages, languages which can provide the basic scaffolding for later generations of logicians and philosophers to construct more complicated and realistic inductive logics that can then be more fruitfully applied to actual scientific problems (213-5). That inductive logic can be so schematized illustrates the conceptual importance of inductive logic as an instrument for informing practical decisions: whether it be for a farmer, insurance agent, engineer or physicist, "[t]he decisive point," says Carnap, "is that just for these practical applications the method which uses abstract schemata is the most efficient one" (218).170 Next we will turn to an example of how Carnap uses his work on inductive logic to try to make progress in science, using his work on confirmation functions to lay a single foundation for a theory of estimation in theoretical statistics.
168 See, for example, Carnap's discussion of using quadrangles to cover a circular area (LFP 210). 169 Carnap, however, adds in parentheses that "This does, of course, not mean that this final aim is also the motive in every activity in logic or science" (LFP 217). 170 Also see Carnap's discussion of a trade-off between "extroverts", or those that prefer the complexity of nature, and "introverts", or those that prefer the abstraction of schemata; in particular, Carnap says "it is clear that science can progress only by the cooperation of both types, by the combination of both directions in the working method" (218-9). 4.3 From Confirmation to Estimation Functions For Carnap, the importance of constructing an inductive logic on the basis of an adequate concept of degree of confirmation, e.g., a function like c∗, is not merely to explicate the logical concept of probability. We may also be interested in explicating other inductive concepts, including concepts of relevance, estimation, information and even entropy, which are conceptually tied to the logical concept of probability.171 Specifically, once an adequate explicatum for the logical concept of probability is found, this explicatum can then be used to construct adequate explicata for a host of related inductive explicanda. As Carnap puts it, the concept of degree of confirmation, understood as an explicatum for the logical concept of probability, is "the fundamental concept of inductive logic" (513). It is in this wider sense of explication – of explicating an entire system of concepts based on the explication of a single concept at the conceptual core of this system – that Carnap's work on finding an adequate quantitative inductive logic is an explication of inductive reasoning.
Yet finding an adequate explication of logical probability which could then be used to explicate an entire system of inductive concepts is not a trivial task; as Carnap puts it, we can only find such a concept by providing the right sort of reasons for adopting it, e.g., reasons like "the fact that in many actual or imagined knowledge situations the values of c are sufficiently in agreement with the inductive thinking of a careful scientist" (540). Turning our attention to the problem of estimation in theoretical statistics, Carnap says that the state of the field of theories of estimation, at least from the point of view of "treatises on probability and statistics," is a startling spectacle of unsolved controversies and mutual misunderstandings, all the more disturbing when we compare it with the exactness, clarity and possibility of coming to a general agreement in other fields of mathematics. (LFP 513) The problem of estimation is basically the problem of finding an adequate estimate, based on both an estimation function and past observations, of the value of some unknown physical quantity, or rather, an estimate of some parameter representing a physical quantity.172 As Carnap puts the point, one can think of an estimate given by an estimation function for a physical quantity as a sort of guess – not an arbitrary guess but rather a reasonable guess (512). 171 I don't discuss Carnap's work on relevance in any detail in this dissertation; see LFP, chapter VI. 172 For example, see Fisher (1922). Once found, such a concept will not only play an important role in everyday scientific activity but also a foundational role in any theory of rational decision making (LFP §§49-51; also see chapter 5 of this dissertation). But the problem, at least according to Carnap in 1950, is that there is no general theory of statistical estimation.
Rather, as Carnap notes, there are instead several competing theoretical accounts of statistical inference and estimation, including R. A. Fisher's work on maximum likelihood, Abraham Wald's work on statistical decision functions, Jerzy Neyman and Egon Pearson's statistical hypothesis testing relative to type I and II errors and Neyman's confidence intervals (515-518). Moreover, these statistical accounts of estimation functions are all based on a frequentist or statistical concept of probability. But then "[w]hy did statisticians," asks Carnap, "spend so much effort in developing methods of estimation, i.e., methods not based on a [logical concept of probability CFF]?" (518). The short answer, according to Carnap, is that because of the historical association of a principle of indifference (or principle of insufficient reason) with the logical concept of probability – a principle found to lead to contradictions by scientists as early as Carl Gauss – only a theory of estimations based on a frequentist concept of probability could possibly be adequate (518). In response, Carnap articulates two possible options. The first is to suppose that no adequate quantitative inductive logic will be found; then the methods developed by Fisher, Neyman, Pearson, and Wald or new methods of a similar nature are presumably the best instruments for estimating parameter values and testing hypotheses. They are ingenious devices for achieving these ends without making use of any general explicatum for [the logical concept of probability CFF], as far as the ends can be achieved under this restricting condition. (518) Alternatively, however, suppose that an adequate inductive logic is found, viz. an inductive logic which does not depend on any unrestricted application of the principle of indifference. Then, according to Carnap, the main reason for developing independent methods of estimation and testing would vanish.
Then it would seem more natural to take the degree of confirmation as the basic concept for all of inductive statistics. (518) The question of which of these two alternatives is more likely is connected to a problem Carnap had raised a few pages earlier in LFP. The unsatisfactory state of the theory of estimation is due to a problem that besets most theoretical fields in science: "any procedure of estimation depends upon a choice, which is a matter of practical decision and not uniquely determined by purely theoretical, logico-mathematical considerations" (514). As Carnap points out, many procedures of science involve such a choice, like choosing a geometry for physical space. However, what is advantageous about the question of whether we can find an adequate inductive logic that could be used as a basis for a theory of estimation is that, says Carnap, "only one fundamental decision is required" (514). As Carnap continues: As soon as anybody makes this decision, that is to say, chooses a concept of degree of confirmation which seems to him adequate, then he is in the possession of a general method which makes it possible to deal with all the various problems of inductive logic in a coherent and systematic way, including the problems of estimation. Thus this method helps to overcome what seems to me the greatest weakness in the contemporary statistical theory of estimation, namely, the lack of a general method. (514) This passage offers considerable insight into Carnap's understanding of the theoretical issues at hand.
By reconstructing the results of theoretical statistics and probability as depending on the choice of a single inductive concept of degree of confirmation, Carnap suggests that one could provide a grand foundation for all of statistics and probability – a general method capable of clarifying and systematizing inductive reasoning, including reasoning about how to construct non-arbitrary "guesses" or estimates for physical, but unknown, quantitative properties. It is in this way that Carnap hopes to contribute to the foundations of theoretical statistics. Now that we have a better sense of the potential import of Carnap's work on estimation functions, I next turn to the details of that work. Suppose, firstly, that R(u) is a discrete random variable representing the result of observing some physical magnitude, relative to the physical input u, which ranges over the possible values r1, r2, ..., rn and, secondly, that one of the ri is really the actual value of this physical magnitude. Provided we have evidence for previous instances of R(u), call it e, and that the sentences h1, ..., hn denote the (logically exclusive) hypotheses that the actual value of the unknown quantity is r1, ..., rn, respectively, then Carnap suggests we can define the estimate of R(u) as a weighted mean (where the weights are confirmation values). More specifically, assuming 'e' logically implies 'h1 ∨ ... ∨ hn', the estimate e is defined as follows (see D100-1): e(R, u, e) = ∑i [ri × c(hi, e)], for i = 1, ..., n. (4.2) Importantly, as Carnap will later show in Carnap (1952), this definition can be used to define unique estimation functions based on a particular class of confirmation functions.
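The definition in equation (4.2) can be made concrete with a minimal Python sketch; the function name and the particular values ri and confirmation values below are invented for illustration only:

```python
# Sketch of Carnap's estimate of a discrete quantity R(u) (equation 4.2):
# the mean of the possible values r_i, weighted by the confirmation values
# c(h_i, e). Names and numbers here are illustrative, not from Carnap.

def estimate(values, confirmations):
    """Confirmation-weighted mean over exclusive, exhaustive hypotheses."""
    if abs(sum(confirmations) - 1.0) > 1e-9:
        raise ValueError("c-values over the h_i must sum to 1")
    return sum(r * c for r, c in zip(values, confirmations))

# Three exclusive hypotheses h_1, h_2, h_3 about the actual value of R(u):
r_values = [0.0, 1.0, 2.0]
c_values = [0.25, 0.5, 0.25]   # hypothetical confirmation values on e
print(estimate(r_values, c_values))  # 1.0
```

The check that the c-values sum to one reflects the assumption in the text that 'e' logically implies the disjunction of the hi.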
Specifically, supposing we had a continuum of different confirmation functions to choose from and that we could define a unique estimation function based on each such confirmation function, we could then investigate how well particular estimation functions behave for different "states of the universe," or, to use a more formal mode of speech, for different state descriptions. Of course, Carnap would then need to have some notion of how "reliable" different estimation functions are. Although Carnap considers several different ways of explicating such a notion, I will cut to the chase and quickly discuss the explicatum Carnap focuses on (see LFP §100B and §102). Assuming that r is the actual but unknown value of the physical quantity measured by R(u), the error of the estimate e, or v, is defined as v(R, u, e) = e(R, u, e) − r. (4.3) As is standard (because the estimate of this error term is always zero), Carnap takes for the explicatum of the reliability of estimation functions the estimate of the squared error, f², i.e., the weighted average of these error functions, squared, where the weights are given, like above, in terms of confirmation functions.173 Importantly, the estimate of squared error is useful if the actual value of R(u) is genuinely unknown. However, one can easily calculate a value of r relative to some fixed state description. For example, suppose we assume that a single state description in L is the actual one; then, if R(u) is a measure of the frequency of individuals in that state description of which M holds, r is simply the actual frequency of M's in this state description.
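The error and squared-error notions can be sketched in the same style; again the function names and numbers are hypothetical, and f² is computed directly from its definition as the confirmation-weighted average of the squared deviations:

```python
# Sketch of the error of an estimate (equation 4.3) and the estimate of the
# squared error f^2. Values and confirmation weights are illustrative only.

def estimate(values, confirmations):
    return sum(r * c for r, c in zip(values, confirmations))

def error(values, confirmations, actual):
    """v = e(R, u, e) - r, for the actual but (normally) unknown value r."""
    return estimate(values, confirmations) - actual

def squared_error_estimate(values, confirmations):
    """f^2: confirmation-weighted average of (e - r_i)^2 over the hypotheses."""
    e = estimate(values, confirmations)
    return sum((e - r) ** 2 * c for r, c in zip(values, confirmations))

r_values = [0.0, 1.0, 2.0]
c_values = [0.25, 0.5, 0.25]
print(error(r_values, c_values, actual=1.0))       # 0.0
print(squared_error_estimate(r_values, c_values))  # 0.5
```

Note that f² is computable from e and the c-values alone, without knowing the actual value r, which is why it serves as the reliability measure when r is genuinely unknown.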
Then instead of explicating the reliability of an estimation function in terms of f², we can instead use the mean squared error, m², defined relative to r, to investigate the relative reliability of estimation functions relative to a fixed, completely known, state description.174 In section 4.5 of this chapter, we will see that Carnap uses this notion of the mean squared error and his λ-system to try and find "optimal" estimation functions. This work constitutes, I argue, one of the clearest examples of how Carnap uses his work on inductive logic to solve a foundational problem, and this process resembles a kind of conceptual engineering activity. 173 Specifically, f²(R, u, e) =Df e(v², R, u, e) = ∑i [(e(R, u, e) − ri)² × c(hi, e)]. 174 Relative to our current observed sample of s-many individuals, the mean squared error of e is defined as m²(e, r) = v² (Carnap, 1952, 56-59). However, before we can discuss that example we first need to examine the λ-system in detail. 4.4 Carnap's Continuum of Inductive Methods Carnap tells us in the opening pages of CIM that he is concerned with two kinds of inductive inference in the sciences. The first are inductive judgments whether to "accept or reject" a hypothesis based on prior and/or new evidence (CIM 3).175 More specifically, according to Carnap, an individual X "possesses" a method of confirmation if they can determine – even if "not necessarily by explicitly formulated rules" – some confirmation function c(h, e) such that the values of this function "represent" to X their degree of confirmation for the hypothesis h given the evidence e (4). The second kind of inductive inference is just what we have been discussing above: namely, the estimation of the unknown value of some physical quantity (3-4).
More specifically, an agent X "possesses" a method of estimation if they have some procedure for determining the values of the mathematical function e(rf,M,K, e) such that those values "[represent] to X the estimate of the rf of M in K on the basis of e" (4).176 Here rf denotes the relative frequency of some magnitude defined relative to M, the property of interest, and the class K the elements of which X has not observed and "is not described in e" (4). Ideally for Carnap, of all the possible c and e functions which practicing scientists could possibly "possess," we would find widespread agreement concerning their preferred inductive methods. Indeed, based on such a consensus, we would have more than enough reason to single out a unique confirmation function to serve as an adequate explicatum to construct an adequate inductive logic fit to guide most, if not all, rational decision making. This truly would be a kind of robot epistemology. Moreover, supposing we could pinpoint the inductive disagreements between scientists, Carnap suggests that this situation would be similar to a controversy surrounding the nature of deductive inference between intuitionist and classical mathematicians, i.e., a debate "based on different interpretations of the basic logical terms rather than as genuine differences in opinion, i.e., incompatible answers to the same question" (CIM 6).177 175 The language of acceptance/rejection is one Carnap drops in his later work, especially as a response to philosophers of science like Henry Kyburg Jr. who treat such notions as a part of epistemology and in need of explanation by articulating normative rules of detachment; for example, see Kyburg's article in Swain (1970), Carnap (1968b) and, for a general overview, Hilpinen (1968). 176 Generalizing from the example based on the random variable R(u) above. 177 Of course, the latter controversy is at the center of LSL. Carnap, however,
is not so sanguine that the inductive differences between practicing scientists can be so easily explained away. "If we look at the contemporary situation in the field of logic, the theory of inductive inferences," says Carnap,178 we notice the remarkable fact that a variety of mutually incompatible inductive methods are proposed and discussed by authors of theoretical treatises and applied in practical work by scientists and statisticians. None of the authors is able to convince the others that their methods are invalid. I shall not try to decide the difficult question whether the situation in inductive logic is in this respect fundamentally different from that in deductive logic, including mathematics. [...] Whatever the solution of this philosophical problem may be, it seems to me that there can hardly be any doubt about the historical fact that, as matters stand today, the differences of opinion concerning the validity of inductive methods go much deeper and are much more extensive in their scope than the differences in deductive logic. (CIM 5-6; my emphasis) That different scientists prefer different, incompatible inductive methods – methods sometimes central to their understanding of scientific method and inference – is, for Carnap, a foundational problem in the sciences which is in need of philosophical attention. In deductive logic and mathematics, it seems only a minority of mathematicians reject the classical notion of logical implication (and consequence) in favor of intuitionist and other non-classical logics.
Indeed, in LSL, Carnap constructs two different logical systems, one classical and the other intuitionist, in order to evaluate and compare the logical consequences of each system; however, he takes for granted the full power of classical mathematics to do so.179 That's because the aim wasn't to convert those logicians who rejected the principle of the excluded middle; rather, the point was to illustrate how a plurality of logical systems could be constructed. With inductive logic, it seems we have the opposite problem. There already is a plurality of inductive methods, but it seems like there is little or no consensus regarding which particular methods are more or less satisfactory than the others. Troubling for Carnap, however, is the idea that this problem about the non-consensus of inductive methods goes deeper than the corresponding worry for deductive reasoning, where once a single notion of, say, logical consequence is adopted, alternative ways of spelling out the notion of logical consequence become "meaningless" (6). Instead, for the case of inductive reasoning, it seems that two scientists worried about the same hypothesis h and evidential basis e can both adopt their own inductive methods, methods which both parties consider to be perfectly reasonable, but nevertheless end up recommending entirely different confirmation values for h given e using their methods (6-7). The worry is that there is an inherent indeterminacy or subjectivity to the very nature of inductive decision making. 178 Note that "incompatible" inductive methods is a technical term for Carnap. The functions c1 and c2 are incompatible if, all relative to the same object language L, there exists at least one pair h-e such that c1(h, e) ≠ c2(h, e) (CIM, 5). Incompatible estimation functions are defined similarly. 179 For more details about this latter claim see, for example, Friedman (2009).
But what is the source of this subjectivity for inductive reasoning or judgments? On the one hand, Carnap suggests that perhaps these inductive differences are "merely a matter of historical contingency due to the present lack of knowledge in the field of inductive logic" (7). Indeed, if this were the case "it would be conceivable," says Carnap, "that at some further time, on the basis of deeper insight, all will agree that a certain inductive method is the only valid one" (7). The initial stumbling block of there being scientists who find it reasonable to prefer competing inductive methods will eventually be overcome once we discover an inductive method which all scientists could simultaneously endorse (e.g., the inductive method corresponding to c∗). Carnap presumably has in mind here scientists like Keynes and Jeffreys who argue that, just as deductive logic is to be based on general epistemological principles or postulates, an objective, or rational, probabilistic relation p(e/h) relative only to the meaning of two propositions e, h is similarly based on general epistemological principles or postulates. However, unlike for classical logic, for which propositions or sentences are assumed to be truth-functional, Keynes argues – especially in cases drawn from the use of probabilistic reasoning in law and gambling – that not all probabilities have sharp, quantitative values, while Jeffreys appeals to controversial symmetry principles to guarantee that probabilities do, as a matter of epistemic stipulation, have sharp, well-defined values.180 Alternatively, one can embrace the subjective nature of inductive reasoning, for example, by assuming that this rational probability relation p(e/h) is also a function of some mind or agent; e.g., as I. J.
Good puts it, "you."181 Specifically, for probabilists like Ramsey, de Finetti or Good, terms like "degrees of belief," "belief," or "judgment" are treated as primitive notions relative to a subject or agent; probabilities are then to be measured or elicited relative to a system of bets, i.e., a system explicitly defined relative to "beliefs" or "judgments" underlying the actions, preferences or expectations of a subject or agent. Ideally, as a product of the subjective nature of scientific judgment, objective, inter-subjective relationships are then to be shown to hold for certain kinds of subjective probabilities (even if the subjective element of probabilistic judgments is never entirely eliminated).182 On the other hand, even though Carnap also considers himself to be constructing an objective concept of probability and, later, suggests confirmation values can be fruitfully interpreted with respect to a system of bets (see my next chapter), Carnap cannot simply ground inductive logic, as a piece of logic, with general epistemological principles or the empirical facts about the subjective judgments of agents. For to do so, presumably, would be to violate one of the central strictures of Wissenschaftslogik, viz. that a sharp line must be drawn between logical and empirical questions, a line which epistemological theories frequently blur. 180 Nagel (1939) also argues against the idea that degrees of confirmation or belief need always be quantitative. 181 According to Good, beliefs are a function of three variables: the propositions denoting what is "believed" and "assumed" and, thirdly, "the general state of mind [...] of the person who is doing the believing" (1950, 2). This person, says Good, is who "you" describes.
But this is why it is important to clearly distinguish between the explicandum and an explicatum: whether or not inductive methods are somehow inherently subjective or piece-meal is a thesis about inductive reasoning qua explicandum and not as an explicatum. Thus when Carnap considers the possibility of whether "the multiplicity of mutually incompatible methods is an essential characteristic of inductive logic" and says that, if so, "it would be meaningless to talk of "the one valid method"," Carnap is talking not about inductive logic as a piece of logic but rather about the inductive practices of scientists (CIM 7).183 Moreover, it is in virtue of this incommensurability between inductive methods that Carnap then suggests that the decision to adopt an inductive method over others is a practical and not a theoretical matter. More specifically, Carnap says this rejection of any talk about "the one valid method"184 [...] does not necessarily imply that the choice of an inductive method is merely a matter of whim. It may still be possible to judge inductive methods as being more or less adequate. However, questions of this kind would then not be purely theoretical but rather of a pragmatic nature. 182 Deriving some kind of objective, or rational, results for subjective probabilities is the entire point of so-called "Dutch book" and representation theorems more generally. Whether such results are "normative" in any strong sense, however, remains a controversial question (see Meacham and Weisberg, 2011). 183 Indeed, even today prospects for a truly general theory of inductive inference are dim. Although Carnap was aware of the similarities between finding effective or computable solutions to inductive and deductive problems, Putnam (1963) and, more recently, especially Kelly (1996) have done a great service by clarifying these similarities. A method would here be judged as a more or less suitable instrument for a
certain purpose. (CIM 7) Although Carnap leaves open the possibility of perhaps discovering this "one valid inductive method," he never tells us how we would know it if we stumbled across it, and throughout the rest of the text of CIM he discusses the decision to adopt inductive methods, characterized by the task to construct a continuum of inductive methods, in instrumental terms. Carnap's λ-system Part I of CIM is concerned with the provision of "a systematic survey of all possible inductive methods" in the form of a parameterization of confirmation functions which a scientist can use to help them make more informed decisions – it is a means to help explicate their inductive reasoning practices. In fact, Carnap distinguishes between two separate tasks. The first task, on the one hand, is to provide an ordering of inductive methods with respect to a linguistic parameter of a logical system. The second task, on the other hand, is just the inverse of the first: if everything turns out the way it is supposed to then from any given value of this linguistic parameter it should be possible to uniquely determine an inductive method (7-8). For recent attempts that study the limitations of using probability theory to capture inductive problems, see Earman (1992); Norton (2003; 2010). Alternatively, statistical and machine learning theory provides a slew of fruitful, technical frameworks for investigating the nature of induction in a more piece-meal fashion, e.g., see Bishop (2006); Hastie et al. (2010); Ortner and Leitgeb (2009); Vapnik (2000). 184 Carnap continues: "I shall not try to discuss this problem here, still less try to solve it; but I may indicate that at the present time I am more inclined to think in the direction of the second answer" (CIM 7). The "second" answer is in reference to this idea that there is no one valid inductive method – I invite the reader to read this as an implicit nod toward satisficing.
It will turn out that the λ parameter from Carnap's λ-system, a restricted continuum of inductive methods, satisfies both of these tasks.185 Moreover, aside from the fact that such a parameterization would then allow us to use the standard techniques from calculus and analysis to further investigate these inductive methods, such a system, says Carnap, would enable us not only to compare any two of the historically given methods in a more exact way than was possible so far but also to study new methods quantitatively. It would be easy to discover one or several new methods which fulfil any given condition or which are most useful for a specific purpose. (CIM 8) Carnap restricts his investigation to LπN, which is the same as LN above except with the added restriction that the only predicates are π-many one-place predicates.186 Carnap then introduces the following technical notions required to construct his λ-system. The Q-properties, as Carnap calls them, represent a collection of κ many exclusive and exhaustive predicates representing, so to speak, all the possible ways these π-many unary predicates can hold of the N individuals in a given state description.187 Then if M is any molecular property, i.e., a property formed using any of the π many predicates using the usual connectives, all occurrences of M in a sentence in LπN can be replaced by a disjunction of particular Q-properties or negations of Q-properties. Lastly, the number of the Q-properties in this disjunction required to replace M is called the logical width of M, which is denoted by w. 185 The question of how to generalize Carnap's λ-system is by no means trivial; see Good (1965); Kuipers (1978); Zabell (2005). 186 π is here a positive natural number assumed to be finite and larger than one.
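The combinatorics of Q-properties and logical width are easy to sketch; the following Python illustration (predicate names are just strings chosen for the example) enumerates the κ = 2^π sign patterns over the primitive predicates and counts the width of one molecular property:

```python
# Enumerating Q-properties: each Q-property decides, for every one of the
# pi primitive one-place predicates, whether it holds or is negated, giving
# kappa = 2**pi exclusive and exhaustive properties. Names are illustrative.
from itertools import product

def q_properties(predicates):
    return [" ∧ ".join(p if sign else "¬" + p
                       for p, sign in zip(predicates, signs))
            for signs in product([True, False], repeat=len(predicates))]

qs = q_properties(["P1", "P2"])
print(len(qs))  # 4, i.e. kappa = 2**2

# Logical width w of the molecular property M = P1: the number of
# Q-properties in the disjunction equivalent to M (those with P1 unnegated).
w = sum(1 for q in qs if not q.startswith("¬P1"))
print(w)  # 2
```
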
Carnap then restricts his investigation not to all possible inductive methods, but rather to those methods represented by regular confirmation functions, i.e., those c functions which satisfy conditions C1-5 in CIM.188 Nothing of importance will be lost if, from here on out, we discuss Carnap's λ-system in terms of a specific example instead of adopting Carnap's own, sometimes obscure, technical vocabulary. Let us interpret the language system LπN as describing an urn with N many balls and κ = 3 mutually exclusive color properties: 'QB', 'QG' and 'QR' for blue, green and red balls. If some sentence e is an evidential statement describing any sample of s many balls from the urn (where s < N), eQ is a conjunction of s many Q-properties applied to the s many balls in our sample.189 For example, suppose that from a sample of six balls – call them 'b1', 'b2', ..., 'b6' – we have the following evidential statement, eQ = 'QR(b1) ∧ QG(b2) ∧ QB(b3) ∧ QB(b4) ∧ QB(b5) ∧ QB(b6)'. In other words, in our sample – in sequential order – a red ball, a green ball and four blue balls were pulled from the urn. So relative to this sample of size s = 6, we can calculate the relative frequency for each Q-property: if si (i = 1, ..., κ) represents the number of balls in the sample that are Qi, then si/s is the relative frequency of Qi-successes in the sample of size s. In our example, with a slight abuse of notation, we have the si terms sR = 1, sG = 1 and sB = 4, and 187 More specifically, the κ-many Q-properties 'Q1', 'Q2', ..., 'Qκ', where κ = 2^π, are just those properties formed by conjoining, for each of the π primitive predicates, either that predicate or its negation. For example, if there are only two primitive predicates, P1 and P2, there will be κ = 2² = 4 Q-properties: Q1 = 'P1 ∧ P2'; Q2 = '¬P1 ∧ P2'; Q3 = 'P1 ∧ ¬P2'; and Q4 = '¬P1 ∧ ¬P2'. 188 See CIM, p. 42. For the definitions themselves, see page 12; a slightly abbreviated list is the following.
C1: If h and h′ are logically equivalent, c(h, e) = c(h′, e). C2: If e and e′ are logically equivalent, c(h, e) = c(h, e′). C3: c(h ∧ h′, e) = c(h, e) × c(h′, e ∧ h). C4: If e ∧ h ∧ h′ is logically false, then c(h ∨ h′, e) = c(h, e) + c(h′, e). C5: 0 ≤ c(h, e) ≤ 1. 189 It is merely an artifact of our toy example that we don't employ molecular properties to describe our sample, so that e is the same as eQ. so the relative frequencies of the red, green and blue balls in our sample are sR/s = 1/6, sG/s = 1/6, sB/s = 4/6. All this notation will come in handy in just a moment. However, we first need to introduce just a bit more technical terminology before we can talk about Carnap's λ-system. Besides requiring that the confirmation functions in our system be regular, Carnap imposes five more conditions on these functions, C6-10. Let hi (i = 1, ..., κ) designate the hypothesis that the next ball we see from the urn is the ith color. For example, again with a slight abuse of notation, hB says the next ball sampled from the urn will be blue. Finally, let ei be a transformation of the sentence eQ such that all Q-property conjuncts in eQ not equal to Qi are replaced with ¬Qi. For example, using our sample eQ from above, eB = '¬QB(b1) ∧ ¬QB(b2) ∧ QB(b3) ∧ QB(b4) ∧ QB(b5) ∧ QB(b6)'. It would be nice to ignore the order in which we see both the balls and color properties so that the numbers ⟨sB = 4, s¬B = 2⟩ capture, so to speak, all the information contained in our sample insofar as it is expressed by eB.190 In essence, this is what conditions C6-9 accomplish. Simplifying a bit, condition C6 states that, for all sentences hi and ei, the value of c(hi, ei) is the same for all the systems LπN, independently of N (given that i < N) (13). Conditions C7, C8 and C9 then make several symmetry assumptions about the individual constants and Q-predicates for our inductive system.
If c is in the λ-system, then C7 is just the assumption that c is symmetrical and C8 states that c is symmetrical with respect to permutations of the Q-properties (14). Lastly, C9 states that no information is lost with respect to c, relative to any molecular property M and the hypothesis hM, when we transform eQ into eM (14).191 Together, conditions C1-9 characterize all the confirmation functions in the λ-system. Specifically, Carnap argues in §4 of CIM that for any such function c in the λ-system, there exists (relative to LπN and some Q-property Qi) a characteristic function G such that G(κ, s, si) = c(hi, ei), for i = 1, ..., κ.192 Thus if different inductive methods are represented by different c-functions, then the values of these functions, with respect to the values s, si and κ, are given by the values of some characteristic function G(κ, s, si) (15-6).193 In §5, Carnap then provides a proof that for any given characteristic function G, that function uniquely determines the confirmation values for all sentence pairs h, e in the language system LπN (granted that e is not logically false).194 A similar result then also holds for estimation functions: all the estimation functions based on confirmation functions in the λ-system are also determined by the corresponding characteristic function (see CIM §6). This result marks the completion of the first task I mentioned above. By comparison, the second task is a bit more complicated. To recap what Carnap has done so far, it has been shown how different characteristic functions, G, G′, G′′, ..., each uniquely characterize a different inductive method in the sense that each such function uniquely determines the confirmation values for some confirmation function c in the λ-system for any hypotheses we can form about our urn of colored balls. 190 Of course, s = sB + s¬B. 191 In symbols, c(hM, eM) = c(hM, eQ).
Now the problem is to somehow parameterize this collection of characteristic functions with a single, logical parameter λ. Then, for some fixed logical system LπN, it would be possible to catalog different inductive methods based on this collection of G-functions, which we could use, for example, to calculate the probability that the next ball in the urn, given the sample eB, will be blue. In creating such a parameterization, Carnap says he will be "liberal in the admission of inductive methods to the projected λ-system" while also "exclud[ing] those methods which practically everybody would reject" (24). He does this by distinguishing between an empirical and a logical "factor" and then defines a confirmation function relative to λ, where λ reflects the different "weighting" given to these two factors. More generally, Carnap points out that if we could catalog G-functions relative to some variable x as a function of two quantitative parameters of G, say u1 and u2, where u1 < u2 and 192 See pages 14-15; the relevant results are (4-5) through (4-8). I am well aware of the fact that both Kemeny and Savage pointed out to Carnap that κ is independent of this function and G need only be defined in terms of s and si; see Kemeny (1963), the manuscript of which Carnap originally received in April 1954, for a simplification of Carnap's result using recursive functions, especially pp. 724-731. 193 However, as any such characteristic function G is defined from ℝ³ to (0, 1), there is no initial restriction on these functions; indeed, there will be G-functions which do not correspond to c-functions in Carnap's λ-system (CIM, 18). 194 For the technical details, see pages 17-18 and results (5-2) through (5-4).
Carnap's Continuum of Inductive Methods x ∈ (u1, u2), then we can always re-express x as a weighted average, x = W1 * u1 +W2 * u2 W1 +W2 , (4.4) where W1 and W2 are real-valued "weights" for the parameters u1 and u2.195 Returning to our example eQ above, suppose that we are interested in determining a value for the hypothesis hB that the next ball will be blue by first defining a continuum of inductive methods and then choosing one of these methods to determine a value for c(hB, eB). Carnap reasons basically as follows. Supposing that inductive methods can be characterized by the weight they give to a logical and an empirical factor, then if no weight is given to the logical factor then only the empirical factor matters. Let the empirical factor be the relative frequency; for our example it is the relative frequency of blue balls in our current sample, sB/s = 2/3. However, if all the weight is given to the logical factor, our empirical observations should have no influence on the probability value. Let the logical factor be the relative width, w/κ. In our example, because the property blue is a Q-property, the width of QB equals one, so the logical width is just 1/κ = 1/3.196 Substituting in both these logical and empirical factors for our end-points, i.e., u1 = sB/s and u2 = 1/κ, and the parameter x with the value of G(κ, s, sB), all that is left to do now is determine the weights W1 and W2. Carnap chooses the sample size s as the weight of the empirical factor. This choice, Carnap says, "requires no theoretical justification, since it does not involve any assertion" (27-8). Rather, all it requires is a practical justification; namely, that this choice "leads to an especially simple form of the parameter system" (28). The logical weight is then simply assumed to be the inductive parameter, λ.197 Plugging in our new values of u1 = sB/s, u2 = 1/κ, W1 = s and W2 = λ in equation 4.4, we have G(κ, s, sB) = (sB + λ * 1/κ)/(s+ λ) = (4+λ/3)/(6+λ). 
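The weighted-average construction can be checked numerically. The following Python sketch is my own illustration, not Carnap's notation (the function names `weighted_average` and `G` are mine): it computes G(κ, s, si) as the weighted average of the empirical factor si/s and the logical factor w/κ, with weights W1 = s and W2 = λ, and verifies it against the closed form (4 + λ/3)/(6 + λ) for the urn example (κ = 3, s = 6, sB = 4).

```python
# Hypothetical illustration of Carnap's weighted-average construction of G.
# The names weighted_average and G are mine, not Carnap's.

def weighted_average(u1, u2, w1, w2):
    """Equation (4.4): x = (W1*u1 + W2*u2) / (W1 + W2)."""
    return (w1 * u1 + w2 * u2) / (w1 + w2)

def G(kappa, s, s_i, lam, w=1):
    """Characteristic function: empirical factor s_i/s weighted by s,
    logical factor w/kappa weighted by lambda."""
    return weighted_average(s_i / s, w / kappa, s, lam)

# Urn example: kappa = 3 colours, sample of s = 6 balls, s_B = 4 blue.
for lam in (0.5, 1.0, 3.0, 10.0):
    closed_form = (4 + lam / 3) / (6 + lam)  # the form derived in the text
    assert abs(G(3, 6, 4, lam) - closed_form) < 1e-12
    print(f"lambda = {lam:>4}: c(h_B, e_B) = {G(3, 6, 4, lam):.4f}")
```

As λ grows, the computed value drifts from the sample frequency 2/3 toward the logical width 1/3, which is just the trade-off between the two factors the text describes.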
As we change the value of λ we get a different value of G(κ, s, sB), i.e., a different value of c(hB, eB). Generalizing, the result is the following equation, which characterizes a continuum of inductive methods with respect to the parameter λ, for 0 < λ < ∞ (λ = 0 and λ = ∞ are special, limiting cases):[198]

G(κ, s, si) = (si + λ/κ)/(s + λ). (4.5)

The smaller the logical weight, the more important the empirical factor, here the relative frequency. Thus as λ approaches 0, the value of G will approach the value si/s. The larger the value of λ, the less important the empirical factor becomes, and all G-values approach a fixed limit, regardless of whether new observations are made.[199] Provided that conditions C1-10 hold, along with a new condition C11, Carnap then goes on to show that the G-values determined by the above equation relative to the parameter λ also determine the values of particular confirmation functions for the hypotheses hi (or hM) given the evidence ei (or eM) and fixed values of s and si (30).[200] Finally, Carnap provides a general method to define, for any state description k in the object language, the value of m(k) as a function of products of G-values.[201] A confirmation function cλ, relative to λ, is then defined as that function based on this measure function. In this way, each value of λ in the interval (0, ∞) characterizes a specific confirmation function, a function which represents a unique inductive method.

[195] Assuming both (i) W1 + W2 = 1 and (ii) for the distance terms d1 = |x − u1| and d2 = |x − u2|, it is the case that d1/d2 = W2/W1.
[196] Condition C10 is just the assumption that w is equal to one; see pp. 26-7.
[197] Actually, Carnap treats λ as a function of κ, s and si.
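The limiting behaviour just described can also be sketched numerically. The code below is again my own illustration (with w = 1, as in the urn example): as λ → 0 the G-value approaches the straight-rule value si/s, as λ grows without bound it approaches the logical factor w/κ, and in between it moves monotonically from one to the other.

```python
# Sketch of the limiting behaviour of equation (4.5); the name G is mine.

def G(kappa, s, s_i, lam, w=1.0):
    """Equation (4.5), in the general form with logical width w."""
    return (s_i + lam * w / kappa) / (s + lam)

empirical = 4 / 6   # s_i/s for the urn example
logical = 1 / 3     # w/kappa

assert abs(G(3, 6, 4, 1e-9) - empirical) < 1e-6   # lambda -> 0: straight rule
assert abs(G(3, 6, 4, 1e9) - logical) < 1e-6      # very large lambda: logical factor

# Since the empirical factor exceeds the logical one here, G decreases
# monotonically toward w/kappa as lambda increases.
values = [G(3, 6, 4, lam) for lam in (0, 1, 10, 100, 1000)]
assert all(a > b for a, b in zip(values, values[1:]))
print(values)
```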
The equation we end up with is the familiar characterization of the λ-system:

cλ(si, s) = (si + λ/κ)/(s + λ). (4.6)

So now that we have a smorgasbord of inductive methods to choose from and investigate, how could we know which values of λ provide us with an adequate confirmation function? For Carnap the finding of an adequate cλ is not an isolated affair. However, when we have found such a function, we can then construct an inductive logic and, along with it, a theory for inductive reasoning in general, including reasoning about estimates of physical quantities using the function eλ, viz. that estimation function based on cλ. However, for Carnap, the decision to adopt a particular value of λ is not to be justified on the basis of some epistemological or metaphysical principle or argument; indeed, Carnap tells us that it "is fundamentally not a theoretical question" because theoretical questions are answered in the form of assertions, i.e., as true or false statements which, if true, "demands the assent of all" (53).

[198] More generally, G(κ, s, si) = (si + λ·w/κ)/(s + λ). The λ-values 0 and ∞ are not strictly speaking in the λ-system because they violate C1-9; e.g., cλ=0 isn't actually a regular confirmation function, but rather both c0 and c∞ are defined by limiting conventions; see CIM §§13-14.
[199] Indeed, this is the problem with Wittgenstein's inductive method in the Tractatus, which basically says all state descriptions have the same m-values; see CIM, pp. 39-40.
[200] Condition C11 states that, if c is in LπN, the quantity [s·c(hi, ei) − si]/[1/κ − c(hi, ei)] remains invariant under changes to s, si and the sentences hi and ei (29-30).
[201] More specifically, the measure of any sentence h in our logical system can be expressed as a function of the measures of all those state descriptions which hold of h, and so m(h) equals a product of the G-values for the measures of these state descriptions, for a fixed λ; see CIM §10.
Instead, the answer consists in a practical decision to be made by X:

A decision cannot be judged as true or false but only as more or less adequate, that is, suitable for given purposes. However, the adequacy of the choice depends, of course, on many theoretical results concerning the properties of the various inductive methods; and therefore the theoretical results may influence the decision. Nevertheless, the decision itself still remains a practical matter, a matter of X making up his mind, like choosing an instrument for a certain kind of work. (CIM 53)

The decision to adopt a value of λ, Carnap tells us, is practical – it is analogous to choosing an instrument to accomplish some task. Consequently, the choice of a value of λ is adequate, it seems, insofar as the resulting confirmation (or estimation) function satisfies our given purposes, like whether it provides us with an easy-to-use inductive logic or whether the resulting estimation functions satisfy any number of methodological considerations. Moreover, the mathematical consequences of adopting cλ and eλ for a particular value of λ may also influence our decision, for example if we can't use it to derive the statistical theorems (or if those theorems turn out to be trivial, e.g., when λ = ∞). Nevertheless, the decision to adopt a particular value of λ is itself not a matter of right or wrong but only of better or worse. For example, Carnap tells us that the agent X will have to decide whether or not λ is assumed to be a function of κ, and the answer to this decision – either 'yes' or 'no' – represents what Carnap calls methods of the "first" and "second" kind, respectively (see CIM §§15-16). Either choice will come with its own theoretical and practical consequences; e.g., one method may be more mathematically tractable than the other.
Disambiguating Practical Motivations

Before I discuss Carnap's work on finding "optimal" values of λ, we should pause for a moment to discuss another place where Carnap explicitly draws a distinction between the practical and the theoretical. While Carnap was working on CIM, between 1949 and 1951, he published "Empiricism, Semantics and Ontology," or simply ESO.[202] It is in ESO that Carnap attempts to intervene in a debate between professional philosophers – whom we may want to call "realists" and "anti-realists" today – concerning whether abstract entities, like natural numbers or fictional names, "really" exist or not. According to Carnap, such ontological questions arise due to a failure to distinguish external questions about the ontological status of the terms in a linguistic framework from internal questions about the meaning of terms within that framework, i.e., questions answerable in terms of the semantical resources of that linguistic framework. For example, suppose that the linguistic framework in question is the familiar "thing" language most of us implicitly adopt on a daily basis, i.e., one on which the world is composed of things like chairs, neutrinos and corporations. Then, at least according to Carnap, any external question about what it really means for a thing in this thing-world to exist "cannot be solved because it is framed in the wrong way" (207). Indeed, questions about whether the entity x is "real" or not should not be recast as questions about whether one believes in the reality of x but rather as questions about whether x is an "element" of the thing language or not.

[202] References are to the slightly altered re-print of ESO in the second, 1956, edition of Meaning and Necessity.
Of course, once a framework is adopted, at least in the sense that we make the practical decision to start using that language, this framework can be used to frame our experiences, including our own reports about the propositional attitudes we experience. Nevertheless, "the thesis of reality of the thing world," says Carnap, "cannot be among these statements, because it cannot be formulated in the thing language or, it seems, in any other theoretical language" (ESO 208). More specifically, linguistic frameworks are composed of a number of rules specifying the formation and interpretation of statements, including what it means to "accept" or "believe" such statements; moreover, one can change between linguistic frameworks simply by choosing to adopt a new system of rules to frame, from that moment onwards, all of one's evidential and/or theoretical statements. It is in this sense that the decision to "adopt" a linguistic framework is a practical rather than a theoretical matter (e.g. see ESO 217-8). In other words, Carnap's distinction between external and internal questions (relative to a particular linguistic framework) is a particular example of his more general distinction between the practical and the theoretical: external questions are better diagnosed as practical questions about the decision to adopt a linguistic framework, and internal questions are examples of theoretical questions answerable within a framework. So the external, or practical, choice of a linguistic framework seems to be an all-or-nothing affair, or what I will call coarse-grained change: from a host of alternative linguistic forms, an agent will choose one, and only one, system of rules; specifically, rules which will then be used to "frame" their statements and expressions.
Nevertheless, Carnap tells us that[203]

[t]he decision of accepting the thing language, although itself not of a cognitive nature, will nevertheless usually be influenced by theoretical knowledge, just like any other deliberate decision concerning the acceptance of linguistic or other rules. The purposes for which the language is intended to be used, for instance, the purpose of communicating factual knowledge, will determine which factors are relevant for the decision. The efficiency, fruitfulness, and simplicity of the thing language may be among the decisive factors. And the questions concerning these qualities are indeed of a theoretical nature. But these questions cannot be identified with the question of realism. They are not yes-no questions but questions of degree. (ESO 208)

So even though the choice to adopt a framework is binary, the process by which this choice is made is one of degrees. It is a delicate matter of weighing the consequences of formulating a logical system this or that way, including figuring out what kind of logical system would make it easiest to provide an interpretation in line with our methodological considerations. Different assertions have to be weighed and compared, for example, assertions about what exactly one's aims and preferences are, and the inclusion of certain theoretical assertions taken from outside the object language (e.g., results from computability theory) about how one could possibly accomplish subsets of those aims relative to these preferences. However, how this weighing is done is ultimately up to the agent; to adopt the language from chapter 2: there is no "meta"-framework we can appeal to in order to inform our practical decisions regarding how best to formalize the thing-language from a "neutral" point of view.
As an example, suppose an agent is deciding whether to keep using the thing-language or to instead adopt a process-language: in place of events defined in terms of both the qualities of things at a certain point in time and over some duration of time, they would instead talk about events as the organic interactions between physical processes during some period of time.[204] Notions of the "efficiency," "fruitfulness," or "simplicity" of both the process- and thing-languages, it seems, would be assessments made of these languages from the perspective of some metalanguage (which may very well be the thing-language itself plus certain mathematical resources, like set theory) about the relative merits of the logical and empirical consequences that can be formulated in either of these languages. Perhaps, just to provide an example, someone attentive to Carnap's practical/theoretical distinction could reason as follows. Whereas the process-language lends itself naturally to the formulation of the mathematical dynamical systems required by ecology, the thing-language instead lends itself to the creation of the systems of partial differential equations required by Newtonian physics. Then, in a way analogous to how one scientist may prefer non-Euclidean to Euclidean geometry, depending on whether one is worried about biological or physical phenomena, a community of scientists may find it more useful or convenient to adopt the process-language instead of the thing-language. Carnap is fairly explicit that something like this is, in fact, possible:

The acceptance or rejection of abstract linguistic forms, just as the acceptance or rejection of any other linguistic forms in any branch of science, will finally be decided by their efficiency as instruments, the ratio of the results achieved to the amount and complexity of the efforts required.

[203] Also see ibid., 221.
[204] For example, along the lines of Dupré (2008).
(ESO 221)

The process- and thing-languages, so to speak, provide us with different frames or lenses for conceptualizing the activities of scientists; however, for Carnap, whether one of these frames is "correct" is an external question: it is to be answered in instrumental terms, e.g., by measuring the efficiency of each language as a ratio of the number of useful results to a measure of the complexity of the language itself.[205] However, whatever the case, it would be, for Carnap, ultimately a practical decision whether to adopt a system of rules corresponding either to the thing- or the process-language. It is also a practical question, for Carnap, whether we would want to use a measure like the ratio of desired results to the complexity required as a measure of linguistic adequacy. However, once these decisions have been made by the agent, they commit themselves to a new system of rules, and these rules then "frame," so to speak, any and all theoretical assertions made by the agent. For Carnap, the sooner we recognize this lesson the sooner we can divert valuable cognitive labor away from worrying about traditional metaphysical questions to instead worrying about how to measure the efficiency, simplicity or fruitfulness of different linguistic frameworks, or even how to more efficiently draw out the theoretical consequences of different frameworks. Only then can we combat premature prohibitions against certain linguistic frameworks rather than just straightforwardly "testing them by their success or failure in practical use" (ESO 221). Indeed, to make such prohibitions about linguistic form,

[205] Presumably, however, we would have to carry out such an investigation from either the thing- or the process-language, for example, by measuring and comparing the efficiency of those scientists who opt to adopt the process-language in contrast to the thing-language.
says Carnap, "is worse than futile;" it is

positively harmful because it may obstruct scientific progress. The history of science shows examples of such prohibitions based on prejudices deriving from religious, mythological, metaphysical, or other irrational sources, which slowed up the developments for shorter or longer periods of time. Let us learn from the lessons of history. Let us grant to those who work in any special field of investigation the freedom to use any form of expression which seems useful to them; the work in the field will sooner or later lead to the elimination of those forms which have no useful function. Let us be cautious in making assertions and critical in examining them, but tolerant in permitting linguistic forms. (ESO 221; emphasis in original)

If Carnap's motivation for introducing the practical/theoretical distinction in discussions about ontological commitment is to grant us freedom and tolerance in investigating linguistic forms, is the choice of λ practical in the same way that we are free to adopt a language framework? Ostensibly, questions about how to choose a value for λ resemble an external question, i.e., a question about how to make coarse-grained decisions about one's entire inductive framework.[206] In CIM, for example, Carnap clearly distinguishes decisions about whether to change values of λ between empirical investigations from the decision to adopt a new value of λ in order to fix a single inductive method which will then be used for all of one's empirical investigations during a period of time (CIM 54). It is the latter kind of decision which concerns Carnap.[207] When an agent adopts a particular inductive method, understood as a logical concept of probability, they will

apply it to all inductive problems, problems of confirmation for all kinds of hypotheses; of estimation for all kinds of situations [...]; of choosing a practical decision; etc.
One inductive method is here envisaged as covering all inductive problems. (54; my emphasis)

However, Carnap also acknowledges that when it comes to the inductive concepts used in scientific reasoning, it may be difficult for scientists to make such wholesale changes to their inductive intuitions.[208] That is, in order to change a belief at will, good theoretical reasons are required. "It is psychologically difficult to change a faith supported by strong emotional factors (e.g., a religious or political creed)" (54-55). Nevertheless, as with the decision to adopt a linguistic framework, Carnap tells us that the decision to adopt a value of λ "is neither an expression of belief nor an act of faith, though either or both may come in as motivating factors" (55). Instead, "[a]n inductive method," Carnap tells us,

is rather an instrument for the task of constructing a picture of the world on the basis of observational data and especially of forming expectations of future events as a guidance for practical conduct. X may change this instrument just as he changes a saw or an automobile, and for similar reasons. If X, after using his car for some time, is no longer satisfied with it, he will consider taking another one, provided that he finds one that seems to him preferable. Relevant points of view for his preference might be: performance, economy, aesthetic satisfaction, and others. Similarly, after working with a particular inductive method for a time, he may not be quite satisfied and therefore look around for another method. He will take into consideration the performance of a method, that is, the values it supplies and their relation to later empirical results, e.g., the truth-frequency of predictions and the error estimates; further, the economy in use, measured by the simplicity of the calculations required; maybe also aesthetic features, like the logical elegance of the definitions and rules involved. (55)

It is worth some space, I think, to try to unpack this passage. First, to return to our previous discussion, Carnap's appeal to the practical/theoretical distinction in ESO seems by and large to secure freedom and toleration against the elimination of linguistic forms due to dogmatic ontological restrictions. In the above passage, however, by suggesting that the choice of an inductive method, characterized by a value of λ, is practical, Carnap is instead showing how a non-dogmatic investigation of inductive methods can take place. Different values of λ, according to Carnap, end up representing different inductive methods, or instruments, useful for framing the inductive deliberations of an agent. So here the practical choice of a value of λ signifies a positive, or constructive, project: for all the different values of λ, an agent can decide which value provides the best inductive instrument.

[206] In his contribution to Carnap's Schilpp volume, Arthur W. Burks explicitly draws a parallel between Carnap's external/internal distinction from ESO and the choice, outside a system, of a confirmation function versus the finding of confirmation values within a system; for Carnap's reply, see Carnap, 1963b, 979-982.
[207] Rosenkrantz (1981), for example, suggests that once λ is chosen, it is chosen for life (Ch. 1, §3, p. 4).
[208] Although Carnap doesn't put it in the following terms, perhaps the situation is similar to those who have more Bayesian or Likelihoodist inductive intuitions versus those whose intuitions reside with more classical statistical hypothesis testing.
Specifically, as a matter of practical choice, one will choose that instrument which best satisfies any number of pragmatic features, like whether the resulting instrument is itself parsimonious or easy to use (however, as Carnap intimated above, this choice may also be informed by any number of theoretical assertions).[209] Carnap started off with the rather philosophical problem of figuring out how to study a continuum of inductive methods. He designed and constructed the skeleton of a logical system, like LπN, and then defined a class of confirmation functions by stipulating a number of requirements, namely C1-11, that these functions must satisfy. The result is the λ-system; it is, as Carnap says, "an inexhaustible stock of ready-made methods systematically ordered on a scale" (55). Moreover, if an agent "feels," says Carnap,

that the method he has used so far does not give sufficient weight to the empirical factor in comparison to the logical factor, he will choose a method with a smaller λ – a little smaller or much smaller, according to his wishes. On the other hand, if he wishes to give more influence to the logical factor and less to the empirical factor, he will move up his mark on the λ-scale. Here, as anywhere else, life is a process of never ending adjustment; there are no absolutes, neither absolutely certain knowledge about the world nor absolutely perfect methods of working in the world.

[209] There is a certain similarity here between Carnap's talk of pragmatic and theoretical features influencing practical decisions and Kuhn's talk of scientific values influencing, but not uniquely determining, scientific theory choice; for more on scientific values see Kuhn (1977; 1983) and, more generally, Douglas (2009).
(55)

In this passage the choice of λ seems like an external question: it is a pragmatic matter how a scientist decides to adjust the value of λ as she uses the λ-system as an instrument for working with the world, as a guide for making non-arbitrary decisions. Yet we will see in the next section that Carnap adopts certain statistical notions, like that of a "biased" estimator, to show how, for a fixed state-description, certain values of λ are "optimal" in the sense that eλ provides us with the closest estimate of the "actual" values of some parameter. But surely, at least for Carnap, the question of whether λ is optimal must be an internal question: the notion of "optimality" is a technical notion definable within the semantics of the object language. So perhaps this is the kind of "blending" of the practical and theoretical which Carus and Stein suggest marks a break with Carnap's mature thought? I agree that it is a kind of blending of the practical and theoretical, but Carnap is cognizant of this blending and it is in no way fatal to his project. Firstly, as the discussion above about ESO makes clear, as long as we are not trying to reify λ itself, there is no trouble in slipping back and forth between external and internal questions when talking about choosing an adequate value of λ – or, likewise, between the formal and material modes of speech – if it helps the scientist to use inductive logic as an instrument, or helps the logician to use it as a heuristic for designing better inductive frameworks. Not even Carnap, in all his published writings, always clearly indicates when he is speaking informally at the level of pragmatics and methodology as opposed to stating a claim within a well-defined language framework (but, my oh my, does he try).
Secondly, it may be helpful to distinguish (the terminology is my own), even if only as a matter of degree, the choice of a language framework, like a semantic system such as LN, as a coarse-grained practical decision from the piecemeal modifications and extensions the logician makes to an already extant logical system – these are fine-grained practical decisions. Thus the decision to modify the value of λ need not be considered a wholesale, coarse-grained change of semantical system but rather a fine-grained change to the "same" semantical system – both are practical changes, but they differ in the degree and severity of the changes being made. I suggest that we can think of these fine-grained changes to LN – especially when defining the λ-system, estimation functions, or semantic concepts of information and entropy – as the sort of design changes made to a hierarchical engineering design. The conceptual engineering framework becomes especially apt when we start to worry about how to apply a pure inductive logic: what we want to do with the logic is specified by something like an operational principle, and as we make fine-grained changes to some pure inductive logic, these choices have repercussions for the other semantic concepts in the system. This is especially the case if we define all other inductive concepts on a single choice of a function cλ: as we change the value of λ, we likewise alter the meaning of all the other inductive concepts based on cλ. The question of what value of λ is adequate is now, in a certain sense, a design problem. Moreover, it is passages like the one quoted above that bring to mind Herbert Simon's notion of "satisficing"; it is this notion, rather than global optimization, which I suggest captures the sense in which a practical choice can, for Carnap, be satisfactory, efficient, fruitful or whatnot.
For example, notice that we have spent the majority of this chapter merely cataloging all the different ways in which Carnap's construction of an inductive logic, from the c∗-function to the λ-system, can be construed as a series of practical decisions. First, (1) there is the choice of L, understood as an axiomatic system of logical calculus. Of course, this choice will be influenced by practical considerations such as computational complexity. Second, (2) there is the choice of a semantic interpretation for L, a choice which can again be split into logical and methodological, or empirical, considerations. More specifically, we now have a series of practical choices about how to specify this interpretation. For example: (2a) we can place restrictions on the interpretation of L, like the requirements of completeness and independence; (2b) there remain various decisions which have to be made about how to design and construct any number of inductive concepts based on an adequate concept of degree of confirmation, e.g., estimation functions; and finally, (2c) we have to make decisions about how to define a class of adequate confirmation functions, e.g., a single function like c∗ or a system of functions, as with conditions C1-11. Lastly, (3) we have methodological decisions to make concerning how best to apply our interpreted inductive system. For example, if we want to use our inductive logic in decision theory or statistics, (3a) we will have to somehow coordinate adequate confirmation functions with the credence or credibility functions of ideal or actual agents. Moreover, (3b) we will have to supply methodological rules for applying our inductive logic, like the requirement of total evidence. This discussion is fairly schematic and abstract, but the point is this.
It seems as though, according to Carnap, we can keep, for the most part, a series of choices about (1-3) fixed, save for a decision about how to define the most basic semantic concept for the entire inductive logic; namely, a choice for how to define a concept of degree of confirmation, cλ, in terms of the value of a parameter like λ. Exactly here (1-3) resembles a kind of hierarchy of design decisions which need to be made in order to construct a pure inductive logic that can be applied to a particular scientific purpose. All of this talk of practical and theoretical decisions, however, is far too abstract to do much philosophical work. Fortunately, we will not have to deal in abstractions for much longer. Within the context of developing a more general method for theorizing about statistical estimation functions, Carnap himself shows how, through a logical investigation, "optimal" values of λ can be found. It is to this example that we now turn.

4.5 Finding Optimal Values of λ

Besides studying the "internal logical character" of an inductive method, Carnap says that in addition

we may confront it with a given series of events or a whole world, either the actual universe or an assumed one described in a given state-description, and examine how well it performs if it is applied to various parts of the world in order to obtain degrees of confirmation or estimates concerning other parts. (59)

More specifically, Carnap uses his λ-system to investigate the performance of a continuum of estimation functions.[210] Now, it would take an empirical investigation, employing perhaps the estimated squared error of estimation functions, to study the performance of these functions relative to actual empirical predictions made in scientific practice. Carnap does not engage in such an empirical investigation.
Instead, as I discussed at the end of section 4.3, Carnap assesses the performance of estimation functions within a given state description; it is such an investigation which Carnap says is "of a purely logical nature" (59). Indeed, to engage in the empirical investigation above would require us to use, for example, estimations of squared error to measure the performance of estimation functions, but that means we would already be presupposing a particular inductive method in order to carry out these estimations. It is only through this purely logical investigation, of assuming that a state description is true, that we can investigate inductive methods "on a neutral basis without presupposing the acceptance of one of them" (60).[211] Indeed, the purpose of Carnap's investigation of estimation functions on a "neutral basis" is to show that the preference among statisticians for "unbiased" over "biased" estimation functions is unwarranted. Specifically, Carnap shows that from a purely logical point of view, i.e., on the assumption that the state description k is the case, there exist, for large N, "biased" estimation functions with a smaller mean squared error relative to k than the mean squared error of a particular "unbiased" estimation function relative to k. Assuming that e is an estimate for the relative frequency rf and that r is the actual value of this frequency, then e is unbiased if, for any sample of size s, the mean value of e equals r, and the bias of such a function is the difference between e and r (59).

[210] A version of this section can be found in French (2015b).
However, it turns out that there is only one unbiased estimation function given by Carnap's λ-system, viz., the function characterized when λ = 0, e0.212 This inductive method is none other than that method which tells us probability values should be equal to observed relative frequencies, i.e., what Reichenbach calls the "straight rule" (44).213 Indeed, Carnap notices that it is a consequence of Fisher's method of maximal likelihood (e.g. in Fisher 1922) that, if R(u) is the parameter for some relative frequency rf and we use Fisher's method to find the maximal likelihood of R(u) given our current evidence, the probability of R(u) must be given by the straight rule.214 It is in part for this reason, says Carnap, that statisticians,
211 On the downside, however, "by framing the problem as a logical question, our investigation must necessarily abstain from making any judgment concerning the success of an inductive method in the total actual world. A judgment of the latter kind is obviously impossible from an inductively neutral standpoint" (60).
212 Technically speaking, e0 is not actually in the λ-system at all as it is not a regular confirmation function (42). Instead, Carnap calls c0 and e0 "quasi-regular" functions, meaning they can be characterized as the limits of regular functions as λ → 0 (42).
213 We will return to Reichenbach's work in the next chapter; for now all that matters is that Carnap notices that Reichenbach's so-called "rule of induction" (e.g. in Reichenbach, 1949, 446) "is essentially the same as the straight rule of estimation" (44).
214 For example, if a random variable x is expressed in terms of the parameter θ and xi is a sequence of samples from x, then where the posterior probability is given by P(θ|xi), the likelihood L(θ; xi) is just P(xi|θ); for the differences between the principle of likelihood and the law of likelihood, including how likelihoods may be used in scientific practice, see Edwards (1972); Hacking (1965); Sober (2008).
like Kendall (1948), prefer unbiased estimation functions (CIM 44).215 In other words, Fisher's general method is exactly the sort of empirical investigation of estimation functions which presupposes an inductive method, viz. the straight rule. The advantage that Carnap sees for his own "neutral" investigation of biased and unbiased estimation functions is that it does not presuppose any particular inductive method. It is to this investigation which we will turn next, and although the following couple of paragraphs are technical, this discussion will be of use when we later reconsider Carnap's practical/theoretical distinction. The crucial, logical, assumption is that some state description k in LπN is assumed to be the case. Relative to k, we can then investigate a continuum of estimation functions relative to the parameter λ for the relative frequency of the predicate M among all N individuals in k, or what Carnap calls rf. Specifically, for any observed sample of s many individuals, where the class K contains those N − s many individuals not yet observed, and where sM is the number of individuals from the sample that are M, Carnap shows that estimation functions can be characterized by the following equation (where w is the logical width of M and κ is 2^π) (1952, 62):

$$e_\lambda(\mathrm{rf}, M, K, e_M) = \frac{s_M + (w/\kappa)\lambda}{s + \lambda}. \qquad (4.7)$$

Because we are assuming that we already know k, the actual value of rf, i.e., r, is fixed; Carnap explains that we can then explicate the notion of the "measure of success" of eλ in terms of its mean squared error, which Carnap shows can be expressed with the following equation:

$$m^2(e_\lambda, M, k, s) = \frac{s \cdot r(1-r) + (w/\kappa - r)^2 \cdot \lambda^2}{(s + \lambda)^2}. \qquad (4.8)$$

Relative to the state description k and our sample of size s with sM many individuals which hold of M in LπN, we can vary the value of λ and try to find the smallest value of m²(eλ, M, k, s).
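To make equations (4.7) and (4.8) concrete, here is a minimal Python sketch of the λ-continuum estimate and its mean squared error; the function names are mine, not Carnap's. Setting λ = 0 recovers the straight rule, whose mean squared error reduces to r(1 − r)/s.

```python
def e_lambda(s_M, s, w, kappa, lam):
    """Estimate of the relative frequency of M (eq. 4.7):
    (s_M + (w/kappa)*lam) / (s + lam)."""
    return (s_M + (w / kappa) * lam) / (s + lam)

def msq_error(r, s, w, kappa, lam):
    """Mean squared error of e_lambda (eq. 4.8) relative to a state
    description in which the actual relative frequency of M is r."""
    return (s * r * (1 - r) + (w / kappa - r) ** 2 * lam ** 2) / (s + lam) ** 2

# With lam = 0 (the straight rule) the estimate is just the observed
# relative frequency s_M / s, and the mean squared error reduces to
# r(1 - r)/s, the familiar binomial sampling variance.
print(e_lambda(3, 10, 1, 2, 0.0))     # ≈ 0.3
print(msq_error(0.3, 10, 1, 2, 0.0))  # ≈ 0.021
```

Larger values of λ pull the estimate away from the observed frequency and toward the "logical factor" w/κ, which is what makes the resulting estimation functions biased.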
In other words, we will find that value of λ which minimizes the mean squared error of eλ in the state description k. Importantly for Carnap, this procedure generalizes not only for any predicate M, but also for the most basic predicates in L, i.e., the Q-properties 'Q1', ..., 'Qκ'. The basic idea is that, for a fixed state description k, one can count the number of times each Q-property uniquely holds of the N individuals in LπN; following Carnap, let us denote these numbers as Ni. Thus for each Qi, the actual frequency of Qi's in k is ri = Ni/N (note that ∑ ri equals one). After showing that the values of eλ(rf, Qi, K, eQ) are given by the quantity (si + λ/κ)/(s + λ), Carnap calculates the mean squared error of these Q-based estimation functions relative to λ as

$$m^2_Q(e_\lambda, k, s) = \frac{s - \lambda^2/\kappa + (\lambda^2 - s)\sum_i r_i^2}{\kappa(s + \lambda)^2}. \qquad (4.9)$$

At this point in Carnap's investigation, the term ∑ r_i^2 plays a very important role; specifically, it is the only term in equation (4.9) which tells us anything about the logical properties of the state description k.
215 Indeed, Carnap later opines that "Many contemporary statisticians seem to regard unbiased estimates as preferable, [...]. As far as I am aware, no rational reasons for this preference have been offered" (73). Similar remarks can be found in Howson and Urbach, 2006, 164-6; also Hastie et al., 2010, 52 and chapter 7. Within the theory of statistical inference – in part due to a theoretical trade-off between the bias and variance of an estimator where multiple parameters are being modeled – unbiased estimators are not always the most satisfactory estimators, e.g. see Efron (1975).
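As a sanity check on equation (4.9), the following sketch (my own illustration, not Carnap's code) computes m²_Q for an arbitrary tuple of actual Q-frequencies; with λ = 0 it reduces to (1 − ∑ r_i²)/(κs).

```python
def msq_error_Q(r, s, lam):
    """Mean squared error (eq. 4.9) of the Q-based estimation functions
    relative to a state description with actual Q-frequencies
    r = (r_1, ..., r_kappa), for sample size s and parameter lam."""
    kappa = len(r)
    sum_r2 = sum(ri ** 2 for ri in r)
    return (s - lam ** 2 / kappa + (lam ** 2 - s) * sum_r2) / (kappa * (s + lam) ** 2)

# With lam = 0 this reduces to (1 - sum r_i^2) / (kappa * s); for the
# two-property case r = (0.3, 0.7) and s = 10 that is 0.42/20 = 0.021.
print(msq_error_Q((0.3, 0.7), 10, 0.0))  # ≈ 0.021
```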
More specifically, Carnap tells us that we can use this quantity to help explicate the vague notion of the "uniformity of the universe" as a concept of "degree of order".216 If one of the ri, say rj, equals one, that means for any l ≠ j, rl is equal to zero; the "universe" according to the state description k is maximally homogeneous in the sense that all individuals in k are Qj. In this case, ∑ r_i^2 is equal to one. However, if the Q-properties in k are evenly distributed (i.e., such that N1 = N2 = · · · = Nκ), then the "universe" is maximally heterogeneous. In this case, given that N is large enough, Carnap shows that ∑ r_i^2 is approximately 1/κ. Assuming that 1/κ < ∑ r_i^2 < 1, Carnap then calculates the change of m²_Q as λ varies by taking the partial derivative of equation (4.9) and then shows, when the partial derivative is equal to zero, that the only root which satisfies the above condition is

$$\lambda = \frac{1 - \sum_i r_i^2}{\sum_i r_i^2 - 1/\kappa};$$

call this root λ∆.217 Using a bit of algebra, replacing λ with λ∆ in equation (4.9), it is then easy to see that when m²_Q is at its minimum value it is equal to

$$\frac{(1 - \sum_i r_i^2)(\sum_i r_i^2 - 1/\kappa)}{\kappa\big[(1 - \sum_i r_i^2) + s \cdot (\sum_i r_i^2 - 1/\kappa)\big]}. \qquad (4.10)$$

216 This notion of a degree of order crops up in Carnap's work on information and entropy, e.g. in Bar-Hillel and Carnap (1953); Carnap (1977); Carnap and Bar-Hillel (1952). There are a few manuscripts in the Pittsburgh archives which discuss this concept, especially a manuscript from May 1952 called "The Concept of Degree of Order" (RC 086-07-01). There Carnap defines the explicatum of a degree of order differently from above, viz., as o*(Z) =Df ∏_i Ni!/N!, where Z is a state description, and the degree of disorder, d*, as 1/o*. Interestingly, it is there that Carnap discusses how one could define a degree of order for ordered individuals (either linearly, cyclically, or as a time-series) in L.
Interestingly, part of the motivation to work on a concept of degree of order is to explicate a semantic concept of randomness (see section IV of the manuscript); for more on randomness see Church (1940) and Chaitin (1990).
217 Carnap treats the cases when ∑ r_i^2 is equal to 1 or to 1/κ separately (69).
Carnap calls the value λ∆ the optimal λ-value; that is, e_{λ∆}, or just e∆, is the optimal estimation function based on the optimal inductive method for the state description k (69). As it turns out, it is not too difficult to find state descriptions which have optimal lambda values λ∆ such that m²_Q(e∆) < m²_Q(e0).218 Consequently, in light of such optimal values of λ, Carnap says that "widespread preference for the method of the straight rule e0, in the form of either the principle of maximum likelihood or the principle of unbiased estimation, is not justified" (75). Most interestingly, Carnap then goes on to suggest that he can extend this result even further. Although it is unlikely that some person X will ever be in a position to estimate the quantity λ∆ for the actual universe, perhaps we could find, as Carnap puts it, a "lower bound" for X's best estimate of λ∆ for the actual universe (76). Specifically, assuming both that we don't know the actual value of ∑ r_i^2 for the unknown actual state description kT and that the universe is not totally homogeneous (i.e., ∑ r_i^2 ≠ 1), Carnap finds a lambda value λ′ (and the related estimation function e′) for which the following inequalities hold (theorem 25-7; §24):219

$$m^2_Q(e^\Delta_T, k_T, s) \le m^2_Q(e', k_T, s) < m^2_Q(e_0, k_T, s).$$
(4.11)
Consequently, Carnap tells us that the significance of this result is the following:

we have found that in any universe which contains at least two unlike individuals there is a specifiable estimate-function e′ with λ′ > 0 such that, with deductive certainty, the mean square error of e′ for all Q's through the whole universe is less than that of the straight rule e0 with λ = 0. e0 is unbiased, e′ is not. It seems to me that this result shows a very serious disadvantage of the principle of preferring unbiased estimate-functions and of the straight rule. (79)

Although it mattered to Carnap, for the moment our concern is not with whether Carnap's result is important to contemporary statisticians. Instead, what matters to us is how the above "neutral" investigation relates to Carnap's practical/theoretical distinction. In summary, relative to some object language L and some particular state of the world described by the state description k, and provided that the conditions C1-11 hold, Carnap suggests we can
218 In particular, Carnap considers a state description such that M is the only primitive property, so that κ = 2 (i.e., the two Q-properties are characterized by M and ¬M); r has the value 0.3; and the observed sample of size s is equal to 10. Carnap then calculates the value m²_Q(e0) = 0.0210 and then, after showing that e∆ has a bias of 0.0689, that m²_Q(e∆) = 0.01236. Thus m²_Q(e∆) = 0.01236 < 0.0210 = m²_Q(e0) (73-5).
219 The details of how Carnap constructs λ′ are not critical to our current discussion, but basically X is to choose that Q-property Qm for which the number of Qm's in the current sample of size s is greatest and then, on the basis of eQ, construct a state description k′ such that the N − s remaining individuals are all Qm (while making sure k′ is not homogeneous, i.e., that rm ≠ 1) (76). Then λ′ is the optimal value of λ for k′ and, as it turns out, it is a theorem that λ∆_T ≥ λ′ > 0 (76-7).
carry out a "neutral" investigation of the success of inductive methods as the value of λ varies, without presupposing that any particular inductive method will be used to define this measure of success. Now all this may seem like Carnap is reneging on a strict split between the practical and theoretical. As each choice of a different value of λ, so to speak, fixes which inductive framework is adopted, isn't the question of whether or not some value of λ is optimal a theoretical rather than a practical question about which inductive framework ought to be adopted? The answer, I think, is no. The reason is that this investigation is purely logical: one first needs to make practical decisions, decisions perhaps informed by empirical facts, about how to construct L, a partial or "skeleton" interpretation for L, and the λ-system defined relative to L. Only then, after these practical decisions have been made, is it possible to formulate a theoretical question about which values of λ fix a cλ which minimizes the mean squared error relative to a fixed state description k. The same is true for the seemingly empirical investigation of calculating an optimal value of λ for the "actual" world: we only require, for some unknown "actual" state description kT, that it is not homogeneous, and as a mathematical consequence we find the result expressed in equation (4.11). Indeed, Carnap's "neutral" investigation is an example of what I mean by a fine-grained practical choice of an inductive framework. Loosely speaking, we make the practical decision to "fix," so to speak, a semantical system which includes the λ-system, and then we can ask theoretical questions about which values of λ are preferable relative to the logical properties of this inductive framework itself.
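To see the optimal-λ computation in action, here is a small numerical check. The numbers κ = 2, r = 0.3, s = 10 follow Carnap's own illustration discussed above; the code itself is my sketch, not Carnap's. It computes λ∆ from the root of the partial derivative of equation (4.9) and confirms that the resulting biased method has a smaller mean squared error than the unbiased straight rule e0.

```python
def msq_error_Q(r, s, lam):
    """Mean squared error (eq. 4.9) of the Q-based estimation functions
    relative to a state description with actual Q-frequencies r."""
    kappa = len(r)
    sum_r2 = sum(ri ** 2 for ri in r)
    return (s - lam ** 2 / kappa + (lam ** 2 - s) * sum_r2) / (kappa * (s + lam) ** 2)

def lambda_opt(r):
    """Carnap's optimal lambda-value, valid when 1/kappa < sum r_i^2 < 1."""
    kappa = len(r)
    sum_r2 = sum(ri ** 2 for ri in r)
    return (1 - sum_r2) / (sum_r2 - 1 / kappa)

r = (0.3, 0.7)   # kappa = 2: the Q-properties M and not-M, with r = 0.3
s = 10           # observed sample size, as in Carnap's illustration

lam_star = lambda_opt(r)                 # ≈ 5.25
m_straight = msq_error_Q(r, s, 0.0)      # straight rule e_0: ≈ 0.021
m_optimal = msq_error_Q(r, s, lam_star)  # the biased optimal method

# The biased optimal method beats the unbiased straight rule in this
# state description, and a coarse grid search over lambda agrees that
# lam_star is the minimizer.
print(m_optimal < m_straight)  # True
print(all(m_optimal <= msq_error_Q(r, s, 0.1 * k) + 1e-12 for k in range(200)))  # True
```

The point of the check is only the inequality, not the particular numbers: relative to this state description, the theoretical question "which λ minimizes m²_Q?" has a determinate answer once the practical choices fixing the framework have been made.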
These theoretical results can then be used to inform the practical decision to adopt a particular value of λ for other inductive problems or, instead, to replace the current object language with a more or less complex object language. In other words, Carnap's "neutral" investigation does not simply replace all theoretical with practical questions. Instead, relative to which practical choices are made first, this investigation allows us to reformulate the single, coarse-grained decision about whether to adopt an inductive framework into two related practical questions: first, how to formulate an object language and a partial interpretation for that language, and then, second, how to choose a value of λ based on a theoretical question about possible optimal values of λ formulated in terms of the antecedently chosen object language.

4.6 Conclusion

That coarse-grained practical decisions can be decomposed into separate, but inter-connected, fine-grained decisions is exemplified best, I think, by Carnap's inductive logic understood as an explication of inductive reasoning. Carnap once remarked that once an explicatum has been made exact – that is, presumably an explicatum like cλ for some adequate value of λ – it can be "introduce[d] [...] into a well-connected system of scientific concepts" (LFP 7). As I have shown in this chapter, as Carnap understood the situation, the logical concept of probability, as explicandum, belongs to a system of related inductive concepts, like relevance or estimation. So when a coarse-grained decision is made to adopt a particular explication of this logical concept of probability (i.e., a choice of cλ) as an adequate explication, one also adopts a whole range of explicata for those inductive concepts conceptually related to the concept of logical probability.
We saw this above for the case of estimation functions: once an adequate confirmation function c is adopted, we can define an adequate e-function as that estimation function based on c. The same is true for concepts like relevance, degree of order, information and entropy. It is exactly here that it is crucial to recognize the revealed difficulties of the task Carnap sets himself when he says his work in inductive logic is meant to provide an explication of inductive reasoning. Logical probability, according to Carnap, is the concept which forms the basis for all inductive reasoning, and the closer we get to providing an adequate explication of this concept based on a single coarse-grained decision, the more this explication may have to be constantly modified and altered so as to make adequate any number of explications for other inductive concepts based on the explication of logical probability. Now is not the place to discuss the technical details of how exactly Carnap explicates these concepts in terms of an adequate c-function.220 However, in figure 4.1, I schematize how these concepts, as explicata, are interrelated (an arrow "→" between explicated concepts, or nodes, A and B denotes a partial definitional dependency between B and A, e.g. B being "based on" A). Each of these concepts may be more or less important in different scientific disciplines relative to the other concepts; however, their common link, as Carnap argues, is the concept of logical probability.
220 Indeed, I'm doubtful whether a detailed discussion of how Carnap constructs these concepts will help explain Carnap's practical/theoretical distinction in any more detail than our discussion of Carnap's work on estimation functions.
[Figure 4.1 appears here: a diagram of nodes connected by arrows, running from state descriptions and ranges for L, measure functions m(h), confirmation functions c(h, e), estimation functions e, relevance functions r, degrees of order O(k) and disorder D(k), and content and information measures cont(h) and inf(h), through to the classification-system and entropy concepts S′B, S∗∗ and S∗ from Carnap's work connecting inductive logic to statistical mechanics.]
Figure 4.1: A "Well-connected" System of Inductive Concepts. This diagram only offers a schematic representation of some but by no means all the mathematical dependencies between Carnap's various technical projects in the early 1950s. The dotted lines represent how different confirmation functions belonging to different scientific disciplines may be related and suggest how Carnap could possibly understand why a choice of a single confirmation function could impinge on the meaning of logical probability across disciplines. "Well-connected" is in scare-quotes because, of course, Carnap stopped working on issues of information and entropy and so this system of inter-related explicata was never completed. For what the technical terms in the lower half of the diagram mean, e.g., "environment," "classification system" and "descriptions," consult Shimony's introduction to Carnap (1977) and Köhler (2001). A "proper" measure function is roughly the same as a measure function which is both symmetric and regular. Moreover, I do not graphically represent any of the mathematical connections between Carnap's explications of entropy, S∗∗ or S∗, and information, inf(h/e).
Nevertheless, despite the original reasons one may have had for preferring one adequate c-function over others, once this function is used to explicate other related inductive concepts these newly explicated concepts may not themselves be adequate relative to the ends of different scientific communities. This seems to be a much more robust account of Wissenschaftslogik than the one Carnap articulated in LSL. For example, to introduce a new piece of terminology, we could say that although the chosen c-function is locally adequate relative, perhaps, to philosophers and mathematicians, it is not adequate relative to a larger community of scientists, e.g., statisticians.221 That this is the case would then be reason enough to search for a more adequate c-function. That the decision to adopt a given c-function can be a coarse- or fine-grained decision, and that any given c-function can be more or less locally adequate, captures, I argue, the kind of role for the practical/theoretical distinction we saw in the design hierarchies exhibited in the activities of engineering design and production from chapter 3. Indeed, that Carnapian explication is a kind of conceptual engineering is best captured from this perspective of the difficulties encountered when a single confirmation function is used to explicate an entire conceptual network of interrelated inductive concepts; the search for an adequate inductive logic is one of satisficing, not finding the "correct" inductive logic. The resulting picture of Carnap's inductive logic as a kind of conceptual engineering is at odds with suggesting that Carnap is trying to provide an epistemology – let alone a robot epistemology – stating how one ought to formulate one's beliefs.
Nevertheless, there is reason to think something is amiss in all this talk of practical decisions and conceptual systems: where is the talk about justification? I mean, shouldn't Carnap be worried about how we should understand the meaning of probabilities or which inductive methods we ought to use? That Carnap, for the most part, eschews the vocabulary of traditional epistemology is the topic of the next chapter. Indeed, situated against the backdrop of Reichenbach and Feigl's pragmatic justification, or vindication, of induction and the new developments in normative decision theory, we will find that Carnap, in 1957, rejects the idea that we need a non-circular, or foundationalist, justification of induction at all. Of course, once we adopt an inductive standard, there is a perfectly good sense in which we can say which inductive methods one should or shouldn't adopt, i.e., we may propose the rule to adopt those inductive methods consistent with that standard. But whether it be for ideal rational agents or robot scientists, the decision to adopt one inductive standard over another is always, for Carnap, a practical decision. And because of this, inductive logic – just like any other piece of logic put to applied use – is neither entirely normative nor descriptive. Just like any other piece of logic, inductive logic is, for Carnap, conceptual technology.222
221 There is a certain similarity here with what has been called local and global induction (e.g. see the articles in Bogdan, 1976); however, there these senses of "local" and "global" are relative to the kind of skepticism one is willing to countenance.
222 This point has been emphasized to me by Alan Richardson on numerous occasions; as Alan once asked me in personal communication: "Does a thermometer describe the intuitions of warmth and cold that people have or is it to tell us what intuitions we ought to have on such matters? I suspect neither.
Technology is neither descriptive nor normative. For Carnap, logic is technology."

Chapter 5 Constructing Rational Decision Theory

When no guarantee of success, or of success in a certain number of cases, can be given for induction – and this should now be sufficiently clear – the best we can do is to prove that induction is the best we can do. If some of my critics do not regard the best as good enough, I recommend that they move into a different universe, leaving this one to those who wish to live as best they can.
- Hans Reichenbach, "Reply to Donald C. Williams' criticism of the frequency theory of probability" (1945)

The characterization of logic in terms of correct or rational or justified belief is just as right but no more enlightening than to say that mineralogy tells us how to think correctly about minerals. [...] The activity in any field of knowledge involves, of course, thinking. But this does not mean that thinking belongs to the subject matter of all fields. It belongs to the subject matter of psychology but not to that of logic any more than to that of mineralogy.
- Rudolf Carnap, The Logical Foundations of Probability (1950)

In the last chapter we saw both how Carnap treats inductive logic as an instrument and how he thought he could, through any number of what I call fine- or coarse-grained practical decisions, modify or expand these inductive technologies so that they can be used to clarify foundational problems in theoretical statistics and even theoretical physics or the information sciences. I then asserted that these decisions resemble the kind of hierarchical engineering design problems we encountered in chapter 3.
Even the task of finding an adequate inductive logic, e.g., by finding an optimal value of λ, was revealed to have no straightforward solution: "life is a process of never ending adjustment," said Carnap, "there are no absolutes, neither absolutely certain knowledge about the world nor absolutely perfect methods of working in the world." The finding of an adequate inductive logic resembles, I argued, a process similar to Simon's notion of satisficing. In this chapter we talk about what it means for an explication project to be successful; namely, how a pure inductive logic can be successfully applied to the empirical sciences. I focus on a particular historical episode from the history of philosophy of science. For the logical empiricist Hans Reichenbach, probability becomes, in his 1938 Experience and Prediction, the central concept of epistemology and the foundation for all of scientific knowledge – but probabilities for Reichenbach must have a meaning based on the relative frequencies observed in the world; how else could probability possibly be a guide in life? Carnap, by contrast, is a pluralist about probability: both frequentist and logical meanings of probabilities can be fruitful for the empirical sciences. I explain how, in response to his peers like Reichenbach and Herbert Feigl, Carnap explains how inductive logic can be applied to normative or empirical decision theory, theories founded in part by Frank P. Ramsey and, in the 1950s, by the statistician L. J. Savage.223 However, Carnap argues that the success, or adequacy, of such an applied inductive logic need not be defined in terms of empirical success. The result, I suggest, is what marks off professional engineering, which is concerned with making something happen in the world, from conceptual engineering.
For Carnap, the adequacy of an inductive logic need not be measured by its "fit" with the world, but can instead be measured by its "fit" with a hypothetical world, like the "universe" according to the state-descriptions in some object language like L. Specifically, normative decision theory, for Carnap, becomes a kind of "conceptual space" in which he can freely experiment and design new inductive logics which are sensitive to the science of decision making under uncertainty. In this chapter I discuss a number of historical developments, including: the work by Ramsey, Feigl and Reichenbach on probabilities and the justification of induction; the foundations of statistics and behavioral economics with reference to the work of John von Neumann, Abraham Wald and L. J. Savage; Carnap's attempt in the late 1950s to explain his views to his peers, including Carl Hempel; and, finally, the culmination of Carnap's mature views on the application of inductive logic to decision theory in his 1962 "The Aim of Inductive Logic." These episodes fit together, historically and conceptually, to tell a narrative about how Carnap finally comes to understand the relationship between inductive logic, normative decision theory and empirical decision theory. All of these historical elements are required, I argue, to explain why Carnap thinks a logical concept of probability can be used to guide our decisions in life.
223 As we will see in later sections, "empirical," "descriptive," "rational" and "normative" decision theory are the terms the scientists and statisticians, like L. J. Savage, use and coin themselves. There is already much work done in the history of economics tracking different notions of "rationalizing" economic actors, starting with John Stuart Mill and continued by John von Neumann, Oskar Morgenstern and Milton Friedman; see Heukelom (2014, chapter 1).
5.1 Carnap on Hume's Problem of Induction

In the late 1950s, the philosopher of science (and student of Reichenbach) Wesley C. Salmon suggested that the then current philosophical work on probability and induction could be split into two camps: "Anti-Warrantists," who claimed it "impossible or unnecessary" to justify induction, and "Warrantists," who attempted to salvage some sense in which induction could be justified.224 Hume's problem of induction looms large here: Why should the future be like the past, or rather, what licenses a scientific agent to make inferences about the future based only on evidence about the past? The received view of Hume's response, at least according to philosophers of science working on induction and probability in the 1950s, is largely negative: there can be no justification for the induction that the future will be exactly like the past, as it is impossible to know with certainty whether there is any metaphysical necessity linking past and future events. Rather, inductive behavior is best explained as a matter of psychology: we tend to form inductive habits connecting our memories about the past with our expectations about the future. "Warrantists," then, seek a stronger sense of justification than mere psychological habituation. The logical empiricists Herbert Feigl and Hans Reichenbach, in particular, advocated in the 1930s for a "pragmatic" justification of induction (see section 5.4 below). The crucial interpretive point, to use Salmon's terminology, is that Carnap cannot be neatly assimilated as either a "Warrantist" or an "Anti-Warrantist." For although Carnap does think there is a sense in which the inductive practices of scientists can be "justified" using an "applied" inductive logic, he nevertheless rejects the idea that "rules of acceptance" or "rules of detachment" are required for inductive logic to be a guide in life.
224 Salmon (1957). See Kyburg (1964) for a detailed review of the philosophical "problem-space" for probability and induction in the 1950s and early 1960s, including who gets labeled as a "Warrantist" or "Anti-Warrantist."
Applied inductive logic, for Carnap, need not be prescriptive in the sense familiar to contemporary formal epistemologists, i.e., in the sense that inductive logic should tell us what a rational agent ought to believe or what actions ought to be performed. Indeed, even when Carnap openly interprets inductive logic subjectively, he never understands himself to be embracing normative epistemology (see section 5.5). Even in its application to rational and empirical decision theory, inductive logic remains an instrument. Carnap closes his 1962 "The Aim of Inductive Logic" with a remark on a dilemma engendered by "the feeling," shared by scientists and everyday folk alike, "that [inductive reasoning] is valid and indispensable," despite the fact that Hume's original problem of induction remains without solution. So "[w]ho is right," asks Carnap,

the man of common sense or the critical philosopher? We see that, as so often, both are partially correct. But still the basic idea of common sense thinking is vindicated: induction, if properly reformulated, can be shown to be valid by rational criteria. (1962, 318)

What does it mean to "properly reformulate" induction? In the paper, Carnap shows how it is possible to move from empirical decision theory to rational, or normative, decision theory, and finally from normative decision theory to inductive logic. In the first transition, we move from empirical (or better: psychological) concepts to the concepts of rational decision theory, or what Carnap calls "quasi-psychological" concepts, like 'degree of credence', defined for idealized, rational agents on the basis of what Carnap calls "requirements of rationality."
In the second transition, these requirements of rationality can be used to place constraints on how to interpret and construct a pure inductive logic. Thus, with the conceptual transitions between empirical decision theory, normative decision theory and inductive logic in place, if a person X makes the practical decision to assign a probability value, say 0.65, as their estimate concerning a hypothesis H about some physical magnitude based on their total observational evidence E, they can then construct empirical or "rational" credence concepts from an inductive logic containing the quantitative confirmation function c, where c(H,E) = 0.65. With this function c in hand, X can then systematize their own inductive habits or practices by appealing to the values c assigns to other empirical hypotheses, i.e., c provides X with the "rational criteria" required for assigning probability values to future events. It is in this sense that inductive reasoning is vindicated in the face of Hume's problem of induction: inductive logic can serve as a guide for decision making. Carnap's solution to Hume's problem of induction, such as it is, probably sounds foreign to the ears of contemporary philosophers of science. Understanding why Carnap thinks the procedure in the above paragraph constitutes a "solution" to Hume's problem will require the resources discussed throughout this chapter. Before we move on, however, I want to make two observations. First, in his 1945 paper "On Inductive Logic," using the language of rational reconstructions, Carnap quickly distinguishes between two ways in which an inductive logic – understood as a rational reconstruction of a body of inductive practices or beliefs – can be justified. On the one hand, an inductive logic can be justified in the sense that it is valid, i.e., the inductive logic correctly corresponds to the inductive beliefs it is based on.
Such a solution would amount to solving "the genuinely philosophical problem of induction" (95-6).225,226 On the other hand, the inductive logic is justified in the weaker sense that it is a satisfactory reconstruction of those beliefs: Our system of inductive logic, that is, the theory of c∗ based on the definition of this concept, is intended as a rational reconstruction, restricted to a simple language form, of inductive thinking as customarily applied to everyday life and in science. Since the implicit rules of customary inductive thinking are rather vague, any rational reconstruction contains statements which are neither supported nor rejected by the ways of customary thinking. Therefore, a comparison is possible only on these points where the procedures of customary inductive thinking are precise enough. (1945a, 95) It is in this weaker sense of justification (if we want to call it justification at all) that Carnap, in the 1962 paper, provides a solution to Hume's problem of induction by showing that one can systematize the procedures of the customary ways of inductive thinking with an inductive logic.227 The second observation I want to make is that Carnap had what some philosophers of science in the 1950s may have considered a non-standard interpretation of Hume. In the 1960s, Richard Jeffrey attempts to diagnose an earlier miscommunication between Carnap and Nelson Goodman in the 1940s concerning both the relevance of Carnap's principle of total evidence and an earlier version of Goodman's "new" riddle of induction (see Goodman 1946). According to Jeffrey, Goodman has a strict reading of Hume, i.e., "in induction we select some property of observed objects and attribute it to unobserved objects in such way that the hypothesis must say about the future all and only what the evidence says about the past" (emphasis in original; Jeffrey, 1966, 283–4). 
For Goodman, Hume's problem is the problem of predicting exactly what properties will and will not hold of each event we observe in the future on the basis of what has happened in the past. By contrast, Jeffrey suggests that Carnap has a slightly different understanding of Hume's problem; it is a problem that can be solved once we simply

identif[y] the credibility of a sentence h for a particular person at a particular time with the number c(h, e), where e records everything the person knows at that time in the way of factual information expressible in the language on which c is defined. No harm is done if e omits information that is irrelevant to h [. . . ]. (Jeffrey, 1966, 284)

For Carnap, the point is that the function c lets us characterize the person's inductive habits as a credence function (see section 5.5 below), a function the person can then use as their "rational criterion," i.e., as a guide, for inductive decision making.

225 It is clear from the text that when Carnap uses "valid" in the "The Aim of Inductive Logic" quote above, he means something like logical validity. However, in the 1945 quote, by "validity" Carnap instead means factual truth, or a theory's holding of a set of natural phenomena. For a similar usage of the second sense, see Carnap, 1939, 206.

226 This is similar to what Reichenbach calls the "critical" task of epistemology; see the second part of section 5.3 below.

227 Aside from the odd phrase here or there, and from both the appendix to LFP and a footnote in Carnap (1947b), Carnap rarely uses the language of justification, and when he does, it is not in reference to a robust, or normative, account of justification but more along the lines of finding an adequate interpretation for an axiomatic system.
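To give the flavor of the kind of function c at issue, consider an illustration of my own (not an example from Carnap's text here): in Carnap's λ-continuum of inductive methods, of which the c* of "On Inductive Logic" is the special case λ = κ, the degree of confirmation that the next observed individual falls in cell i of a family of κ predicate cells, given that s_i of n observed individuals fell there, is (s_i + λ/κ)/(n + λ). A minimal sketch, with illustrative numbers:

```python
def c_lambda(s_i: int, n: int, kappa: int, lam: float) -> float:
    """Carnap-style degree of confirmation that the next individual falls
    in cell i, given s_i of n observed individuals in that cell, kappa
    predicate cells, and inductive parameter lambda."""
    return (s_i + lam / kappa) / (n + lam)

# c* is the special case lam = kappa, i.e., (s_i + 1) / (n + kappa).
print(c_lambda(0, 0, 4, 4.0))     # with no evidence: the logical value 1/kappa
print(c_lambda(65, 100, 2, 2.0))  # with evidence: close to the observed frequency 0.65
```

With no evidence the function returns the purely logical value 1/κ; as evidence accumulates it tracks the observed relative frequency, which is one sense in which such a function can "systematize" X's inductive habits.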
Regarding Goodman's reading of Hume, Carnap tells Jeffrey in personal correspondence that after reading Jeffrey's paper he has "understood Goodman's basic idea for the first time in my life" and now at least everything is clear;

[Goodman] always took Hume's terms like "repetition" or "we expect to see in the future what we have observe[d] in the past" in the literal, narrowest way possible; a thing that is so far from my thinking in inductive logic – as you make now clear – that it never occurred to me that anybody might mean it in this way. (By the way, did Hume mean it in this specific sense?) (Carnap to Jeffrey, April 27, 1966, RC 083-01-16)

Carnap's solution to Hume's problem of induction, then, is not a solution to the problem of why the future must resemble the past. Instead, it is a solution to the problem of how one can construct an agent's "credence" or "credibility" function – from empirical or normative decision theory, respectively – which may then be used as a "rational criterion," or guide, for that agent. So how does Carnap understand the relationship between inductive logic and its applications to decision theory? To answer that question, we need to turn to Carnap's idea that Ramsey's decision theory provides us with a way to transition from empirical to rational, or normative, decision theory.
5.2 Ramsey's Decision Theory as Qualified Psychologism

In order to explain how Carnap understands the importance of Ramsey's 1926 paper "Truth and Probability" for the relationship between inductive logic and decision theory, we first need to discuss Carnap's views on psychologism – views which he adopts from Frege.228 In sections 11 and 12 of LFP, Carnap explicitly credits Frege, along with Husserl, for diagnosing a certain "discrepancy," called "psychologism," committed by those logicians who, on the one hand, treat logic as dealing with objective relations (in the sense that an objective logical relation, like all physical relations, "is complete without any reference to the properties or the behavior of any person") but who nevertheless, on the other hand, go on to characterize logic as being concerned with the subjective nature of human beings (e.g., when logic is "characterized as the art of thinking, and the principles of logic are called principles or laws of thought") (38-40).229 Carnap differentiates between two kinds of psychologism. The first is what Carnap calls "primitive" psychologism, according to which the logician claims that their logical system actually does somehow bear on actual psychological processes.230 For the second kind, the logician denies that their logical system bears on "the actual processes of believing, thinking, inference," yet they "still [cling] to the belief that there must somehow be a close relation between logic and thinking, they say that logic is concerned with correct or rational thinking" (41).

228 Indeed, in a letter to Abner Shimony, Carnap remarks that he remains "strongly influenced" by Frege's criticisms of psychologism; August 21, 1969, RC 084-55-02. Moreover, in 1955, Carnap cites section 12 of LFP and says that he "emphatically" rejects the "subjective probability conception of probability" in a letter to Jerzy Neyman; August 19, 1955, RC 084-44-02.
Carnap calls this qualified psychologism: even though an author may talk about "rational degrees of belief" and the like, these psychological concepts are only defined for idealized, rational agents – and here the references to hypothetical agency can be dropped as "inessential relics from a traditional way of speech," thus transforming these concepts into purely logical concepts (42). This notion of qualified psychologism is crucial for understanding Carnap's own contributions to decision theory.231 As Carnap recounts the history of decision theory in LFP, although Ramsey initially framed his original logic of partial belief from 1926 as a psychological project, in light of a note from 1929 Ramsey later understood his logic of partial belief as a kind of qualified psychologism (LFP 46-7).232,233 Indeed, Carnap later writes to L. J. Savage,

229 In particular, Carnap goes on to note that, so understood, psychologism is a special case of a more general "discrepancy" common in the history of science between "what an author actually does and what he says he does" (37).

230 Circa 1950, Carnap presumably considers such cases very rare: the only example of genuine psychologism about probabilities that Carnap gives is the physicist James Jeans, regarding interpretations of quantum mechanics (LFP 47).

231 Isaac Levi, however, provides an alternative reading of Carnap's qualified psychologism. According to Levi, the upshot of sections 11 and 12 of LFP is that Carnap, like Frege, is interested in articulating truly normative principles of deliberation; see Levi, 1980, 424–30; 1967. I direct those interested in understanding the roots of Levi's disagreement with Carnap – specifically, that Carnap has a "guidance counselor" view of science (RC 084-40-11, p. 8) – to a series of letters between the two from the spring of 1963 to the early months of 1964, RC box 84, folder 40, documents 03 through 10; Levi's manuscript is titled "Carnap on Logic," RC 084-40-11, dated May 23, 1963.
[. . . ] Ramsey's note might be understood as the transition from empirical decision theory to rational decision theory and finally to logic. His formulations are brief and indeed not very clear. I managed to understand them only after I had myself seen the necessity of a transition of this kind. I came to the connection between these fields in my thinking [in] a temporally reverse order. I began with the logical concepts, and only later I recognized that it is necessary – or at least practically advisable – for giving the reasons for the choice of the logical concepts to go over to rational decision theory, which in turn is best understood by approaching it by the way through empirical decision theory. I believe that in this development of my conception Ramsey's ideas had some influence, although at that time I was not entirely aware of this influence. (Carnap to Savage, April 20, 1961, RC 084-52-13; emphasis mine)

The relevance of Ramsey's work – to which we will return throughout this chapter – will become apparent as we explain his decision theory.

232 The influential sentence from Ramsey's 1929 note is this: "The defect of my paper on probability was that it took partial belief as a psychological phenomenon to be defined and measured by a psychologist" (Mellor, 1990, 95).

233 I'm not quite sure when exactly Carnap first read Ramsey's paper. However, as Carnap reports to Hempel during a conversation regarding when Carnap first learned of "Ramsey sentences," most likely Carnap first read Ramsey's 1931 book (posthumously edited by R. B. Braithwaite), which contains the 1926 paper, while he was in Vienna or Prague (Carnap to Hempel, February 12, 1958, CH 11-03-39).
In the foreword to Ramsey's paper on probability, aside from acknowledging the importance of a frequentist concept of probability for the empirical sciences, Ramsey says he is interested in the theory of probability as "a branch of logic, the logic of partial belief and inconclusive argument" (1926, 53).234 What is relevant is not only that Ramsey considers his own logic of partial belief an empirical theory but that he explicitly rejects John Maynard Keynes's logic of partial belief based on the existence of probability relations (see Keynes 1921). Ramsey understands Keynes's theory as resting on two assumptions. First, probabilistic inferences, i.e., inferences from the full belief in a single proposition to a partial belief in another proposition, are objectively valid in the sense that "if another man in similar circumstances entertained a different degree of belief, he would be wrong in doing so" (56). Second, for any pair of propositions, "there holds one and only one relation of a certain sort called probability relations; and that if in any given case, the relation is that of degree a, from full belief in the premises, we should, if we were rational, proceed to a belief of degree a in the conclusion" (56). More explicitly, according to Ramsey, it is for Keynes our perceptual knowledge of probability relations (of which we are aware via something like introspection), along with the assumption that one has direct knowledge of the premises in an argument, that grounds a tight justificatory relationship between (i) the probability relations defined over propositional premise/conclusion pairs and (ii) one's degrees of partial belief in the conclusions. Ramsey rejects Keynes's theory by dropping Keynes's second assumption.
Instead of assuming that probability relations exist, Ramsey begins with the empirical assumption that, whatever partial beliefs are, they are measurable; given this assumption, the aim is "to develop a purely psychological method of measuring belief" (62). Indeed, analogous to Einstein's conventional measurement of light, Ramsey invokes a number of conventions for measuring partial beliefs and makes the following idealizations, or what he calls "fictions": first, that beliefs are to be assigned certain magnitudes – as degrees of belief – ordered on a scale between '1' and '0' and, second, that not only can beliefs be compared but that degrees of belief are additive (64-5). Lastly, Ramsey makes the empirical assumption that degrees of belief are causal properties of beliefs "which we can express vaguely as the extent to which we are prepared to act" on those beliefs (65). Taken together, Ramsey tells us, these measuring conventions, "fictions" and the causality assumption allow for the "measurement of belief qua basis of action" (67). Again with an analogy to physics, namely the concept of the electric charge of a particle in an electromagnetic field, Ramsey explains that degrees of belief are to be understood as dispositions measuring the response of changes of belief to hypothetical actions, i.e., beliefs "which I rarely think of, but which would guide my action in any case to which it was relevant" (68). Ramsey then makes the following psychological assumption, despite claiming that it is, like Newtonian mechanics, literally false, viz. "we act in the way we think most likely to realize the objects of our desires, so that a person's actions are completely determined by his desires and opinions" (my emphasis, 69). With this general empirical framework in the background, Ramsey makes two more assumptions.

234 All citations to Ramsey's papers use the pagination from Mellor (1990).
The first assumption is that the opinions of a person concerning their desires and aversions about all propositions can be captured by a numerically measurable and additive relation (69-70). The second assumption is the "law of psychology" – that agents maximize the amount of 'good' over all possible consequences using mathematical expectation (70-72). Finally, by introducing the concept of "ethically neutral" propositions – propositions about whose truth, relative to two alternative possibilities, an agent is indifferent, and which are always assigned a degree of belief '1/2' – Ramsey outlines what we would now call a "representation theorem" for a number of axioms he provides (not reproduced here): provided an empirical agent satisfying the above theoretical considerations can make an infinite number of hypothetical, conditional bets on indifferent (but not necessarily ethically neutral) propositions, the agent's (conditional) degrees of belief are measured as their odds (73-76).235 Finally, although Ramsey provides no proof for the result, he claims that insofar as a person's system of partial beliefs is "consistent," their degrees of belief must obey the probability axioms: if their degrees of belief do not obey the axioms, then they would find themselves in violation of the "law of preferences between opinions" and thus such a person "could have a book made against him by a cunning bettor and would then stand to lose in any event" (78). The laws of probability, argues Ramsey, are also laws of consistency (78). All of this is, for the most part, the received view of Ramsey's decision theory.

235 If two persons 1 and 2 agree to bet on the proposition H the sums A and B, then A + B is the stakes,
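Ramsey's "cunning bettor" claim can be checked with a toy calculation (my illustration; the numbers are invented). Treat each degree of belief as the price, per unit stake, at which the agent will buy a bet. If the agent's degrees of belief in H and in not-H sum to more than 1, the bettor sells the agent both bets and profits however H turns out:

```python
def net_gain(h_true: bool, price_h: float, price_not_h: float, stake: float = 1.0) -> float:
    """Agent's net gain after buying a bet paying `stake` if H (priced at
    price_h * stake) and a bet paying `stake` if not-H (priced at
    price_not_h * stake), where prices mirror the agent's degrees of belief."""
    cost = (price_h + price_not_h) * stake
    win_h = stake if h_true else 0.0       # payoff of the bet on H
    win_not_h = 0.0 if h_true else stake   # payoff of the bet on not-H
    return win_h + win_not_h - cost

# Incoherent credences P(H) = P(not-H) = 0.6 lose 0.2 whether H is true or false;
# coherent credences (0.6 and 0.4, summing to 1) break even in either case.
for h in (True, False):
    print(net_gain(h, 0.6, 0.6), net_gain(h, 0.6, 0.4))
```

The same arithmetic shows that credences summing to exactly 1 leave the agent immune to such a book at these stakes, which is the sense of Ramsey's remark that the laws of probability are laws of consistency.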
Less often remarked, however, is that Ramsey then goes on to connect his empirical logic of partial belief with a pragmatic notion of choosing "reasonable" opinions, i.e., those opinions which lead to inductive success. Framed within the context of C. S. Peirce's distinction between a logic which is "explicative, analytic, or deductive" and one which is "ampliative, synthetic, or (loosely speaking) inductive" (82), the question for Ramsey is to which category his own logic of partial belief belongs. Moving at lightning speed through his paper, Ramsey, following Peirce, draws on a certain symmetry between a subjective and an objective interpretation of his logic of consistent partial belief to suggest that "reasonable" opinions are those which lead to successful inductive habits:

This is a form of pragmatism: we judge mental habits by whether they work, i.e. whether the opinions they lead to are for the most part true, or more often true than those which alternative habits would lead to. [. . . ] All philosophy can do is to analyze it, determine the degree of utility, and find on what circumstances of nature this depends. An indispensable means for investigating these problems is induction itself, without which we should be helpless. In this circle lies nothing vicious. (93-4)

It is important to notice that Ramsey's pragmatism blends together exactly those subjective and objective interpretations of a logic which Carnap diagnoses as psychologism. The crucial shift in Carnap's own thinking when reading Ramsey in the early 1940s stems from a single sentence in Ramsey's 1929 note: it was a mistake, Ramsey there claims, to treat his logic of partial belief as an empirical theory. Now Carnap can interpret Ramsey's logic of partial belief as a theory of how to measure degrees of belief for idealized agents in hypothetical circumstances.
Consequently, whether or not an inductive method is "reasonable" for an idealized agent is independent of how empirically successful that method is for actual agents. Even though Ramsey's psychologized terms sound as if they refer to empirical entities, they can always be given a purely logical, objective interpretation.236

235 (cont.) P = B/(A + B) is 1's betting ratio for H, (1 − P) = A/(A + B) is 2's betting ratio for H, and the odds on H are P/(1 − P) = B/A.

5.3 Feigl, Reichenbach and Justifying Induction Pragmatically

The question of whether the notion of adequacy for inductive methods must be defined in pragmatic terms of empirical success was not an entirely alien idea to Carnap in the 1940s and 1950s, as this idea was central to the work on probability and induction by Carnap's fellow logical empiricist travelers, namely Herbert Feigl and Hans Reichenbach.237 It is a standard part of the received story of the development of logical empiricism that after engaging in the protocol sentence debate with Otto Neurath, Carnap, in the early 1930s, endorsed a liberalization of Wissenschaftslogik and along with it a rejection of the earlier verificationist theory of meaning. Specifically, in Carnap (1936a;b; 1937a), Carnap replaces the principle of verification with a principle of testability based on a pragmatic conception of confirmation, or "Bewährung."238 Moreover, Carnap, himself late to the probability game, could draw on the mature work on probability and induction by Reichenbach and Feigl, which dates back to the 1910s and 1920s, respectively. However, both Reichenbach's and Feigl's solutions to Hume's problem of induction have a Kantian flavor: in order to explain the inductive knowledge required to make decisions about how to act in the physical world, a certain inductive principle is a necessary condition for solving Hume's problem of induction.
Moreover, not only did Reichenbach and Feigl both argue that only a frequentist interpretation of probability could be made consistent with any commitment to a principle of empiricism, but, especially in Reichenbach's case, they held that only a thoroughly probabilistic conception of knowledge could explain the explanatory and predictive success of science.239 When Carnap first started to work on inductive logic in the mid-1940s, he was forced to respond to Feigl's and Reichenbach's pragmatic justification of induction. In this section, we are concerned with explaining this pragmatic justification of induction and Carnap's response to it.

236 See pages 45-7 of LFP for Carnap's discussion of Ramsey's 1929 note.

237 Ideally, I would include a section (or even a chapter) on Popper's views on probability from the 1930s and his various disagreements (and misunderstandings) with Carnap in the 1950s and 1960s about the nature of probability and induction. Unfortunately, such additions would make the dissertation (which is already quite long) far too prolix. In future work I plan on discussing not only how Popper criticizes Carnap's work on inductive logic but also the criticisms of one of Popper's colleagues at the London School of Economics in the 1960s, viz. Imre Lakatos.

238 Where "pragmatic" here means that the concept of confirmation is defined relative to a particular person at a particular time; see the preface to the 1962 version of LFP.

239 This idea that logical probabilities can't be a guide to life still has much traction; see Salmon (1967; 1988) and Hájek (2012).

Feigl's Pragmatic Justification, or Vindication, of Induction

Before we discuss Reichenbach's more nuanced views, it will be useful to turn quickly to Feigl's work for a more bare-bones statement of the pragmatic justification of induction.
In 1930, Herbert Feigl asks how Waismann's new logical concept of probability – a concept defined within propositional logic – could possibly be applied to reality, "i.e., to the statistical states of affairs as they are given in experience" (1930, 108).240 For even with the most powerful of mathematical results, like the (strong) law of large numbers, Feigl warns that "we must beware of a crude fallacy [groben Denkfehler ] in reasoning that frequently occurs; it consists in trying to draw conclusions about the behavior of reality from purely mathematical deductions" (109). Whatever mathematical claim one makes about whether the limit of an observed relative frequency will converge or not, Feigl's point is that it is always an empirical claim whether that limit actually does, in fact, exist. However, typically there is no empirical guarantee that the limit exists at all. But if this is the case, asks Feigl, in what sense are the laws found in statistical mechanics or the then new science of quantum mechanics actually laws? Citing both Zilsel and Reichenbach (e.g. see Zilsel, 1916), Feigl makes the observation that from the standpoint of human cognition it seems like we do make successful inductions and that such inductions seem to require that the world is not completely lawless. Nevertheless, Feigl acknowledges that there is no straightforward justification for this parsimony assumption; instead, as with all inductions, "we are not dealing with a well-founded [begründbar ] procedure for reaching conclusions, but with a practical activity [ein praktisches Tun], with a decision" (114). Such inductions, says Feigl, "merely [express] our hope that nature will remain knowable in the future in the same way that it has been up to now" (114-115). In 1934 Feigl clarifies his earlier view by arguing that in order for induction to be a guide for our scientific behaviors and predictions, inductions need to be defined in terms of a frequentist meaning of probability. 
However, because we cannot attach probabilities to the principle of induction itself without begging the question that some assignments of probabilities are, in fact, justified, Feigl claims that the principle of induction is not, to use a Kantian distinction, a categorical but a hypothetical generalization (1934, 157).241 Specifically, drawing an analogy with deductive logic, Feigl argues that just as one must assume that some principle of deductive logic is "self-evident" in order to make logical claims, the principle of induction

is not a bit of knowledge, it is neither analytic or synthetic, neither a priori nor a posteriori, it is not a proposition at all. It is, rather, the principle of a procedure, a regulative maxim, an operational rule. (158; emphasis in original)

More specifically, this principle is a "guide" insofar as it is "a consequence of the ultimate goal of science"; namely, the goal of maximizing the order, either causal or statistical, of our descriptions of the empirical world while simultaneously minimizing arbitrariness (159). We must presuppose that the world is simple, or well-ordered in some respect, if induction is to be possible at all. Lastly, in Feigl (1950), Feigl attempts to clarify his earlier views by drawing a distinction between two kinds of justification: validation and vindication. According to Feigl, a rule P is validated if the reasons given for accepting P are consistent with the principles, norms or rules constituting the actual reasoning (which may be implicit or explicit) currently in use. There is no pretense that such justifications are non-circular. Indeed, disagreement about whether a statement is validated is, ultimately, a disagreement about which basic norms or rules the separate parties accept.

240 All pagination when citing Feigl's publications is to his collected works, Cohen (1981).
Justification by vindication, on the other hand, is similar to the kind of pragmatic justification Feigl articulated in the 1930s. Simplifying Feigl's own exposition a bit, the vindication of some rule P comes in two steps. First, one must agree by stipulation that certain principles, norms or rules will be used to measure the validity of empirical claims and that the aim is to satisfy some goal, e.g., making successful inductive predictions (261). Second, P is then vindicated if it can be shown that P will satisfy that goal relative to some set of standards S, i.e., for some end E and standards S, the rule P is vindicated (or, better: E/S-vindicated) if we can show that P will help get us E (even if P isn't sufficient for E) given that S is true. For example, Feigl's inductive principle is vindicated in the sense that, assuming the world is not too complex or irregular (= S), that principle will eventually lead us to make successful predictions (= E). It is in this sense that Feigl's principle of induction is vindicated. Moreover, vindication succeeds once both (i) we are in agreement that some end must be reached (E) and (ii) there is agreement on a means for obtaining that end (P given S).

241 Also see Dubs and Feigl (1934).
Reichenbach's Probabilistic Epistemology

Although both Ramsey's pragmatism and Feigl's pragmatic justification of induction are no doubt important for situating Carnap's own attempt to find room for a logical conception of probability within an empiricist-friendly vision of the foundations of science, it was Reichenbach's probabilistic epistemology in his 1938 Experience and Prediction – along with Reichenbach's notion of "weights," in particular those weights, treated as voluntary "posits" or "wagers," which assign a relative frequency to propositions describing events which have not yet happened – that provided Carnap with the most complete, empiricist-friendly, probabilistic conception of scientific knowledge available in the 1940s. Indeed, in 1950, Carnap says of Reichenbach's view:

It seems to me that it would be more in accord with Reichenbach's own analysis if his concept of weight were identified instead with the estimate of relative frequency. If Reichenbach's theory is modified in this one respect, our conceptions would agree in all fundamental points. (LFP 176; emphasis in original)

If Carnap's and Reichenbach's views are so similar, what is philosophically at stake if Reichenbach's "weights" are interpreted as Carnap's estimates? As we saw in the last chapter, for Carnap, estimations of relative frequencies can be interpreted as estimation functions based on confirmation functions. Thus Carnap's passage can be read as follows: if Reichenbach would only define his notion of "weight" (discussed below) in terms of a logical concept of probability, the intellectual differences between the two on issues of probability and induction would be negligible. From Reichenbach's perspective, however, only probabilities about future happenings in the world can be used to guide scientific decision making: logical probabilities, being entirely analytic, have no stake in the future.
In his 1938 book, Reichenbach is concerned with providing a theory of knowledge as a sociological phenomenon, one which includes instances of both knowing-how and knowing-that (1938, 3). Indeed, it is only with this wider conception of knowledge that Reichenbach can study how the actions of scientists will lead to different consequences in the world, and only on the basis of those consequences can a probabilistic conception of scientific knowledge be used to help guide decisions. More specifically, Reichenbach distinguishes between three tasks of epistemology: a descriptive, a critical and an advisory task. The first, descriptive task is that "of giving a description of knowledge as it really is" (3). Prompted by the notion of "rational reconstruction" from Carnap's Aufbau, epistemology, for Reichenbach, is concerned with the construction of "thinking process[es] in a way in which they ought to occur if they are to be ranged in a consistent system" rather than with the psychology of actual thinking processes. It is here that Reichenbach coins a distinction familiar to many philosophers of science: epistemology is concerned with the context of justification rather than the context of discovery (5-7).242 Thus the descriptive task is the construction of the internal logic, the justification, of thinking processes, which is "bound to actual thinking by the postulate of correspondence" (6). The second, critical task of epistemology involves showing that a reconstruction of a thinking process is "valid" and "reliable" – according to Reichenbach, this task usually goes by another name: the "analysis of science," or "the logic of science" (8). What matters to us, however, is Reichenbach's third, advisory task of epistemology. Like Carnap, Reichenbach is well aware of the work by Duhem and Poincaré concerning the role that conventions play in the foundations of science.
On the one hand, according to Reichenbach, there are conventional decisions which result in a number of alternative but equivalent "conceptions," e.g., metrical conventions required for the measurement of time, such that the "content" of the scientific system under these different conceptions remains invariant (9). Other decisions in science (e.g., "decisions concerning the aim of science"), however, do alter the content of alternative conceptions of a system. Reichenbach calls those decisions leading to such non-equivalent, or divergent, scientific systems or conceptions "volitional bifurcations" (9-10; 146; also see §23). There are two examples of such decisions in the 1938 book. The first, which I will not discuss in any detail here, concerns decisions about which theory of meaning to adopt; namely, a Positivist theory of meaning based on a principle of verifiability or a probabilistic theory of meaning.243

242 I leave aside the issue of whether Reichenbach properly characterizes Carnap's project in the Aufbau; see Richardson (1998). For more on the similarities between Reichenbach and Carnap regarding their shared commitment to empiricism and "voluntarism," see Richardson (2000; 2005; 2011). For more on the context of justification/discovery distinction, see Schickore (2014) and the articles in Schickore and Steinle (2006).

243 More specifically, the majority of the book is concerned with spelling out the consequences of making two different volitional bifurcations: viz. whether to adopt a Positivist verificationist theory of meaning based on a biconditional relation or a probabilistic theory of meaning based on a probabilistic relation, where both relations are defined over "sensations" and logically constructed objects.
The Positivist picture of knowledge is replaced with a less idealized, probabilistic picture of the world: starting with a particular choice for how to logically reconstruct one's impressions (see chapters III and IV), Reichenbach argues that the abstract objects in the world around us, both small and large, can be logically constructed using probabilistic relations, i.e., as "reducible" and "projective" complexes (215; also see §§23-25; Galavotti, 2011a; Sober, 2011). The most basic propositions in such a picture are assigned their weights as basic posits, or wagers. The second kind of volitional decisions are "blind posits" or "wagers" concerning how to assign weights to propositions about future events (see below). Moreover, these volitional bifurcations can be nested, yielding what Reichenbach calls "entailed" decisions in the sense that: The system of knowledge is interconnected in such a way that some decisions are bound together; one decision, then, involves another, and, though we are free in choosing the first one, we are no longer free with respect to those following. (1938, 13) The advisory task of epistemology, however, is to suggest proposals for decisions; for example, proposals that may lead to more or less desirable divergent scientific systems (13). Moreover, this advisory role can always be collapsed into the critical task: we renounce making a proposal but instead construe a list of possible decisions, each one accompanied by its entailed decisions. So we leave the choice to our reader after showing him all factual connections to which he is bound. It is a kind of logical signpost which we erect; for each path we give its direction together with all connected directions and leave the decision as to his route to the wanderer in the forest of knowledge (14).
In a way, this mapping of the consequences of volitional bifurcations resembles the means-end reasoning model of engineering we saw in chapter 3: once we lay out the consequences, all the scientist has to do is provide their preferences for each divergent system and then, for example, maximize their expected utility to find their "optimal" scientific system. Once the scientific philosopher fully maps out all the possible bifurcations and their consequences, it is up to the scientist – as a wanderer in the forest of knowledge – to investigate which paths, each marked off by a different sequence of logical "signposts," best satisfy their own aims and preferences. The frequentist conception of probability plays a pivotal role in Reichenbach's conception of epistemology. For Reichenbach, an agent can only base their decisions about how to act in the future on empirical statements about the future, and probabilities can only be assigned to those statements on the basis of weights, i.e., as "a quantity in continuous scale running from the utmost uncertainty through intermediate degrees of reliability to the highest certainty" (23). The probability of a set of propositions, then, is simply a measure of those weights assigned to each proposition. However, unlike with Waismann's logical concept of probability – for which the crucial technical device is the notion of partial implication for two sentences, defined with respect to the ranges244 of those sentences – probabilities, for Reichenbach, are always measures defined as the quotient of the number of particular events from a class of events relative to another class of events (or rather, the narrowest classes for which we have reliable statistics, 316-17), i.e., probabilities are always relative frequencies (see §§34, 35). However, for Reichenbach the crucial philosophical problem has to do with how one should assign numerical probability values to propositions describing single events or the behavior of relative frequencies in the future, i.e., classes of events for which one has a limited, or no, reference class. Reichenbach's solution is that one assigns numerical weights representing the "predictive value" of these basic propositions by treating these weights as Setzungen, a German word which, Reichenbach tells us, has connotations of both posits and wagers (314). Indeed, this problem gets at the root of Reichenbach's philosophical project. Although Reichenbach does not endorse the American pragmatists wholeheartedly, he acknowledges that what they got right is that a theory of meaning has to be utilizable, i.e., it must be able to inform our future actions, especially as a means to inform our predictive practices in the sciences (see 69; 73-5). However, only a completely depsychologized description of thinking processes in terms of Reichenbach's probabilistic theory of meaning can properly serve as a logic for scientific thinking, especially as a logical framework for predicting the consequences of our future actions. 243 (continued) ... and then the rest of the "world", so to speak, is probabilistically constructed from these impressions using Reichenbach's probability logic; e.g., see Reichenbach (1930; 1932; 1935; 1949). Consequently, knowledge, for Reichenbach, is not only inherently probabilistic but no proposition about the world can ever be said to be known with complete certainty – verifiability is nothing more than a useful fiction (188). Indeed, Reichenbach later suggests that it is the frequentist concept of probability itself which constitutes the nerve of the system of knowledge.
As long as this was not recognized – and logicians were particularly blind in this respect – the logical structure of the world was misunderstood and misinterpreted; an error which led to distorted epistemological constructions neither suiting the actual procedure of science nor satisfying the desire to understand knowledge. The concept of probability frees us from these difficulties, being the very instrument of empirical knowledge. (1938, 293; my emphasis) But with probability as the instrument of empirical knowledge, Hume's problem of induction is front and center as the immediate roadblock in our path to "satisfy" our desire to understand the structure of knowledge (73). Insofar as our practice of appealing to probabilistic judgments to inform our future behavior can be justified, those judgments must cohere with a necessary condition, viz. Reichenbach's principle of induction, required for any solution to that "insurmountable barrier" which is Hume's problem of induction (343). 244 In the sense of Johannes von Kries's notion of a Spielraum, or a basic space of events, in von Kries (1886); for more detail, see Heidelberger (2001). Reichenbach's principle of induction states that for any sequence of events where the first n events have already been observed and m of those first n events are "successes," letting h_n denote the relative frequency m/n, there exists some small real number ε such that for any s > n, h_n − ε ≤ h_s ≤ h_n + ε (340). Although no solution to Hume's problem, this principle states that, whatever happens to the relative frequency m/n as n becomes arbitrarily large, the ratio must remain within a fixed interval.
More specifically, this principle is a necessary condition for a certain reinterpretation of Hume's "vague" problem of induction as another, more tractable, problem; namely, the problem "to find series of events whose frequencies of occurrence converges toward a limit" (350). Along with the introduction of a new piece of terminology – Reichenbach calls "predictable" those worlds in which the limit of the relative frequency m/n exists (350-1) – the above principle of induction, argues Reichenbach, is a necessary condition for solving this more exact problem of induction, provided we assume that our world is, in fact, predictable (351). More specifically, for each value of n, we assume at that instant that the relative frequency m/n provides our best estimate of the limit of this sequence: this itself is a posit – if the limit exists at all, eventually we will find some constant which fixes an interval around the actual limit which will always bound the relative frequency as it converges to this limit (351). It is in this sense that the principle provides a necessary condition for induction, granted that our world is predictable. Returning to the subject of the relationship between posits and wagers: according to Reichenbach, we put forward the claim that the weight of an as yet unobserved relative frequency is equal to the value m/n as a "blind posit" even though we have no idea how reliable that posit is (353).
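Reichenbach's procedure of making and correcting blind posits can be illustrated with a small simulation. This is my own sketch, under an assumption Reichenbach himself does not need: that the events form a Bernoulli sequence with a known limiting frequency. At each stage the blind posit is simply the observed relative frequency m/n, and if the limit exists, the continually corrected posits eventually remain within any fixed ε of it:

```python
import random

def straight_rule_posits(events):
    """Blind posits h_n = m/n after each of the first n observed events."""
    posits, m = [], 0
    for n, success in enumerate(events, start=1):
        m += success          # m: number of "successes" among the first n events
        posits.append(m / n)  # posit the observed relative frequency as the limit
    return posits

# Hypothetical Bernoulli sequence whose limiting frequency is 0.3.
random.seed(0)
events = [random.random() < 0.3 for _ in range(5000)]
posits = straight_rule_posits(events)

# Correcting posits as evidence accumulates: later posits crowd around the limit.
print(posits[9], posits[99], posits[4999])
```

On this picture nothing guarantees that any single posit is reliable; the claim is only that if a limit of the frequency exists, this self-correcting procedure must in time find it.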
However, the values of our blind posits are continually replaced with observed relative frequencies; Reichenbach calls this the method of anticipation: "we know that the method of making and correcting such posits must in time lead to success, in case there is a limit of the frequency" (353).245 Details aside, what matters for us is that Reichenbach's solution to Hume's problem of induction is that, in making a series of wagers with nature that we live in a predictable world, we show that "the applicability of the inductive principle is a necessary condition of the existence of a limit of the frequency" (356). Indeed, according to Reichenbach, this is the best we can hope for: Hume demanded too much when he wanted for a justification of the inductive inference a proof that its conclusion is true. What his objections demonstrate is only that such a proof cannot be given. We do not perform, however, an inductive inference with the pretension of obtaining a true statement. What we obtain is a wager; and it is the best wager we can lay because it corresponds to a procedure the applicability of which is the necessary condition of the possibility of predictions. To fulfil the conditions sufficient for the attainment of true predictions does not lie in our power; let us be glad that we are able to fulfil at least the conditions necessary for the realization of this intrinsic aim of science. (1938, 356-7) 245 For more on how this replacement process works, see §§41-42, especially pp. 354-6. In conclusion, putting aside the technical details of how new posits are created to form "appraised" posits using "concatenated" inductions and "cross" inductions, the point is that, for Reichenbach, each new blind posit is, in a sense, a bifurcating volition: each new posit, advised by the principle of probability, leads us to behave as if our world is an ordered, predictable world, until shown otherwise.246

Carnap's Criticism

The crucial disagreement between Reichenbach and Feigl, on the one hand, and Carnap, on the other, concerns the question of how any piece of mathematics could possibly be used to help guide decision making. However, the disagreement is not over whether the probability calculus should, in general, be given a logical or frequentist meaning; instead, the central question is whether only probabilities defined in terms of relative frequencies can be a guide for decision making. Indeed, this point is at the heart of Reichenbach's epistemology. The problem with the older Positivism of Mach and Pearson is that it could not explain why probabilistic claims about the future are meaningful. Reichenbach's point, however, is that we need to assign some meaning to these claims if we are to base our decisions about how to act on the empirical information afforded to us by our current scientific knowledge. Hence the importance of Reichenbach's notion of weight: it allows us to assign meaningful probabilities to claims about the future, but whether or not the actual numerical values assigned to these weights are correct is not an empirical matter. Rather, these assignments are freely chosen by us, as one cannot reasonably assign relative frequencies to events that only happen once.
These are purely volitional decisions about whether to adopt empirical statements, that is, decisions about which inferences or wagers about the future we wish to take for granted. These are exactly the sort of empirical statements required to make sense of the inductive nature of scientific knowledge. Carnap's project differs from Reichenbach's for at least two reasons. The first reason is that, although both Carnap and Reichenbach articulate a voluntaristic conception of scientific knowledge, they differ, at face value at any rate, as to what is being freely chosen. On the one hand, for Reichenbach, we choose what wagers or posits to make about future happenings: we assign weights to empirical statements about what will happen in the world. On the other hand, Carnap would reformulate Reichenbach's notion of weight in terms of estimation functions; specifically, he would, first, construct an estimation function based on a confirmation function using the semantic resources of a logical system defined over the "empirical" sentences expressible in the object language and, second, he would give this logical system an interpretation. For Carnap the choice concerns a logical framework – a choice about how to analytically assign logical probability values to logical sentences – while, for Reichenbach, one chooses the empirical statements in which to place one's wagers, one's faith. 246 At least, this is the implication I gather from claims like this: "We found that the posits of the higher level are always blind posits; thus the system of knowledge, as a whole, is a blind posit. Posits of the lower levels have appraised weights; but their serviceableness depends on the unknown weights of the posits of higher levels. The uncertainty of knowledge as a whole therefore penetrates to the simplest posits we can make – those concerning the events of daily life" (1938, 401). See Reichenbach (1938, 400-404).
For Carnap, this is a confusion of truth with probability – especially given that one of the central themes of Reichenbach's book is the replacement of the "truth theory of meaning" based on verification with a theory of meaning based on probability.247 What distinguishes concepts like confirmation from Carnap's preferred semantic concept of truth, according to Carnap, is that while the former concepts "refer to given evidence" the latter does not: While it is true that to the multiplicity of [logical probability] values in inductive logic only a dichotomy corresponds in deductive logic, nevertheless this dichotomy is not between truth and falsity of a sentence but between L-implication and non-L-implication for two sentences. (LFP 177) The second, crucial reason is that Carnap argues that it is precisely because estimates of physical quantities are analytic statements that they can be used as a guide for decision making. More to the point, Carnap disagrees with Reichenbach and Feigl that only a frequentist conception of probability can be used as a guide in life. Expectations based on a frequentist conception of probability can be falsified – they do not provide any rationale for guiding actions but instead state an empirical hypothesis about the future. As Carnap puts the point in Carnap (1947b): Every decision is based on expectations. To find a rational basis for decisions we must have a rational method for obtaining expectations, and, in particular, estimations. Methods of this kind are used in the customary procedures of inductive thinking, both in everyday life and in science. These customary procedures contain implicitly the concept of degree of confirmation. To make this concept and thereby the procedures based upon it explicit is the task of inductive logic.[footnote deleted] In thus helping to provide a clarified rational basis for decisions, inductive logic can serve as a tool not only for theoretical but for practical purposes. (Carnap, 1947b, 147–8; emphasis in original) 247 As Reichenbach concludes: "The concept of truth appears as an idealization of a weight of high degree, and the concept of meaning is the quality of being accessible to the determination of a weight" (1938, 190-1). For Carnap on the difference between truth and probability, see Carnap (1936a;b; 1937a; 1947a). According to Carnap, all that is required to guide decisions is a logical instrument with which one can systematize and make consistent one's own inductive thinking. Indeed, Carnap's central argument is that Reichenbach's notion of "weight" should be understood in terms of the logical and not the frequentist concept of probability. First, Carnap notes not only that Reichenbach, in §32 of the 1938 book, actually describes the concept of weight as the "logical concept of probability," but also that Reichenbach defines the concept of weight in terms of a "predictional value" relative to "the state of our knowledge" (LFP 175). Second, as Reichenbach already defines the concept of weight in terms of betting quotients, Carnap points out that logical probabilities can also be interpreted in terms of betting quotients (176; also see 237-8). The crux of the conflict between Feigl and Reichenbach, on one side, and Carnap, on the other, is that once we dispense with a verificationist theory of meaning it is by no means clear what roles should be assigned to probabilistic and statistical reasoning in the sciences.248 For Feigl and Reichenbach, only a frequentist concept of probability can make sense of how probabilities can be used as a guide in life: if one grants the necessary condition that the world is well-ordered, or predictable, then probabilities can be used to guide our decisions and actions. Carnap turns this pragmatic justification for induction on its head.
Carnap is a pluralist about the logical and frequentist meanings of probability: each is in its own right a legitimate explicandum.249 However, while the frequentist concept of probability can be defined in the object language itself (e.g., as P-rules), logical probabilities are analytic – they can be explicated as semantic concepts defined over the factual sentences of an object language. Moreover, as empirical claims about the future are always uncertain, according to Carnap, only logical statements about the estimations of relative frequencies can systematize one's inductive reasoning and so provide a guide, a "clarified rational basis," for decision making.250,251 248 However, along with Richard von Mises, Reichenbach was part of the Berlin circle and never adopted the more extreme versions of the verification principle. Also, although I cannot comment on it here, there is a certain contiguity between Reichenbach's 1916 dissertation, his 1920 monograph Relativitätstheorie und Erkenntnis apriori and the 1938 book; see Friedman (2001); Glymour and Eberhardt (2014); Padovani (2008; 2011). 249 Carnap (1945b, 518).

5.4 Inductive Logic, Expected Utility Theory and Decision Theory

Now that we have explained Ramsey's pragmatism, Feigl and Reichenbach's pragmatic justification of induction, and how Carnap need not equate inductive adequacy with past empirical success, we can discuss how Carnap understood empirical and normative decision theory. We first quickly examine the question of how Carnap's logical meaning of probability could possibly be applied to empirical decision theory. Second, as a way to segue between empirical and normative, or rational, decision theory, I discuss the influence of L. J. Savage's subjective expected utility theory (SEUT) on how Carnap understands the connections between decision theory and the foundations of statistics.
Applied Decision Theory as a Methodological Problem

Carnap's most sustained discussion of empirical decision theory in the 1950s, including descriptive economics dating back to Laplace and Daniel Bernoulli, can be found in sections 50 and 51 of LFP. There Carnap couches the relationship between empirical decision theory and inductive logic in terms of methodological rules: it is the result of a practical decision, for example, to adopt a methodological rule which states that rational agents are expected utility maximizers or that agents should obey Carnap's principle of total evidence. Strictly speaking, such rules do not belong to inductive logic at all; indeed, just as with deductive logic, inductive logic is "indifferent to our needs and purposes both in practical life and in theoretical work" (204; also see 253-4). Rather, such rules belong to the methodology of science, or rather, the "methodology of induction" (204). The situation, remarks Carnap, is similar to geometry. 250 See LFP 252. For lack of space, I cannot, unfortunately, present a more balanced view of Reichenbach's position; those interested should consult his last book on the subject, Reichenbach (1949). As Reichenbach writes to Carnap in 1949, "You say that my theory has to be supplemented by your concept. I have always suspected that you never understood my theory correctly, and this remark makes my suspicions highly probable"; and later in the same letter, referring to Carnap (1947b), Reichenbach says "Strangely enough, your paper [. . . ] makes no attempt at showing that your probability is a good guide for action" (Reichenbach to Carnap, December 6, 1949; HR 032-17-13). 251 It is important to note that Carnap is not here endorsing a subjective meaning of probability: in this way, Carnap's argument should seem slightly foreign to contemporary philosophers of probability.
One can study the theorems of axiomatic geometry without ever talking about their application in physics; however, unlike with geometry, there is at the present time not yet sufficient clarity and agreement even among the writers in the field concerning the nature of the theory [of inductive logic] and the connection between theory and practical application. Therefore today a book on inductive logic is compelled to devote a considerable part of its space to a discussion of methodological problems. (LFP 204-5) Carnap explicitly discusses empirical decision theory and traditional concepts of rational behavior from economics beginning with section 50 of LFP; Carnap later uses this discussion to investigate how a pure inductive logic can be applied to decision theory. There, Carnap considers the case of an investigator X who has already adopted an inductive logic and has encapsulated their total available evidence in a single statement, e. X's aim is then to formulate a rule which they can adopt to make rational decisions on the basis of logical probabilities. Note, however, that this is not, for Carnap, a problem of pure inductive logic – where by "rational" Carnap just means "to learn from experience and hence to take as evidence what he has observed" (253). (This is a low bar for rationality.) Problems like this one, Carnap tells us, have an empirical basis as they "belong to the methodology of a special branch of empirical science, the psychology of valuations as a part of the theory of human behavior [. . . ]" (254). For the rest of section 50 and on into section 51, Carnap considers a total of six possible candidates for a methodological rule with which to guide X's actions based on logical probabilities.
I will spare the reader the details, but in each case the reasoning pattern is the same: for each rule – where one (R1) is based on high probabilities, one (R2) on maximum probabilities, a third (R3) on the use of estimates, and the last two (R4 and R4*) are defined in terms of maximizing estimated gain – Carnap first gives an explicit definition of the rule and then discusses several examples in which X faces a particular knowledge situation. In each case, however, Carnap argues that the rule is somehow inadequate for the knowledge situation at hand. What is of note is that, at least with the last two rules, Carnap at first represents X's preferences in terms of whether they gain or lose something of value due to a bet (260). However, because X may be more or less willing to take a bet not only because of how much they expect to win or lose but also because of the risks involved with the bet (e.g., they may have bet their entire life savings), Carnap suggests we instead turn to the economic concept of utility. In particular, Carnap considers Daniel Bernoulli's law of diminishing marginal utility, a psychological law of utility which lets us account for X's unwillingness to take risks.252 Carnap eventually settles on the procedure of choosing that decision which maximizes X's estimated utility over a set of possible actions – rule R5 – as an adequate rule for informing our practical decisions (269). Moreover, as the estimation of utility requires the introduction of a logical concept of probability, Carnap's inductive logic can easily be put to use to help guide decisions relative, of course, to one's total evidence e.
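The reasoning behind a rule like R5 can be sketched as follows; the stakes, the probabilities, and the choice of a logarithmic utility function in Bernoulli's spirit are my own hypothetical illustration, not Carnap's. For each available action one computes the estimated utility relative to the probabilities licensed by the total evidence e, and chooses the action that maximizes it. Diminishing marginal utility then explains why X may rationally decline a bet with positive estimated monetary gain when their entire savings are at stake:

```python
import math

def estimated_utility(outcomes, probs, utility):
    """Estimate of utility for one action: sum over states of U(outcome) * P(state)."""
    return sum(utility(o) * p for o, p in zip(outcomes, probs))

def choose(actions, probs, utility):
    """A rule in the spirit of R5: pick the action maximizing estimated utility."""
    return max(actions, key=lambda a: estimated_utility(actions[a], probs, utility))

# Hypothetical knowledge situation: X's evidence licenses P(win) = 0.6, P(lose) = 0.4.
probs = [0.6, 0.4]
actions = {
    "take_bet": [2000, 1],     # double X's savings, or lose nearly everything
    "decline":  [1000, 1000],  # keep savings either way
}

# Linear utility (estimated monetary gain) recommends the bet...
print(choose(actions, probs, lambda x: x))  # take_bet (1200.4 vs 1000)
# ...while Bernoulli's diminishing marginal utility recommends declining it.
print(choose(actions, probs, math.log))     # decline (about 4.56 vs 6.91)
```

The outcome of the losing state is set to 1 rather than 0 only to keep the logarithm finite; the point survives any small positive residue.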
Carnap is also fully aware of the methodological difficulties with utility theory, including the problem of how to actually measure utilities from the behavior of actual persons.253 Indeed, Carnap even points us to recent work in economics and the recent developments in game theory on this problem, including work by Ragnar Frisch, Oskar Lange, Harold T. Davis, Paul A. Samuelson and especially the second, 1947, edition of John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior (which includes an added section on utility theory) (LFP 268). Yet Carnap is quick to note that all of these authors adopt, even if implicitly, a frequentist interpretation of probability (268).254 Carnap then goes on to argue that without inductive logic, the "domain of application" of these empirical theories is too narrow, as we lack the knowledge required to provide the values of those relative frequencies required by these theories (269). Reminiscent of his attempt to intervene in the foundations of statistics (see chapter 4), Carnap then argues that we could provide estimates of these relative frequencies if only economists would adopt a logical concept of probability (see §51C). 252 For more details, see §51B, where Carnap discusses Daniel Bernoulli's work, including the law of marginal utility, in more detail. Interestingly, Carnap not only compares Daniel Bernoulli's law with the Weber-Fechner law from psychophysics but also discusses Bernoulli's law in great detail with regard to work by his colleague from Vienna, Karl Menger (LFP 272-273). 253 A problem which Carnap suggests can be eased by assuming that what is of interest is "rational behavior," LFP 267-8. Also see Heukelom (2014), p. 22, for a similar use of "rational behavior" by von Neumann and Morgenstern. Also, Carnap and von Neumann share a common intellectual thread: Hilbert's work on axiomatic systems; see Leonard (2010). 254 Carnap, however, was surprised to learn from Kenneth Arrow that von Neumann advocated an entirely subjectivist meaning of probability; Arrow to Carnap, August 1, 1949, RC 084-04-01.

From Wald's Statistical Decisions to Savage's Normative Decision Theory

I suggest we view Carnap's passages here within a wider historical context, a context that will help us understand how Carnap would eventually come to draw a distinction in his 1962 paper between empirical and normative decision theory. Specifically, this is a distinction Carnap adopts in his later work on applied inductive logic in tandem with one of the central documents associated with the rise of Bayesianism in the twentieth century: Leonard J. Savage's 1954 book The Foundations of Statistics.255 As discussed in detail by Giocoli (2013), Savage's book can be split into two parts. In the first part, Savage continues in the tradition of Ramsey and de Finetti by providing a representation theorem for an expected utility theory based on a subjective, or personalist, concept of probability.
Namely, provided that a person X chooses the action which maximizes their subjective expected utility based on their preferences for each possible consequence of each action available to them with respect to the relevant possible states of the world, Savage shows that if their preferences satisfy a certain number of axioms (including Savage's "sure-thing" principle), there exists a utility function U, unique up to a linear transformation, and a subjective probability function P allowing X to explicitly calculate their subjective expected utilities for each action available to them.256 Moreover, in contrast to von Neumann and Morgenstern's expected utility theory, Savage explicitly provides both a normative and an empirical interpretation of his axioms: if the axioms are interpreted empirically, the revealed U and P reflect the subjective opinions, or credences, and utilities of actual persons, and if the axioms are interpreted normatively, U and P reflect the subjective opinions and utilities of idealized, "logically consistent" agents.257 Indeed, according to one recent historian of economics, Savage introduces this empirical/normative distinction in response to empirical, questionnaire-based data collected by the French economist Maurice Allais, who claimed that humans routinely violated Savage's axioms (published as Allais 1953):258 255 I cite from the second, 1972 edition, which only adds footnotes but leaves the original text unadulterated. 256 The idea is that if ≺ is a partial preference ordering over actions A_i, i = 1, …, n, where the possible states of the world are S_j, j = 1, …, m, and the possible outcomes of each action-state pair are represented by an n×m matrix with elements O_{i,j}, then one can show that A_x ≺ A_y iff Σ_j U(O_{x,j})·P(S_j) ≤ Σ_j U(O_{y,j})·P(S_j), where U and P are shown to be unique and ≺ satisfies the (S)EUT axioms. See Eells (1982); Skyrms (1975; 1984) and (Fishburn, 1981, §3.1).
257 For example, Savage says: "The principal value of logic, however, is in connection with its normative interpretation, that is, as a set of criteria by which to detect, with sufficient trouble, any inconsistencies there may be among our beliefs, and to derive from the beliefs we already hold such new ones as consistency demands" (1954, 20). 258 Allais did not originally criticize Savage's axioms per se but rather prior joint work by Savage and the economist Milton Friedman; see Heukelom (2014, 47-64). However, note that Carnap himself draws a distinction between normative and descriptive conceptions of logic in his discussion of psychologism in 1950 (LFP 46). Savage's distinction between a normative and an empirical interpretation of the axioms is hence best read as an attempt to save the axioms while at the same time accommodating the questionnaire results by relegating them to a different, empirical, domain. Indeed, one equally has to appreciate this clever move by Savage. It silenced Allais and contributed to the impact of Savage's book. In addition, however, Savage's normative-empirical distinction unintentionally produced a whole new development in the communities of mathematical psychology and behavioral decision research. (Heukelom, 2014, 60) The second half of Savage's book deals with what seems like another topic altogether: the idea that the fundamental unit of statistical reasoning is not statistical inference but statistical decision.
The central figure here is the mathematician and statistician Abraham Wald.259 Fortunately, the historian of economics Robert Leonard has already explained in detail Wald's intellectual development while Wald worked with Morgenstern in "Red" Vienna during the 1920s and 1930s, a narrative which includes, besides the members of the Wiener Kreis (especially Otto Neurath), supporting historical actors like Karl Menger, Friedrich Hayek and Ludwig von Mises.260 Importantly, it was during this time that Wald (who was a student of Karl Menger) became familiar with von Neumann's celebrated 1928 Minimax Theorem; viz., when two players are playing a two-person zero-sum game, neither player can do better than to choose those respective strategies by which player 1 maximizes their minimal payoff and player 2 minimizes the maximal payoff of player 1.261 The relevance of this background is that Wald, between 1939 and 1950, pioneered the notion of a statistical decision problem.262 In Wald (1939), statistical problems are reconstructed as problems about what decisions to make regarding an experiment, i.e., whether to accept or reject a hypothesis, on the basis of a calculation using a "loss" function and an a priori probability distribution, both of which are defined over the possible experimental decisions. The central idea is that the statistician should make that decision which minimizes their maximal loss. In the 1940s, Wald extended his formalism twice over (see Wald, 1945a;b; 1950): first, he considered sequences of decisions (e.g.
allowing for the analysis something like what are now called "stopping rules" in sequential analysis) and, second, he combined statistical decisions with von Neumann's minimax theorem: the experimenter can 259 Although a first-rate mathematician from Romania, labeled an "Ostjude," Wald had difficulties obtaining an academic position in Vienna after 1934 and barely managed to escape to the United States in 1938; see Leonard, 2010, 150-5, 173-80. 260 See chapters five through eight in Leonard (2010). 261 See von Neumann (1928), Tjeldsen (2001), Leondard (2010, 62-70) and Luce and Raiffa (1957; appendices two through six). 262 Tragically, Wald, along with his wife, died in 1950 in a plane crash while in route to India (Savage 1951, 66). 155 5.4. Inductive Logic, Expected Utility Theory and Decision Theory be said to be playing a two-person zero-sum "game against nature" where the experimenter will make that decision that minimizes their maximal loss while "Nature," so to speak, attempts to maximize the minimal loss of the experimenter.263 The relevance of Wald's project for Savage's book is that, as Giocoli emphasizes, Savage adopts Wald's statistical decision framework but replaces Wald's objective probability function with his own subjective probability function.264 In Savage (1951), for example, Savage revamps Wald's use of the minimax principle so as to minimize expected loss using one's subjective probability function and, in the 1954 book, attempts to generalize the minimax principle within his SEUT (see Giocoli, 2013, 86). 
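The minimax rule for a decision problem with finitely many decisions and states of nature can be sketched as follows. This is my own toy illustration, restricted to pure strategies and with made-up loss numbers, not an example taken from Wald or Savage (von Neumann's theorem itself concerns mixed strategies):

```python
# A toy "game against Nature": rows are the experimenter's decisions,
# columns are states of nature, entries are losses (hypothetical numbers).
loss = [
    [4, 0],   # decision d1: loss 4 in state s1, loss 0 in state s2
    [1, 3],   # decision d2
    [2, 2],   # decision d3
]

def minimax_decision(loss):
    """Return the index of the decision that minimizes the maximal loss."""
    worst = [max(row) for row in loss]   # worst-case loss of each decision
    return min(range(len(loss)), key=worst.__getitem__)

best = minimax_decision(loss)
print(best, max(loss[best]))   # decision d3 (index 2) has worst-case loss 2
```

Here the worst-case losses are 4, 3 and 2, so the minimax experimenter picks the third decision even though another decision could do better if Nature happened to be kind.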
The point of this is that the entire aim of Savage's 1954 book is to use Wald's framework of statistical decisions plus Savage's (normative) axioms for his SEUT to reconstruct traditional inferential statistics, including point estimation, interval estimation and statistical hypothesis testing, as all belonging to decision making under uncertainty.265 Although Savage's attempt to reconstruct traditional statistics ultimately fails,266 what is relevant to us is that, as Carnap collaborated with Savage in the early 1950s, Carnap himself was an important actor in Savage's attempt (which can be traced back to Neyman's notion of "inductive behavior" in 1938 and the concept of a loss function defined by William Gosset, a.k.a. "Student"267) to replace the central concept in the dominant research area of the "British-American" school led by R. A. Fisher, viz. the concept of a statistical inference, with the concept of a statistical decision. Indeed, this puts Carnap's so-called "Bayesianism" in a wider context within the social sciences than just another iteration of the probabilistic ideas from Waismann, Jeffreys, Ramsey and de Finetti.268 Thus, when Kemeny, Shimony and Carnap work through de Finetti's work on coherence (see the next section), they are doing so while the seeds of a "Behavioralist" revolution are being sown in theoretical statistics and economics (see Heukelom 2014 for the details).

263 See Giocoli (2013) for the details; in particular, Giocoli emphasizes the fact that Wald treated both his a priori probability distribution and his "game against Nature" metaphor as merely instrumentally useful tools.

264 Savage, who was trained as a mathematician and had previously been von Neumann's personal assistant at Princeton in 1941 (Heukelom, 2014, 29), worked together with Wald at Columbia University's Statistical Research Group during the Second World War (see Wallis 1980). Moreover, Savage had a nontrivial role in founding the University of Chicago's department of statistics in 1946 (see Stigler 2013 and, in general, Agresti and Meng 2013).

265 For an overview, see sections 13.6 through 13.10 of Luce and Raiffa (1957). According to Savage, "in personalistic terms, I would say that statistics is largely devoted to exploiting similarities in judgments of certain classes of people and in seeking devices, notably relevant observation, that tend to minimize their differences" (1954, 156).

266 In Savage (1964), Savage acknowledges the failure, given the aim of the second part of his book, to reconstruct traditional statistical inference on the basis of his subjective version of the minimax principle. However, by this time Savage had adopted the more "standard" Bayesian position, i.e., acceptance of the Likelihood Principle and the requirement that all one needs to do is fix a prior subjective probability distribution. On subjective Bayesianism and its possibility for providing a foundation for statistical inference, see Birnbaum (1962) and Sober (2008). For an alternative take (and an alternative historical trajectory) on likelihoods based on a logic of support defined in terms of chance-setups, see Hacking (1965). Moreover, as Savage later points out, he prefers the probability nomenclature "personalist," "objective" and "necessitarian" to "subjective" and "objective" probabilities, as frequentist statistics contains subjectivist features (e.g., the choice of a level of significance or confidence interval) whereas "subjective" Bayesian statistics attempts to locate the objective features of personal probabilities using, e.g., the likelihood principle (1964, 178-9).

267 Giocoli (2013) discusses Gosset's work prior to Neyman.
Finally, Carnap, of course, discusses Wald's work in the appendix to his 1952 The Continuum of Inductive Methods (CIM). As Carnap says in the appendix, he tried to provide general conditions which characterize "all historically known estimation-functions" for relative frequencies, viz. the conditions C1-10 (see section 4.4). However, Wald's statistical decision functions posed a problem for Carnap. Relying on recent work in the field which applies Wald's minimax principle to the problem of point estimation, Carnap explains that an estimation function satisfying Wald's minimax principle is not in the λ-system, i.e., Carnap's λ-system is not as general as he thought.269 Carnap tries to save face by first arguing that a particular estimation function, eW, defined for an arbitrarily general binomially distributed random variable (see Hodges and Lehmann, 1950), violates the requirement of additivity and hence is not an adequate estimation function (see CIM 84-5). Secondly, Carnap makes the following claim. Assuming that λ equals the square root of the sample size s, it is possible to generalize eW as an estimation function e′ (see equation 25-10 in CIM), and Carnap then shows that this function is both additive and in the λ-system but nevertheless violates the minimax principle.270

268 Galavotti (2005), for example, places Carnap's Bayesian origins in this narrower context. Unfortunately, I have not yet been to Savage's archives and so I hesitate to discuss in any more depth the intellectual relationship between Carnap and Savage. It is an open question, e.g., whether Carnap influenced Savage and, if so, how.

269 The paper is Hodges and Lehmann (1950). As far as I know, Carnap and Savage first correspond about the minimax principle in two letters from 1951, after Carnap writes a draft of the Appendix: Carnap to Savage, July 14, 1951, RC 084-52-28 and Savage to Carnap, July 24, 1951, RC 084-52-27.

270 In place of Wald's objective probability function Carnap suggests we substitute an estimation function based on a confirmation function in the λ-system. Assuming that the loss (or risk) function is given by the mean squared error of this estimate, Carnap borrows a technique from Hodges and Lehmann (1950) which assumes the parameter for which we wish to provide the estimate has a binomial distribution; Carnap defines the estimation function as eW = (sM/√s + 1/2)/(1 + √s), i.e., eW = (sM + √s/2)/(s + √s), for sample size s and sM many samples with the property M (CIM, 85).

Fortunately, upon reading Carnap's new monograph, Savage quickly writes to Carnap to point out two technical errors. The first error is an artifact of Carnap's choice of a counter-example:271 the molecular property Carnap uses to show that eW is not additive is not arbitrary. The second error is far more serious. Savage shows that if λ varies with any function of s then conditions C1-10 are violated; hence, e′ is not in the λ-system either.272 To my knowledge, Carnap never publishes anything on Wald's work again. Nevertheless, there is an important philosophical lesson here about Carnap's mature project to draw from this incident. Unlike in LSL, where Carnap claims that there is reasonable enough agreement amongst working mathematicians regarding how to define the usual deductive consequence relations in, say, simple type theory, no such agreement amongst statisticians and probability theorists exists for how to define "typical" or "standard" inductive methods in a similarly constructed statistical framework.
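Numerically, the estimation function eW discussed above agrees, case by case, with the λ-system rule when λ is set to √s; since λ there varies with the sample size, the resulting function is not a genuine member of the λ-system, which is precisely the difficulty Savage's second objection exposes. A minimal sketch, assuming the formula in the form eW = (sM + √s/2)/(s + √s) and the standard λ-system rule with κ = 2 (the function names are mine):

```python
from math import sqrt

def e_W(s_M, s):
    """Minimax-style point estimate of a relative frequency:
    (s_M + sqrt(s)/2) / (s + sqrt(s))."""
    return (s_M + sqrt(s) / 2) / (s + sqrt(s))

def c_lam(s_M, s, lam, kappa=2):
    """Estimate from Carnap's λ-system: (s_M + lam/kappa) / (s + lam)."""
    return (s_M + lam / kappa) / (s + lam)

# With s = 100 draws, s_M = 80 of which show property M:
s, s_M = 100, 80
assert abs(e_W(s_M, s) - c_lam(s_M, s, sqrt(s))) < 1e-12  # e_W is c_lam with λ = √s
print(round(e_W(s_M, s), 4))   # 0.7727
```

The assertion holds for any s and sM, which is why generalizing eW amounts to letting λ = √s, and why Savage's observation that conditions C1-10 forbid λ from varying with s sinks the proposal.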
Indeed, Carnap's attempt to parameterize all "historically known" estimation functions in CIM fails to capture those estimation functions based on a new principle with which Savage proposed to lay the foundations for all of theoretical statistics and decision theory, i.e., the minimax principle. Episodes like this forced Carnap to rethink the idea that his work in inductive logic could simply provide logical tools without much input from working statisticians; here, to use a phrase from Carus, the "practical realm kicks back."

271 Savage to Carnap, February 20, 1952; RC 084-52-26.

272 Savage to Carnap, February 24, 1952, RC 084-52-25. In a letter dated April 11, 1953, Carnap reports, on the basis of Savage's result, that he and Kemeny have shown the following two claims in the interim: "(3) For a given κ, one value of G (e.g., G(κ, 1, 0)) is sufficient to determine all values of G and hence all values of c" and "(4) Therefore it suffices to use G(κ) instead of G(κ, s, si), and hence λ(κ) instead of λ(κ, s, si)" (underlining in original; RC 084-52-22; p. 2).

5.5 Rationalizing Decision Theory and Justifying Inductive Logic

Savage's SEUT and de Finetti's own probabilism set the context for the discussion later in this section, which untangles the differences between Carnap and his colleagues – especially Shimony, Kemeny, John W. Lenz, Feigl and Hempel – regarding the sense in which inductive logic should be justified and why. Carnap's view is entirely instrumental: inductive logic is an instrument which can be used to systematize inductive thinking but, remaining true to Frege's criticism of psychologism, no interpretation of an inductive logic need prescribe how agents ought to think. In reference to Carnap's quote at the beginning of this chapter: logic has no more to do with the nature of thinking than does mineralogy. Nevertheless, as we will see in this section, Carnap's colleagues, especially Shimony, will come to adopt something like a prescriptive account of the justification of inductive logic. This background is crucial for understanding Carnap's own attempt, in response to an article by Lenz, to explain his views on the problem of justifying induction in the mid-1950s in two unpublished manuscripts.

Dutch Books and Inductive Dissent

One way of clarifying Ramsey's notion of a "consistent" system of degrees of partial belief is as follows. Provided that the betting quotients for a (hypothetical) system of bets are in accordance with an agent's belief function B, then B is "consistent" if, under all possible outcomes, the summation of all the gains for this system is equal to zero. Ramsey stated without proof, and de Finetti (1931; 1937) proved rigorously, the following claim: if one's belief function B is consistent, then B obeys the probability axioms. Replacing the belief function B with a confirmation function c, the relevance of this result for inductive logic is that it provides a rigorous way to interpret the values of a confirmation function in terms of betting quotients: if the betting system is consistent then the confirmation function c satisfies the probability axioms. However, as far as Carnap and his peers like Kemeny and Shimony understood the situation, although de Finetti proved that c obeying the probability axioms is a necessary condition for coherence, it was still an open question whether the sufficiency direction holds. Thus, Carnap took it as an important achievement when Kemeny, in 1953, proved that the sufficiency direction does, in fact, hold: if the confirmation function c obeys the usual probability axioms, then it is coherent.273 Moreover, Shimony, first in his 1953 dissertation for Yale and later in Shimony (1955), coins a stronger version of coherence, called "strict coherence": a betting system which adheres to the confirmation function c is strictly coherent if there does not exist any combination of consequences for which a loss is possible but positive gain impossible. Provided the number of consequences is finite, Kemeny also showed that c is strictly coherent if and only if c is a regular confirmation function.

273 Kemeny finally published the results in Kemeny (1955); also see Shimony (1955) and Lehman (1955). Carnap reports Kemeny and Shimony's results to Savage in a letter dated April 11, 1953; RC 084-52-22. Two caveats. The first regards terminology. What Kemeny calls a "fair" betting system, Shimony calls "coherent," de Finetti "cohérent" and Ramsey "consistent" (or, rather, they so describe the belief, probability or confirmation function which accords with the betting system). All amount, roughly speaking, to the same mathematical concept (also see Carnap, 1971a, 114-116; Shimony, 1955, 8). As Carnap is quick to point out, a bet between two persons is "fair" if the expected outcomes favor neither person; this is a different concept from a "fair" betting system. The second caveat is that, contrary to what Carnap and Kemeny thought in 1955, de Finetti did, in fact, prove both the necessity and sufficiency directions (in de Finetti 1931; 1937). However, Kemeny reports that while translating de Finetti's papers on probability theory with Carnap and Shimony (presumably while together at Princeton sometime in 1952), they were all confident that de Finetti did not prove sufficiency (Kemeny to Carnap, Aug. 20, 1955, RC 083-18-03). Nevertheless, in 1955, after reading Kemeny (1955), Savage writes to Carnap saying that, despite not having de Finetti's papers in front of him, he was nevertheless confident that de Finetti showed both necessity and sufficiency (Savage to Carnap, July 12, 1955, RC 083-18-02). Indeed, even as late as 1958, Richard C. Jeffrey, after working through de Finetti's papers in detail, writes to Carnap (along with Putnam and Hempel) to report that it seems de Finetti did prove both the sufficiency and necessity directions (Jeffrey to Carnap, February 28, 1958; RC 083-04-44).

The mathematical details (and the various extensions) of the so-called "Dutch Book" arguments are well known and little would be gained, I suspect, by rehearsing those details here.274 Instead, I draw the reader's attention to what Carnap and his peers took to be the philosophical import of these mathematical results. For Carnap, Kemeny's result is an application of inductive logic par excellence, as Kemeny's result shows us exactly how to coordinate, for a particular set of sentence pairs H and (not L-false) E in L (or, alternatively, propositions H, E in L), all the values c(H,E) with betting quotients derived from bets on H given E in a coherent betting system. The probabilities given by such an interpreted c can then be used to calculate the expectations for physical quantities and so provide a guide for decision making. However, whereas Carnap was worried about the problem of providing an inductive logic with an interpretation, Kemeny and Shimony understood their results as solving an epistemological problem. For example, in his review of Carnap's LFP, Kemeny summarizes Carnap's project in terms of two problems Carnap himself states on page 222 of LFP:

In summary, Carnap poses himself a two-fold problem of explication: (1) Measuring the various factors influencing c (see §46), and (2) finding a numerical function of these various parameters. This seems to be an excellent problem, but surprisingly enough Carnap does not follow his own outline.
He tries to solve (2) directly, apparently hoping to solve (1) implicitly. This may have been an unfortunate decision. (Kemeny 1951, 150)

For Kemeny, explication seems to resemble an empirical problem, i.e., that of "measuring" presumably intuitive factors and finding confirmation functions which match those factors as "parameters."275 For Carnap, however, these problems belong to the methodology of induction and probability and are not problems of pure inductive logic. Kemeny nevertheless criticizes Carnap for not justifying his first two probability axioms in LFP, what Carnap calls "conventions of adequacy," and Kemeny also gives an argument for why Carnap's first condition – the so-called "general multiplication principle," i.e., that c(h ∧ j, e) = c(h, e) × c(j, e ∧ h) – makes learning from experience based on a confirmation function defined using this convention problematic (Kemeny, 1951, 150-154).

274 For example, see Hájek (2005) and the references therein.

275 Later, when Kemeny adopts Carnap's language of explication he motivates his own work on model-based measure functions by saying that such a function "was used to explicate (give a precise definition in agreement with intuition for) an intuitive concept" (1953, 307).

The lesson we should draw from this, says Kemeny, is that

The justification of our explicatum should be that it is the only one satisfying certain intuitive conditions of adequacy; it should not merely be the negative fact that the examples so far calculated are not clearly counter-intuitive.
(Kemeny, 1951, 155)

Indeed, although Carnap quickly writes to Kemeny pointing out the flaw in Kemeny's argument, Kemeny continues to adopt the epistemological language of justification in Kemeny (1955).276 It is there, for example, that Kemeny claims that while the probability axioms, interpreted in terms of relative frequencies in Reichenbach (1949), are "clearly justified," no such justification exists for inductive probabilities (1955, 263). Kemeny's own Dutch Book result, then, is intended to provide such a justification. On the one hand, Kemeny says that the task of his paper "is to show that the probability axioms are necessary and sufficient conditions to assure that the degrees of confirmation form a set of fair betting quotients" (263; here, "fair" is the same as "coherent"). On the other hand, once he has shown this to be the case, Kemeny says, "we hope to have justified these five conditions [corresponding to the probability axioms CFF] as conditions of adequacy for a definition of inductive probability" (272). Inductive logic isn't just a piece of mathematics for Kemeny: we need to provide justificatory reasons for why we chose the axioms we did. Whatever we may think of Kemeny's 1951 critique of LFP, it had an immediate impact on Shimony's views on probability: Kemeny, says Shimony, "had clearly opened the important question of how [Carnap's four "conventions of adequacy" CFF] could be justified by more than convention" (Shimony 1992, 269).277 Motivated by Kemeny's criticism, Shimony says that

in the summer of 1952 I worked on this problem, partly by reading through the bibliography of Carnap's [LFP]. Fortunately, 'De Finetti' was early in the bibliography, and I noticed that his method of justifying the principles of the calculus of probability (independently discovered by F. P. Ramsey) applied to Carnap's logical concept of probability as well as to their own personalist concept.
(Shimony, 1992, 269)

276 Carnap points out to Kemeny that only the simple multiplication principle is used in the paper and that Kemeny's worry is not generalizable; Carnap to Kemeny, December 3, 1951; RC 083-18-30.

277 See C53-1 in §53 of LFP.

The fruits of Shimony's labor, including his results involving strict coherence, would eventually find their way into Shimony's 1953 dissertation for the philosophy department at Yale University. It should be noted that Shimony kept Carnap informed of his work – by the winter of 1952, for example, Carnap had read the first four chapters of Shimony's dissertation. Nevertheless, Shimony also makes clear that he was no convert to Carnapian explication.278 Indeed, under the influence of C. S. Peirce, Alfred North Whitehead and Kurt Gödel, Shimony makes the striking claim that he "never abandoned the idea that metaphysical inquiry is possible and fruitful" (1992, 261). In particular, Shimony argues that Carnap's work on inductive logic is incomplete because there is "no intuitive and a priori method for choosing a small range of acceptable values of λ out of the half-infinite interval [0,∞)" (1992, 269). Thus, as Shimony reports, the role of the concept of strict coherence was to find "exhaustive and adequate rules for employing" comparative and quantitative concepts of confirmation (1955, 2). By showing the equivalence between regular confirmation functions and strict coherence, Shimony argues that "the axioms of quantitative concept of confirmation are justified, in that they are necessary conditions for the coherence, and hence for the rationality, of beliefs" (1955, 5).
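The mechanism behind these coherence results can be seen in a toy case. The following sketch is my own illustration (with made-up numbers), not Kemeny's or Shimony's construction: an agent whose betting quotients on a hypothesis and its negation sum to more than 1 can be sold a pair of bets that guarantees a loss under every possible outcome.

```python
def net_gain(q_h, q_not_h, h_true, stake=1.0):
    """Agent buys a unit-stake bet on H at price q_h * stake and a unit-stake
    bet on not-H at price q_not_h * stake; return the net gain for the outcome."""
    payout = (stake if h_true else 0.0) + (0.0 if h_true else stake)
    cost = (q_h + q_not_h) * stake
    return payout - cost

# Incoherent quotients: 0.75 + 0.45 = 1.2 > 1, so a sure loss of 0.2 either way.
for outcome in (True, False):
    print(round(net_gain(0.75, 0.45, outcome), 10))   # -0.2 in both cases

# Quotients summing to 1 close off this particular book.
print(round(net_gain(0.75, 0.25, True), 10))          # 0.0
```

Exactly one of the two bets pays off whichever way H turns out, so the agent's payout is fixed at the stake while the price paid exceeds it; this is the additivity violation the Dutch Book exploits.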
Of course, Shimony's notion of justification is not Carnap's – whereas Carnap is only trying to find reasonable confirmation values, Shimony, drawing on work by Kurt Gödel, argues for a metaphysical "Principle of Coherence" with which to justify inductive logic.279 Although Carnap, Kemeny and Shimony worked with the same mathematical results, they disagreed, at least in their printed writings, about the epistemological and metaphysical interpretations of those results. Even though we have no records of what Kemeny, Carnap and Shimony discussed in person, Carnap did, fortunately, attempt to convey his personal views on the application of inductive logic in private: in 1956 he found the time and motivation to write on the topic of justifying induction. The occasion was a paper written in 1956 by one of Carnap's former students from the Chicago days, John W. Lenz. Reminiscent of Shimony's worry, the "central problem" of inductive logic, Lenz suggests, is to find a definition of "degree of confirmation" which uniquely determines a probability function relative to a pair of hypothesis and evidence statements (230).

278 Shimony first describes his work to Carnap in October; Shimony to Carnap, October 9, 1952; RC 084-56-08. By the start of December, Carnap had read and sent comments back to Shimony; Shimony to Carnap, December 2, 1952, RC 084-56-06. Incidentally, in the letter dated December 2nd, Shimony tells Carnap about the independent discovery of a version of the lambda system by W. E. Johnson, posthumously published as Johnson (1932); see Zabell (2005).

279 The principle states: "the concepts of confirmation are such that sets of beliefs determined by correct confirmation judgments are coherent" and this principle is a true proposition which enjoys an objective, "Platonic existence," i.e., it is analytic in the intuitive, "non-semantic" sense (1955, 10-11; also see 27-28). Similar views are expressed in Shimony's dissertation, especially chapter four. The dissertation is located in the Abner Shimony papers at the ASP at the University of Pittsburgh, box 5, folder 13.

Reichenbach's answer to the problem, according to Lenz, is purely negative: "it is impossible," says Lenz regarding Reichenbach's view, "to find a non-arbitrary definition of "degree of confirmation" without either presupposing the frequency view or else committing oneself to the a priori" (230). Conversely, Carnap's λ-system provides us with an infinite number of well-defined confirmation functions but provides no recourse for choosing one function in particular. Granting to Carnap that such a choice is a practical matter, Lenz points us to those passages in CIM where Carnap seems to provide at least three separate "criteria of adequacy" for choosing a particular value of λ, viz. criteria based on "performance, economy and esthetic satisfaction" (231). Lenz then focuses on what he takes to be Carnap's preferred criterion of adequacy: performance. In particular, Lenz considers the following example. Supposing that either black or white balls are being drawn from an urn (with replacement), Lenz considers the case where a chosen confirmation function, say C, gives a probability value of .75 to the hypothesis H that the next ball drawn is black given that the evidence E states that 80 black and 20 white balls have already been drawn. Moreover, suppose a person faces a decision problem D about how to act conditional on whether the next observed ball is black or white. In such a circumstance, Lenz says that Carnap would consider the analytic statement 'C(H,E) = .75' to be a useful guide for the decision problem D. As Lenz puts the point:

Other things being equal, the "reasonable" man will act in accordance with hypotheses which, on the evidence, are highly confirmed rather than in accordance with those which are only slightly confirmed.
(Lenz, 1956, 232)

Thus, based on the practical decision to use this function C, it will be "reasonable" by Carnap's own standards to make a decision for D guided by the idea that the next ball is more likely black than white. Nevertheless, because this probability statement is analytic, it is not what Lenz calls "predictive" as it is "without factual content":

In no sense, though, can the probability value itself be regarded as predicting anything, say, that the next ball will be black, or even that, were we to keep drawing, the long run relative frequency of black balls is .75. If the more highly confirmed hypothesis turned out to be false, in no sense would the probability value be regarded as disconfirmed. (Lenz, 1956, 232)

As an illustration of why Lenz finds all this problematic, Lenz next asks us to consider the case not of a single urn but of many urns such that, for each of these urns, a person has observed that 80 black balls and 20 white balls have so far been drawn. Further suppose that, for each urn, there is a corresponding decision problem which depends on whether the 101st ball is black or white. In each case, suggests Lenz, the person would find it reasonable, assuming the estimate that the next ball drawn is black is the same as for the first urn, i.e., 'C(Hi, Ei) = .75' for each urn i, to act as if the next ball is black. But then Lenz adds the following twist: suppose black balls were drawn from only half of the urns. "What shall we conclude?" says Lenz,

We surely can not literally say that the probability values have been disconfirmed. After all they were not predictive, and they were, as a matter of fact, logically true on the basis of the c-function employed. We might say that had we acted on the basis of the highly confirmed hypotheses we had acted reasonably. We might say that we had acted reasonably and complain that the world itself was unreasonable.
(1956, 232-3)

The more reasonable conclusion, suggests Lenz, is that the chosen confirmation function C was not, after all, an adequate function for guiding our decisions. In short, Lenz argues that whether C is adequate and reasonable depends on the past empirical success of C (233). However, if this is the proper notion of adequacy, provided we don't know the actual relative frequencies beforehand, how should Carnap choose an adequate confirmation function? "The difficulty in all this," says Lenz,

should now be apparent. The decision to pick for future use the c-function which had been most adequate in the past is based upon the hypothesis that that c-function will continue to be successful in the future. This hypothesis is a synthetic one which, however, is not known to be true. Thus, this hypothesis itself demands justification. (234)

The decision to choose a confirmation function cannot be a practical decision, argues Lenz, for if that function is adequate then it must satisfy a synthetic principle of induction (234). Nor can Carnap, according to Lenz, argue that it is only probable that C will be an adequate confirmation function or that other confirmation functions can be used to measure the success of C: in the first case we are led to an infinite regress and in the second case our justification would be circular (234-5). Thus, the only plausible solution seems to be some metaphysical commitment to the synthetic principle of induction. Although Carnap would reject any such commitment, Lenz sees no other way to define a criterion of adequacy if probabilities are to be a guide for decision making.

Lenz's argument poses two particularly pertinent problems for Carnap's project in inductive logic. First, it reinforces the idea shared by other logical empiricists, namely Feigl and Reichenbach, that, unlike for the frequentist meaning of probability, some metaphysical principle seems to implicitly ground the application of a logical concept of probability if it is to be a guide for decision making. Indeed, Feigl writes to Carnap in the fall of 1956 remarking that Lenz

raises (without reference to Burks or to my "Sci. Meth without Metaph. Presupp's") exactly (& more explicitly) the question I've tried to put to you for many years. Maybe he & I are stupid or confused – but I think the questions are serious & important. Of course, he doesn't answer them – & my answer may not be good enough. But do let us have your answer! (Feigl to Carnap, August 3, 1956; RC 089-57-02; underlining in original)

Secondly, Lenz's argument that Carnap is best read as implicitly assuming a synthetic principle of induction is anathema to Carnap's own distinction between the practical and the theoretical. To be clear, Lenz is not arguing that Carnap is committed to any such synthetic principle via his work on inductive logic qua logic; rather, the synthetic principle is sneaked in, so to speak, when Carnap suggests that a particular application of an interpreted inductive logic is adequate, where adequacy is equated with empirical success. Yet the whole point for Carnap is that there is a difference between pure and applied logic and that empirical success in the "real world" is but one measure by which the adequacy of an inductive logic can be evaluated: there is for Carnap no "middle" synthetic or transcendental ground between logic and its interpretation. But if not as a synthetic principle, what could Carnap possibly mean by an adequate and/or reasonable confirmation function?

Carnap on Justifying Inductive Logic

In the fall of 1956, Carnap wrote a short, ten-page manuscript called "On the Choice of an Inductive Method. A Reply to John W. Lenz" (RC 089-57-01), copies of which he sent to his peers, including Hempel, Feigl and Lenz.
In the manuscript, Carnap acknowledges that his previously published comments on the problem of choosing a confirmation function as a means to characterize an inductive method have been sparse and says that "I agree with Lenz that difficult questions are here involved which still wait for a solution" (ibid., 2). Ultimately, however, Carnap is dismissive of Lenz's problem. According to Carnap, the basis of Lenz's argument is that if an observer X decides that the function c′ is adequate, i.e., "more successful than other c-functions which [X] takes into consideration," then whether or not it is reasonable for X to continue to use c′ depends on whether X is committed to the principle P "by which [X] can infer from the observed adequacy of c′ in the past that c′ will likewise be adequate in the future" (ibid., 3). Carnap agrees that X could indeed base their decision to keep using c′ on some principle P; nevertheless, Carnap is quick to point out that it is not necessary that X do so. Instead, Carnap says that "[t]he decision is reasonable because c′ was found adequate in the past; and that is sufficient" – whether or not c′ is reasonable relative to past successes is independent of how successful c′ will be in the future (ibid., 3-4). Carnap attempts to clarify this claim by suggesting that the decision to adopt a particular confirmation function can be split into two separate steps. First, rather than choosing a particular confirmation function explicitly, X merely chooses a single value of a degree of confirmation for a specific case. For example, using Lenz's example above, on the basis of the evidence E that 80 black and 20 white balls have been drawn from the urn, X may decide to assign a degree of confirmation value .75 to the hypothesis H that the next ball drawn from the urn will be black.
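If the λ-system rule is assumed in its standard form, cλ(H,E) = (sM + λ/κ)/(s + λ) with κ = 2 properties (black, white), this single-case assignment already determines a unique λ. A minimal check (the function name is my own):

```python
def solve_lambda(c, s_M, s, kappa=2):
    """Solve c = (s_M + lam/kappa) / (s + lam) for lam.
    Rearranging: lam * (c - 1/kappa) = s_M - c * s."""
    return (s_M - c * s) / (c - 1 / kappa)

# Lenz's urn: 80 black balls out of 100 draws, single-case value c(H, E) = .75.
print(solve_lambda(0.75, 80, 100))   # 20.0
```

So fixing the one value .75 for this evidence is, within the λ-system, already tantamount to fixing a general rule.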
Importantly, Carnap then argues that insofar as X makes this decision, they are not committed to "the assertion that the frequency will actually be .75" or that X's choice is "reasonable only if X believes this prediction" (ibid., 4-5). The value .75, Carnap argues, is only X's estimate, and X needn't modify their beliefs to correspond with estimation values.280 Nor, in assigning an estimate of .75 to H, does X, claims Carnap, commit themselves to the claim that .75 must be "near" the actual frequency of black balls in the urn. Instead, the assignment of .75 as an estimate of H only expresses a practical decision of X, more specifically, a conditional decision, viz.,

his willingness to bet in the present situation, on the basis of the observational evidence E available to him, on the hypothesis H with the betting quotient 3/4 (i.e., with odds of three to one). (ibid., 5-6)

The second step, then, is to make the decision to choose a particular confirmation function. Suppose X adopts the following two constraints: (i) the chosen function, cλ, should be in the λ-system and (ii) cλ(H,E) = .75 (ibid., 6). Then, using equation 4.6 on page 111, it is easy to see with a bit of algebra that λ = 20; thus, X has made the decision to choose the function cλ=20. Consequently, Carnap argues that the "decisive point" is the following:

[T]he second choice is of essentially the same nature as the first. The difference is only this that the second choice is general, while the first applies to a single case. In the first step X shows one value of degree of confirmation or of the estimate about frequency for one given case. Now X has chosen a general rule, which determines not one value but values for all possible situations.

280 If families in Vancouver are estimated to have 2.33 children, typically one doesn't infer that the family next door has 2 and 1/3 units worth of children.
The first choice of X indicated his willingness for a certain bet in a certain given case. The present choice indicates the general willingness of X to make bets on any hypothesis h on the basis of any evidence e then available of such a kind that the betting quotient is determined by c20(h, e) [. . . ]. (ibid., 7; underlining in original)

Neither the first nor the second decision, reiterates Carnap, implies any "factual prediction" concerning the future. Instead, both decisions are "purely volitional, a general willingness to make any bets in the future provided they fulfill certain conditions with respect to the betting quotient" (ibid., 8). The practical decision to adopt a confirmation function which gives us the correct estimation values may be influenced by past theoretical observations. The chosen function provides a rule with which future estimates can be given – it is in this sense, then, that the choice of a confirmation function cλ is not arbitrary. The decision to adopt an estimation function as a guide for making decisions is itself a purely volitional, practical decision. Carnap's manuscript was received with mixed reviews. On October 29, Hempel writes to Carnap that he had just received the carbons of Carnap's "Reply to Lenz" manuscript and that while he notes the document is "very helpful indeed" he nevertheless suggests that the manuscript does not broach the issue Lenz "meant to raise":

Namely: while the choice of a c-function, to be sure, need not rest on any empirical assumption, the problem remains whether the justification of such a choice as adequate would require reference to empirical fact. In the case of your analogy in CIM, p. 55, for example, one might say that the choice of a saw made of a special kind of steel required for its justification reference to (the purpose to be served and) some general empirical statement as to its hardness or other relevant characteristics of the steel in question.
I wonder whether you would not add a word about that question, because otherwise the reader might feel that your reply sidesteps a crucial issue at stake. (Hempel to Carnap, October 29, 1956; CH 11-02-12)

Hempel, in essence, raises the central question we have been discussing throughout this chapter: why isn't the adequacy of an inductive logic an empirical question for Carnap? Indeed, just as the success of a saw qua instrument can be measured by how sharp and sturdy it is, why shouldn't our inductive methods be measured in pragmatic terms of empirical success? The day after Hempel writes to Carnap, he sends a letter to Lenz with the following observation, which I quote in full:

[. . . ] I would like to say one thing: Your problem seems to me to arise, not in the context of adopting a certain c-function, but rather when you ask how to justify the choice of the particular function that has been adopted. And now the answer will depend on what standards of justification are used. And it seems to me that one might well make good-performance-in-the-past one essential condition for an adequate c-function, without however committing oneself explicitly to any specific expectation as to continued good performance in the future. In other words, one might adopt the view that a given c-function is adequate only if, in addition to being simple etc., it has performed well for the past.
To be sure, when someone adopts this criterion, you might look for a rationale behind it and might impute to its proponent a belief in the uniformity of nature or some similar synthetic principle; but even if such a belief were present, and even if it motivated to some extent the proponent's choice of his standards of adequacy for a c-function, it still would not make such belief a systematic presupposition of induction (any more than Kepler's astrological beliefs were systematic presuppositions of his theory of planetary motion, even though they motivated him in his search for such a theory). (Hempel to Lenz; October 30, 1956; Hempel Archives [no number])

Hempel's move here, especially with regard to the Kepler remark, resembles Reichenbach's distinction between the contexts of discovery and justification. Even if a scientist does commit themselves to a metaphysical principle about the uniformity of nature, from the standpoint of justification we need only adopt a criterion to examine their inductive claims based on past successes. What is important, however, is that while Hempel draws this distinction, Carnap does not. The choice of a confirmation function, for Carnap, is a practical matter, and if Carnap were to adopt Hempel's criterion, that would shift the debate to a theoretical question concerning the degree to which particular inductive methods have the empirical property of "good-performance-in-the-past." Indeed, when he finally writes back to Feigl in January 1957, Carnap replies to a complaint made by Feigl that Carnap's Lenz manuscript, just like §41F in LFP, is not only vague but contains a vicious circle by saying that there is no "vicious circle in the strict technical sense" (p. 2).
Instead, Carnap suggests, referring to the seeming circularity of his own explanation, that "most philosophical clarifications have this nature" of non-vicious circularity and then remarks:

You are right in saying that in the written Reply I do not reveal frankly what I have up my sleeve. That could much better be done in a talk in an informal way than in such paper, I think. For me, certain things are quite clear and do no longer contain any difficulty for me; while other things are still full of unsolved problems. To be specific, for me it is quite clear and beyond doubt that the statements of inductive logic must be analytic and that their truth does not presuppose a factual assumption and that nevertheless they are useful for everyday life and science. On the other hand, the question how far we can go in a justification seems to me still quite open. (Carnap to Feigl; January 19, 1957. CH 11-02-07, p. 2; my emphasis, underlining in original)

Spurred on by Hempel's suggestion that Carnap should attempt to clarify his views, in the spring of 1957 Carnap finds the time to substantially extend his earlier "Reply to Lenz" into a manuscript he titles "How can induction be justified?" (RC 082-07-01). In May 1957, Carnap sends a letter, titled "Dear friends," to at least Hempel and Feigl asking whether this new manuscript is worth publishing. Carnap writes,281

I find it hard to compare my position with those of others, because the explanations of others are not clear in certain points which seem to me essential. For example, not even Salmon's paper, which is perhaps the best and most explicit available now on this problem, makes clear whether he is thinking of the justification of an infinite class of methods or of that of one method. (I presume that he actually means the first although he often refers to "the inductive method".)
Further, I do not see whether Salmon and Feigl, etc., by their "vindication" mean a reasoning which includes inductive arguments. I have not the aim of justifying inductive reasoning on a non-inductive basis, but the much more modest aim of helping somebody who has already accepted inductive reasoning to make this reasoning more consistent and systematic. I believe the first aim is unobtainable, but the second is obtainable and useful. Now does this make me a warrantist or an anti-warrantist? I find [on] both sides of the controversy certain points of agreement and other points of disagreement. I disagree with the view, apparently held by both sides, that the purpose of inductive reasoning is to supply factual conclusions. I think that the purpose is rather to supply inductive values, e.g. inductive probabilities or estimates. (Carnap to Hempel; May 27, 1957; CH 11-02-05; my emphasis)

What is clear is that Carnap and Hempel have two different justificatory projects in mind: whereas Hempel is concerned with justifying a particular inductive rule based on its past success, Carnap is worried about supplying confirmation values which will, in his own words, help those "who have already accepted inductive reasoning to make this reasoning more consistent and systematic." Moreover, as Carnap clarifies in both his letters to Feigl and Hempel, empiricist worries aside, analytic statements of probability can serve as a guide in life. In the extended May 1957 manuscript (i.e., RC 082-07-01), Carnap writes a new introduction to the original "A Reply to Lenz" document in which he discusses the division of the problem of induction into two separate parts. First, there is the "justification of the axioms of inductive logic" and, second, "the justification of the choice of a single inductive method among those satisfying the axioms" (p. 1).
Carnap then argues there are four kinds of arguments "which are legitimate and necessary for the justifications" of either part: (i) "deductive reasoning," (ii) "inductive reasoning," (iii) "past experiences" and (iv) the "assumption of a universal synthetic principle" (ibid., 1). The fourth kind of argument, Carnap argues, is "inadmissible and unnecessary"; the first two are both necessary and sufficient for both parts of the justification of induction; and appeals to past experience in order to justify induction – despite being "admissible" – are "in principle unnecessary" (ibid., 1). What is of interest to us, however, is Carnap's claim that although using inductive reasoning to justify either deductive or inductive reasoning itself may be circular, it is not a vicious circularity "because the aim is not demonstration on a tabula rasa, but only clarification and systematization of already existing inductive reasoning" (underlining in original; ibid., 1). Explicitly, Carnap understands the problem of induction not in its traditional sense, i.e., as a problem which demands a metaphysical explanation, but rather in a "more modest sense"; namely, "[i]n terms of the distinction made by Feigl, this aim is not the validation but the vindication of induction" (ibid., 2; Carnap cites Feigl 1950). By vindication, Carnap means that the justification of "a method or policy is given by showing that its use is suitable for a given end" (ibid., 2). Thus, inductive methods can be compared and contrasted by "showing that one gives better promise for reaching the goal than the other" (ibid., 3).

281 The Salmon paper Carnap references is most likely Salmon (1957), or a manuscript of that paper.
The problem of induction then, for Carnap, is split into two separate vindications relative to the same end: first the vindication of a class of adequate inductive methods (ibid., 4-8) and, second, the vindication of a particular method from that class (ibid., 9-13). More specifically, Carnap understands the first problem as one concerned with giving reasons to include or exclude axioms for defining a system of inductive logic. Of course, this process of double-vindication has to start someplace. As Carnap clarifies, In other fields of knowledge, e.g., elementary deductive logic, arithmetic, geometry, and physics, first a systematic theory was developed, and only much later could fruitful and effective attempts at a logical and methodological analysis and justification be made. It is hardly useful to debate in general terms the question what kind of reasons should be demanded or permitted in the justification of induction or, more specifically, of inductive axioms, without actually presenting such axioms. (ibid., 5) Carnap goes on to say that, in a forthcoming article, he plans to provide axioms for inductive logic and that he "shall try and justify this system" by providing, for each axiom, "the best reasons of which I am aware at present" (ibid., 5). Carnap never wrote this paper but its connection with this problem of applicability of probability to decision making is clear: it may be the case, says Carnap, that "compelling reasons for the acceptance of the axiom in question can be given by showing that all inductive methods excluded by the axiom are unsuited for the purpose of inductive reasoning, viz. to guide our practical decisions as to increase our chances of gain" (ibid., 5). 
However, again drawing an analogy to deductive logic, Carnap says that these reasons cannot be made independent of inductive reasoning itself but rather

all we can do is to help X clarify, systematize and develop his ways of inductive reasoning, by reducing every complex inductive problem to a great number of simple ones. (ibid., 6-7)

This reduction of a complex inductive problem to "a great number of simple ones" is analogous to the kind of engineering task of transforming an "ill-structured" problem into a "well-structured" problem: once reformulated as a series of simple, tractable problems answerable, perhaps, by appealing to any number of theorems provable in a pure inductive logic, there now exists a well-defined method, or even an algorithm, for the original problem of finding an adequate inductive logic. Carnap does not characterize the problem of finding an adequate inductive logic in terms of searching for and then justifying or grounding some set of independent inductive axioms in absolute terms. Instead he appeals to a kind of satisficing: we just need to locate a number of conditions of adequacy that are "good enough": namely, conditions which are based on the prior experience of inductive scientists and logicians. Just as "we cannot demonstrate the validity of an inductive axiom to an inductively blind man" for the case of inductive logic, Carnap says that "[w]e appeal to X's intuitive judgment only in order to obtain his assent to the validity of certain inductive relations, e.g., that under certain conditions a certain value of c might be higher than another" (ibid., 7).
For example, Carnap argues that Reichenbach's justification of induction is really just the justification of the axiom of convergence: "[w]e must appeal," says Carnap, "to X's inductive insight to agree that a self-corrective method is preferable to a non-self-corrective one because its use gives in the long run a better chance of fulfilling our aim of diminishing the errors of our estimations" (ibid., 7-8). The second part of the problem of induction, Carnap argues, is not a problem of adopting or rejecting inductive axioms. Even in the best of scenarios, we will still need to choose one inductive method from an infinite number of methods (ibid., 9). Instead, a parameterization of an inductive system is what is required; however, the values for these parameters, if there are several, may be somewhat subjective (although Carnap says it would be a "happy development" to find "rationally justified axioms of [a] system of inductive logic"):

I have the impression that rational reasons alone cannot determine the choice completely, even if we were to admit, in addition to deductive and inductive reasoning, reference to past experience and even general principles; I think that certain subjective factors must influence the choice. (ibid., 9-10)

Subjective factors, suggests Carnap, understood in terms of inductive expertise or "insight," are an indelible part of inductive reasoning – much more, it seems, than in deductive reasoning. Thus, the choice of a system of inductive logic may ultimately always rest on a practical choice; it may always be a matter of subjective volition. Specifically, by "subjective factor" Carnap means "features of temperament or character that vary from person to person but may remain relatively stable in the course of time with the same person" (ibid., 12).
Carnap then suggests that one could quantify this factor as a function of a parameter like λ, for example as a "degree of caution" or, in a way analogous to utility functions, as "inductive inertia" (ibid., 12). However, Carnap instead prefers to find an inductive system where, in light of new experiences, it is not necessary to change from one inductive method to another but rather "[i]t should in principle be possible to construct a generalizable method and thereby avoid changing methods in view of [various CFF] experiences."282 Carnap never published the 1957 manuscript. Hempel, writing in June, tells Carnap that the manuscript is "too compressed, and in parts too sketchy" for those already acquainted with the debates about the justification of induction, and then Hempel says that he is "inclined to think that the promissory character of your discussion of the justification of axioms of induction would detract from the value of the discussion."283 Thus, Hempel recommends that Carnap hold off on publishing the manuscript "until you have had time to write the longer piece you are contemplating." Carnap took Hempel's advice. It would be a decade before Carnap resurrected the idea of "subjective factors" influencing inductive thinking as "inductive intuitions" in his 1968 paper "Inductive Logic and Inductive Intuition."

5.6 The Aim of Inductive Logic and Robot Epistemology

Finally, we are properly situated to understand the philosophical significance of Carnap's 1962 paper "The Aim of Inductive Logic" for how Carnap understands the process of "applying" inductive logic to decision theory. There, Carnap tells us that by inductive logic he means

a theory of logical probability providing rules for inductive thinking. I shall try to explain the nature and purpose of inductive logic by showing how it can be used in determining rational decision.
(1962, 303)

282 Ibid., 11. The handwriting is difficult to decipher but "various" could be "numerous".
283 Hempel to Carnap, June 1, 1957; CH 11-02-03; RC 082-07-02 is a duplicate copy of this letter.

As we have seen above, when he uses phrases like "providing a rule for inductive thinking," Carnap has the aim of offering advice about which confirmation values may provide an agent with reasonable estimation values, values the agent may decide would be helpful in guiding their decisions. Only by making the practical decision to adopt a methodological rule, like R5 (see page 153 in section 5.4), can inductive logic – even in the guise of a qualified psychologism like rational decision theory – be used as an aid for decision making. Even when applied, inductive logic remains an instrument: it just now lives its life, so to speak, as an instrument fine-tuned to the needs of human or robot reasoners. When I say that an instrument is "fine-tuned" I have in mind that section of the 1962 paper where Carnap proposes we study the consequences of applied inductive logics for idealized agents and then "test" the "reasonableness" of these logics in hypothetical worlds. All of this takes place, for Carnap in the 1962 paper, in rational decision theory – shoulder to shoulder with empirical decision theory on the one side and pure inductive logic on the other. Savage's SEUT provides us with the immediate historical context for how Carnap understands the field of empirical decision theory.
According to Carnap, empirical decision theory assumes a personal interpretation of probability; namely, the degree of credence Cr in the event H for person X at time T, i.e., CrX,T(H), with the corresponding conditional credence CrX,T(H|E), where CrX,T(E) > 0.284 Then, for a finite set of possible actions (Ai), the finite possible states of the world accessible to X (Sj), all the possible outcomes for each combination of actions and states of the world (Oi,j), and finally X's utility function defined over outcomes (U), X can calculate their subjective expected utility for action Am as

VX,T(Am) = ∑n UX(Om,n) × CrX,T(Sn).

X should then do the action which maximizes their subjective expected utility over all Ai. Rational decision theory, on the other hand, replaces this psychologically descriptive credence function with a "quasi-psychological" concept of a rational credence function (i.e., Crn for some time n) which "is to be understood as the credence function of a completely rational person X; that is, of course, not any real person, but an imaginary, idealized person" (1962, 307). It is within the context of rational decision theory that Carnap then introduces the fiction of designing a rational credence function for not only an idealized person, but a robot with perfect memory and mathematical abilities.285 It is with such a fiction in mind that Carnap articulates four "requirements of rationality" with which to define any rational credence function. These requirements are reproduced below (see Carnap 1962, pp. 307–313):286

R1. In order to be rational, Cr must be coherent.
R2. In order to be rational, a credence function must be strictly coherent.
R3. (a) The transformation of Crn into Crn+1 depends only on the proposition E. (b) More specifically, Crn+1 is determined by Crn and E as follows: for any H, Crn+1(H) = Crn(E ∩ H) / Crn(E).
R4. Requirement of symmetry. Let ai and aj be two distinct individuals. Let H and H′ be two propositions such that H′ results from H by taking aj for ai and vice versa. Then Cr0 must be such that Cr0(H) = Cr0(H′).

All four of these norms are characteristically Bayesian. In particular, R1 and R2 are synchronic norms: at any given time, the rational credence function must not only be coherent, but also regular, i.e., strictly coherent. R3 is a diachronic norm: relative to the agent's background information at time n (Kn) and the choice of an initial (conditional) credence function (Cr′0), the rational credence Crn at time n is defined as Cr′0(H|Kn) using repeated applications of the transformation rule, i.e., normal conditionalization, from R3(b). Defining the initial conditional credence function as a "credibility" function (CredX), Carnap next defines the rational expected utility function

VX,T(Am) = ∑n UX(Om,n) × CredX(Sn|KX,T).

Finally, the last requirement is none other than the requirement that credence functions assign the same probabilities to exchangeable sequences.287 It will be helpful to have a bit more detail about this move from empirical to rational decision theory. Specifically, Carnap suggests that one of the more important features of his work is the transition from credence to credibility functions:

284 Carnap never adopts only a personalist or subjective meaning of probability: just like in Carnap (1945b), Carnap remains a pluralist about the meaning of probabilities – but in the 1960s he is a pluralist about "objective" and "subjectivist or personalist" meanings of probability; see Carnap, 1980, 118–119.
285 As Carnap further clarifies: "[. . . ] since our goal is not the psychology of actual human behavior in the field of inductive reasoning, but rather inductive logic as a system of rules, we do not aim at realism. We may make the further idealization that X is not only perfectly rational but has also an infallible memory.
Our assumptions deviate from reality very much if the observer and agent is a natural human being, but not so much if we think of X as a robot with organs of perception, memory, data processing, decision making, and acting. Thinking about the design of a robot will help us in finding rules of rationality. Once found, these rules can be applied not only in the construction of a robot but also in advising human beings in their effort to make their decisions as rational as their limited abilities permit" (1962a, 309). Carnap (1971b) is a slightly updated version of Carnap (1962a) and the passages between the two are nearly identical save for this paragraph. In the 1971 version, the second-to-last sentence reads: "Thinking about the design of a robot might help us in finding rules of rationality" (1971b, 17; my boldface).
286 By "individual" in R4 Carnap means an element belonging to a domain of discourse or to a single element from a statistical population (313).
287 See the last two papers in Zabell (2005) or Skyrms (2012).

While CrX,T characterizes the momentary state of X at time T with respect to his beliefs, his function CredX is a trait of his underlying permanent intellectual character, namely his permanent disposition for forming beliefs on the basis of his observations. (1962, 311)

Indeed, while Carnap suggests that the credence functions proposed by Ramsey and de Finetti, which are only intended to represent adult credences, may be of limited merit, he argues that the concepts of rational decision theory have "great methodological advantages" as

Only for these concepts, not for credence, can we find a sufficient number of requirements of rationality as a basis for the construction of the system of inductive logic.
(1962, 312)

The implication seems to be that empirical decision theory as a scientific field is far too early in its development for one to expect the kind of mathematical idealization found, for example, in the axiomatic transformations from physical to mathematical geometry and back again. It is exactly here, moreover, that Carnap discusses his views about the development of concept formation:

If we look at the development of theories and concepts in various branches of science, we find frequently that it was possible to arrive at powerful laws of great generality only when the development of concepts, beginning with directly observable properties, had progressed step by step to more abstract concepts, connected only indirectly with observables. Thus physics proceeds from concepts describing visible motion of bodies to the concept of momentary electric force, and then to the still more abstract concept of a permanent electric field. In the sphere of human action we have first concepts describing overt behavior, say of a boy who is offered the choice of an apple or ice cream cone and takes the latter; then we introduce the concept of an underlying momentary inclination, in this case the momentary preference of ice cream over apple; and finally we form the abstract concept of an underlying permanent disposition, in our example the general utility function of the boy. What I propose to do is simply to take the same step from momentary inclination to the permanent disposition for forming momentary inclinations also with the second concept occurring in the decision principle, namely, personal probability or degree of belief. This is the step from credence to credibility. (my emphasis; 312)

The similarity between Ramsey's own analogizing of a degree of belief with dispositional concepts from physics (see section 4.2) and Carnap's own analogy between decision theoretic and physical concepts is striking.
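The formal core of the preceding discussion – a credence function over states, updating by conditionalization (R3), and the expected-utility rule – can be sketched in code. The states, numbers, and utilities below are illustrative placeholders of my own, not Carnap's:

```python
# A minimal sketch of normal conditionalization (R3(b)) and the
# expected-utility rule V(A_m) = sum_n U(O_{m,n}) * Cr(S_n).
# The worlds, credences, and utilities are illustrative, not Carnap's.

def conditionalize(cr, evidence):
    """R3(b): Cr_{n+1}(H) = Cr_n(E and H) / Cr_n(E), with worlds as atoms."""
    cr_e = sum(p for w, p in cr.items() if w in evidence)
    assert cr_e > 0, "R3(b) presupposes Cr_n(E) > 0"
    return {w: (p / cr_e if w in evidence else 0.0) for w, p in cr.items()}

def expected_utility(action, cr, utility):
    """Sum U(outcome of `action` in state w) weighted by the credence in w."""
    return sum(utility[(action, w)] * p for w, p in cr.items())

# A momentary credence function Cr over two states of the world:
cr0 = {"rain": 0.3, "dry": 0.7}
utility = {("umbrella", "rain"): 5, ("umbrella", "dry"): 2,
           ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 4}

actions = ["umbrella", "no_umbrella"]
best = max(actions, key=lambda a: expected_utility(a, cr0, utility))
print(best)  # → umbrella (expected utility 2.9 vs. -0.2)

# Credibility as disposition: the momentary credence at time n is just the
# initial function conditionalized on the total evidence K_n accumulated so far.
cr1 = conditionalize(cr0, {"rain"})  # after learning it is raining
```

On this sketch, the credence/credibility contrast shows up as the difference between a momentary assignment (`cr0`, `cr1`) and the permanent rule (`conditionalize` applied to the initial function) that generates every such assignment from the evidence.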
Indeed, by objectifying the concepts from empirical decision theory and by placing them within the domain of rational decision theory, Carnap can now study the inductive "dispositions" of idealized or robotic agents.288

Finally, we turn to the transition from rational decision theory to inductive logic. This step is straightforward: an inductive logic is to be constructed based on the constraints provided by the above requirements of rationality. For example, one would adopt an inductive logic with measure functions M corresponding to coherent and strictly coherent initial credence functions and likewise adopt those confirmation functions C corresponding to conditional (strictly) coherent credence and credibility functions. These admissible M- and C-functions will then be further restricted, for example, with the addition of axioms like the axiom of symmetry, which is the logical analogue of the requirement of symmetry, R4, above (1962, 314–316). It is in this way that Carnap shows how the practical decisions involved with constructing an inductive logic can be informed by theoretical reasons, e.g., reasons from empirical decision theory. In moving from empirical to rational decision theory, and then from rational decision theory to inductive logic, Carnap is able to locate empirical and conceptual constraints on the construction of an inductive logic such that it is suitable to the needs of those currently working on rational and then empirical decision theory.

288 Indeed, Carnap says later on: "[. . . ] if we judge the rationality of a person's beliefs, we should not simply look at his present beliefs. Beliefs without knowledge of the evidence out of which they arose tell us little. We must rather study the way in which the person forms his beliefs on the basis of evidence. In other words, we should study his credibility function, not simply his present credence function" (312).
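The logical analogue of R4 can also be illustrated. Assuming, for illustration only, a count-based confirmation function of the λ-continuum form (a reconstruction, not Carnap's axiom system), symmetry amounts to exchangeability: permuting the order of the observed individuals cannot change the value, since only the counts enter the formula.

```python
# Sketch of the symmetry requirement (R4) as exchangeability, assuming a
# count-based confirmation function of the lambda-continuum form
# (an illustrative reconstruction, not a quotation of Carnap's axioms).

from fractions import Fraction

def c_lambda(s_i, s, k=2, lam=20):
    """Confirmation that the next individual is favorable, given s_i of s observed."""
    return Fraction(s_i * k + lam, (s + lam) * k)  # = (s_i + lam/k) / (s + lam)

def predict_black(sequence, k=2, lam=20):
    """The value depends only on the counts in `sequence`, not on its order."""
    return c_lambda(sequence.count("black"), len(sequence), k, lam)

e1 = ["black", "black", "white", "black"]
e2 = ["white", "black", "black", "black"]  # a permutation of e1
assert predict_black(e1) == predict_black(e2)  # R4: permuted individuals, same value
print(predict_black(e1))  # → 13/24
```

Because the function consults only `sequence.count(...)` and `len(sequence)`, any permutation of the evidence yields the same confirmation value, which is the formal content of treating individuals symmetrically.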
Crucially, Carnap relegates the role that these normative and empirical reasons have to the methodology of inductive logic, not to the inductive logic qua logic. Indeed, Carnap himself says in the 1962 paper, in what I would suggest is the central passage of the paper:

While the axioms of inductive logic themselves are formulated in purely logical terms and do not refer to any contingent matters of fact, the reasons for our choice of the axioms are not purely logical. [. . . ] Thus, in order to give my reasons for the axiom [of symmetry], I move from pure logic to the context of decision theory and speak about beliefs, actions, possible losses, and the like. However, this is not in the field of empirical, but of rational decision theory. Therefore, in giving my reasons, I do not refer to particular empirical results concerning practical agents or particular states of nature and the like. Rather, I refer to conceivable series of observations by X, to conceivable sets of possible acts, of possible states of nature, of possible outcomes of the acts, and the like. These features are characteristic for an analysis of reasonableness of a given function Cr0, in contrast to an investigation of the successfulness of the (initial or later) credence function of a given person in the real world. Success depends upon the particular contingent circumstances, rationality does not. (315; emphasis in original, boldface is mine)

The finding of a rational credibility (or credence) function can be understood in terms of the success of an inductive logic which has been applied using a particular set of resources; namely, the "success" of hypothetical agents working in a hypothetical world – like the possible state descriptions expressible in an object language – can be unambiguously formulated as a problem of applied logic.289 This is Carnap's solution to Hume's problem of induction: he provides us with a set of conceptual resources formulated in semantics and logical syntax and then uses these resources to explain when the credibility or credence functions derived from an inductive logic are rational, when they are reasonable.

It is precisely in the above quoted paragraph that the strands I have been tracking throughout this chapter – including Ramsey's decision theory, Feigl and Reichenbach's pragmatic justification of induction, Savage's normative decision theory and Carnap's reply to Lenz – converge on a single point. It is Ramsey's 1929 note which allows Carnap to understand Ramsey's decision theory – complete with Ramsey's analogies between physical and psychological dispositions – to be a fictional theory for idealized persons, persons with inductive "dispositions." Later, Carnap can appeal to Savage's own normative decision theory as a means for testing the logical consistency of idealized agents.290 Unlike the pragmatism of Ramsey, Feigl and Reichenbach, however, Carnap need not equate the notion of reasonableness with that of (empirical) successfulness.

This is how I would describe Carnap's solution to Hume's problem in engineering terms. The construction and design of a robot, an idealized agent, within the context of rational decision theory is an engineering project. The medium in which the conceptual engineer works, however, is not the physical world of electric circuits and hammers but the world according to a logical system which can be modified and tested using the instruments of logical syntax and semantics. The conceptual engineer constructs different requirements of rationality within this inductive logic. Empirical decision theory places constraints on what requirements of rationality, and thus what kind of robot epistemology, will be useful in the empirical sciences, while the formalization of these requirements in an applied inductive logic means that the engineer will have to construct a pure inductive logic which is adequate for the task of designing such a robot. For Carnap, rational, or normative, decision theory acts as a kind of idealized buffer separating empirical decision theory from inductive logic. Understood as a kind of qualified psychologism, rational decision theory allows Carnap to investigate all the possible ways in which idealized agents or robots can produce credence or credibility values for all possible conceivable states of the world relative to any exchangeable sequence of the agent's total available evidence (all formulated within a particular logical system).

289 As Carnap puts the point to Burks in his Schilpp volume, "[. . . ] I believe that questions of rationality are purely a priori" (Carnap, 1963b, 982). But here the "a priori" itself is, for Carnap, a notion that is best understood relative to some logical framework – no "universal synthetic presuppositions," says Carnap, "[. . . ] are necessary in order to show that a given inductive method is rational" (982).

290 However, Savage seems to have clung to the empirical interpretation more than the normative one. In a letter to Carnap about the 1962 paper, Savage writes that "if I were going to build a robot, I would build him with my own present credence" and later remarks "Do not the concepts of individuals, and names of individuals, imply at least some rudiments of empirical knowledge? Even if the situation is not so bad here as I suspect, would it not be bad indeed when it comes to attributes? Incidentally, at top of page 316, you refer to semantical properties of attributes. I am not well informed about the technical meanings of the "semantical", but is it really possible to discuss semantical properties without considerable foundation in empirical experience?" (my emphasis; Savage to Carnap, November 15, 1963; RC 084-52-08).
This entire project resembles the kind of hierarchy of interconnecting design and construction problems we encountered in chapter 3. Here the operational principle is to design a robot epistemology, based on reasonable rules of rationality for inductive thinking, which may eventually be used to help guide the decision making of actual persons. Applied inductive logic provides us with the formation of a well-structured problem from the ill-structured problem that is Hume's problem of induction. But Carnap provides us with no guarantee that a completely adequate, or the optimally reasonable, robot epistemology will be found; again, this is a process of satisficing rather than searching for the truth.

5.7 Conclusion

In this chapter we have seen three cases where the notion of reasonableness is cashed out in terms of empirical success: (i) Ramsey's appeal to C. S. Peirce's pragmatism, (ii) Feigl and Reichenbach's pragmatic justification, or vindication, of a principle of induction based on the necessary condition that our world is predictable and, finally, (iii) Lenz's argument in his 1956 paper that the performance of an inductive logic, if it is to be a guide in life, clandestinely rests on some synthetic principle. In each case, Carnap would claim that only an analytic statement of an estimation of an observed relative frequency will serve as a guide for decision making. Of course, there are at least as many ways of writing down an estimation function as there are confirmation functions in Carnap's λ-system. Thus the need for rational, or normative, decision theory: it provides Carnap with the conceptual space required for the engineering, designing and constructing of quasi-psychological concepts while remaining sensitive to the empirical sciences, especially empirical decision theory. Thus we have the solution to the original problem we started with.
Carnap's transition from empirical decision theory to normative decision theory and finally to inductive logic provides a case study for how an inductive logic could be "designed" for particular scientific ends without inductive logic becoming an empirical investigation. In particular, the "end" I have in mind is Carnap's "step from credence to credibility," i.e., his attempt to clarify rational decision theory as a means to someday discover a more satisfactory empirical theory of decision making under uncertainty.291 Moreover, as we have seen in the previous chapter and this one, Carnap does not adopt a prescriptive account of inductive logic. Indeed, although Carnap may talk about rational or normative decision theory, the concepts "normative" and "rational" – no less than the concepts of "credence" and "credibility" – are theoretical concepts presumably belonging to empirical theories about decision making, including behavioral economics. In his work on decision theory, Carnap is not backtracking on the claim that traditional epistemology is an unclear mixture of logical and empirical features. Consequently, this chapter provides us with yet another example of how the interplay between the practical and theoretical in Carnap's work on decision theory need not mark a tension in Carnapian logic of science: the 'dialectical' relation between both normative and empirical decision theory and the artificial language of inductive logic is nothing but the methodological investigation of how to apply a pure logic to some scientific domain – it is no more complicated than understanding the distinction between mathematical and physical geometry. In a sense, what Carnap is doing is in principle no different from how scientists make mathematical models. For example, Carnap's work on normative decision theory is arguably akin to how economists construct and apply mathematical models, viz.
models which abstract and idealize actual economizing agents.292 Carnap's talk of explications is talk of empirical concept formation using something like model-based thinking in order to show how vague scientific concepts can be made systematic and exact. (And if this is right, then Carnapian logic of science is no more in tension with "naturalism" or the history of science than when mathematical models are used to help explain and investigate the structure of science – if there is disagreement at all, it is disagreement over when such conceptual technologies are, or are not, appropriate.)

291 However, in the 1970s empirical decision theory takes a different route with the discovery of prospect theory, which takes the place of marginal utility theory; see Erickson et al. (2013); Kahneman and Tversky (1979).

292 For example, see Morgan (2012).

Chapter 6

Conclusion

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
- Abraham H. Maslow, The Psychology of Science: A Reconnaissance (1966)

O False and treacherous Probability,
Enemy of truth, and friend to wickednesse;
With whose bleare eyes Opinion learnes to see,
Truth's feeble party here, and barrennesse.
- Sir Fulke Greville (a.k.a. Lord Brooke), Caelica (1580-1600)

So how did the twentieth-century philosopher Rudolf Carnap attempt to clarify and even resolve foundational questions in the sciences by reformulating those questions in an artificial, logical language?293 In chapters 4 and 5, I detailed how Carnap showed how it was possible both to construct inductive logics based on an explication of a logical meaning of probability and then to apply these logical systems to the empirical sciences, especially theoretical statistics and decision theory.
In chapter 4, I examined how Carnap constructed an inductive logic using semantical concepts and how he then attempted to study a continuum of inductive methods by constructing a parameterization of confirmation functions in terms of how much weight, expressed as λ, each inductive method gives to a logical as opposed to an empirical factor. We then examined how Carnap not only expressed interest in clarifying the entire foundations of statistics by defining all inductive concepts on the basis of a single degree of confirmation (viz. as a "well-connected system of concepts," see Fig. 4.6 on page 127) but also how "optimal" estimation functions, each of which is based on a confirmation function, can be investigated using the λ-system relative to a particular state of the universe (relative to some logical framework). In chapter 5, I provided a rather long historical narrative tracking the development of decision theory and various solutions to Hume's problem of induction in order to provide the context for how Carnap applied his work on pure inductive logic to empirical and rational decision theory. In particular, I explained how Carnap sought to find adequate requirements of rationality by designing a robot, or idealized agent, as a part of rational decision theory. This was a project which not only remained sensitive to the results of the empirical sciences (namely, empirical decision theory) but also laid the groundwork for what sort of pure inductive logics should be constructed so that they can be applied in rational decision theory.

293 The quote from Greville is cited, without attribution, by John Maynard Keynes at the very end of the index to Keynes (1921).
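To make the role of λ vivid, here is a minimal sketch (my own illustration, using the standard closed form for a language with κ equally wide attributes) of the continuum of confirmation functions: λ = 0 yields the purely empirical "straight rule," λ → ∞ yields the purely logical value 1/κ, and λ = κ yields Carnap's preferred function c*.

```python
def c_lambda(s_j, s, kappa, lam):
    """Degree of confirmation that the next individual has attribute j,
    given a sample of s individuals of which s_j had attribute j, in a
    language with kappa equally wide attributes.  lam (Carnap's lambda)
    weights the logical factor 1/kappa against the empirical relative
    frequency s_j/s.  (Undefined for lam = 0 with an empty sample.)"""
    return (s_j + lam / kappa) / (s + lam)

# lam = 0: the straight rule, i.e., the observed relative frequency.
assert c_lambda(3, 4, kappa=2, lam=0) == 3 / 4
# lam = kappa: Carnap's c*, here (3 + 1) / (4 + 2).
assert c_lambda(3, 4, kappa=2, lam=2) == (3 + 1) / (4 + 2)
# Very large lam: the value approaches the purely logical factor 1/kappa.
assert abs(c_lambda(3, 4, kappa=2, lam=1e9) - 1 / 2) < 1e-6
```

Searching for an "optimal" λ relative to a presumed state of the universe then amounts to comparing the estimation functions these confirmation functions induce, which is one way of seeing the instrument-tuning character of CIM.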
The result was the construction of applied inductive logic as an instrument, a tool which could be evaluated as more or less reasonable depending not necessarily on its empirical success but on how well the probability values it supplies to idealized agents work out in hypothetical situations. It is in this sense that logical probability, for Carnap, can be a guide in life. Carnap's work on pure and applied inductive logic is the last installment, and I would suggest his most worked-out example, of his Wissenschaftslogik, his logic of science. In chapter 2, I summarized how several Carnap scholars have attempted to explain the philosophical significance of Carnap's technical projects in the face of criticism from mainstream philosophers like Quine by analogizing Carnap's projects to a kind of linguistic or conceptual engineering. For Richard Creath the wissenschaftslogiker was in charge of constructing logical tools which scientists could then freely pick up and use in their work. Carus and Stein, however, emphasized a certain 'dialectical' relation between the artificial languages being constructed and the natural language used to construct artificial languages – this signified a blurring of the distinction, dear to Carnap, between the practical and the theoretical. Now, according to Carus, we must talk about an ideal of explication. Hillier, by contrast, endorsed a conception of Wissenschaftslogik as a kind of linguistic engineering according to which logical frameworks are models whose adequacy is measured in terms of a certain "fit" with the world. In chapter 3, I articulated a hierarchical conception of engineering design by examining several case studies from the history of engineering.
From the history of aeronautical engineering I examined a case, that of designing aircraft for both control and stability, which focused on how engineering solutions are sometimes indelibly tied up with the subjective preferences and experiences of humans; this is a tangle of the practical and theoretical. I then examined a case study about the design of propellers. There we found that it is typically not feasible to think of engineering in terms of finding optimal solutions to problems; instead engineers satisfice: they frequently must settle for finding "good enough" solutions to problems, even when they are trying to generate empirical generalizations. Lastly, I examined a case study from automotive engineering about the history of anti-lock braking systems. There we found that the path from an engineering problem to its solution is not always simple: one must constantly re-define how to formulate a solution as our measures of success, technologies and technical know-how change. This is an example of how engineering is a process: it begins with an "ill-structured problem" and engineers must figure out how to reformulate and transform that problem into a "well-structured problem." These elements of engineering design – the blending of practical and theoretical considerations, satisficing instead of searching for the truth, and figuring out how to transform an ill-structured problem into a well-structured, but not necessarily "correct," problem – capture the essential elements of Carnapian logic of science. After fixing the interpretation for the skeleton of a logical system, L, as a coarse-grained decision, the conceptual engineer can make any number of fine-grained decisions, like modifying the value of λ, depending on the subjective preferences of an agent or the expected fruitfulness of applying this logic to the sciences.
Logic provides the instruments for carrying out investigations like finding an "optimal" value of λ for use in estimation theory, for example. Nevertheless, as Carnap attempts to construct a "well-connected" system of inductive concepts by, first, locating an adequate confirmation function and, second, using this concept to define all other inductive concepts – like concepts of estimation, information and entropy – he provides no guarantee that such a confirmation function will ever be perfectly adequate to construct such a web of inductive concepts. Instead of interpreting Carnap as finding the "correct" inductive logic in terms of finding the adequate confirmation function c which somehow gets closer to the truth, I suggest it will be more fruitful to understand Carnap's search for an adequate c in terms of satisficing. For example, Carnap writes to Kuhn in the early 1960s after reading a manuscript of Structure of Scientific Revolutions, which was soon to be published in Carnap and Otto Neurath's International Encyclopedia of Unified Science (see Reisch, 1991). "I am convinced that your ideas will be very stimulating for all those who are interested in the nature of scientific theories and especially the cause of forms of their changes," Carnap writes to Kuhn in 1962,

I found very illuminating the parallel you draw with Darwinian evolution: just as Darwin gave up the earlier idea that evolution was directed towards a predetermined goal, men as the perfect organism, and saw it as a process of improvement by natural selection, you emphasize that the development of theories is not directed toward the perfect true theory, but is a process of improvement of an instrument.
In my own work on inductive logic in recent years I have come to a similar idea: that my work and that of a few friends in the step for step solution of problems should not be regarded as leading to "the ideal system", but rather as a step for step improvement of an instrument. Before I read your manuscript I would not have put it in just those words. But your formulations and clarifications by examples and also your analogy with Darwin's theory helped me to see clearer what I had in mind. (Carnap to Kuhn, April 28, 1962; published in Reisch 1991, 267)

Carnap is being modest: we saw in chapter 4 that he had treated inductive logic as an instrument ten years earlier in The Continuum of Inductive Methods (CIM).294 Carnap's suggestion that his own work on inductive logic does not lead to an "ideal system" but is rather similar to a step-by-step process which resembles Darwin's theory of evolution by natural selection is best captured, I would suggest, as a kind of satisficing.295 The notion of conceptual engineering I articulated at the end of chapter 3 is in basic agreement with the views of Creath, Richardson, Friedman, Carus, Stein and Hillier – we would all agree, I think, with the claim that Carnapian Wissenschaftslogik can be fruitfully thought of as a kind of linguistic or conceptual engineering activity. Nevertheless, some of us would disagree about the details of this claim. In particular, I have shown that the sort of blending between the practical and theoretical that Carus and Stein locate in Carnap's talk of explication – a blending which they argue merits a revision or reconception of Carnap's method of explication – is explicitly recognized by Carnap when he applies his work on inductive logic to the foundations of theoretical statistics and decision theory. But Carnap recognizes this blending at the level of the methodology of inductive logic – at the level of deciding how to set up our logic so that it will be a satisfactory instrument.
And the decision to adopt an inductive logic needn't last forever: we are always free to replace our current logical concepts, like our current requirements of rationality, with newly designed logical constructs. Lastly, the historical episodes we encountered in chapter 5 – Carnap's interpretation of Ramsey's decision theory, Carnap's reply to Lenz and how Carnap measures the success of rational decision theory in terms of reasonableness and not empirical success – offer us illustrations of how Carnap can talk about adequate applied logics without cashing out adequacy in terms of empirical success. Linguistic frameworks, for Carnap, need not be understood as models whose fruitfulness or adequacy is measured by their "fit" with the empirical world. There is no first philosophy. Instead, let us clearly lay out our technical proposals for defining the concepts dearest to our hearts – let us program the structure of the world according to how we see things. Then we will undergo the intellectual labor required to compare the consequences of our proposals with the consequences of the proposals belonging to others – all the while keeping in mind what purpose these proposals are meant to serve. Do not hide away the details or leave problematic claims to what will happen "in principle." No concepts are irreplaceable or sacred. This is the attitude Carnap adopted when he explicated the logical concept of probability with inductive logic – he was in the business of designing and constructing conceptual technologies for clarifying and systematizing the foundations of science.

294 And before that, pure semantics as an instrument in Carnap (1943); see Richardson (2013).

295 At least this is how Carnap understood the situation in the 1960s and, perhaps, after the publication of CIM. In the mid-1940s, of course, Carnap was more sanguine about the possibility of finding an adequate inductive logic based on a confirmation function like, e.g., c∗.
He was a conceptual engineer.

Reflections and Future Work

This dissertation has several weaknesses. Most glaring, I think, is that I say relatively little about Carnap's usual interlocutors, like Quine and Gödel. Moreover, save for chapter 2, I do not talk about central themes typically discussed in the Carnap reappraisal literature, including: (i) the historical backdrop of the views on logic by Frege and Wittgenstein, the development of the Vienna Circle, the influence of Marburg neo-Kantians on the early Carnap, the development of metalogic and descriptive geometry and, finally, the influence on Carnap by scientists who articulate a conventionalist conception of science, broadly construed, like Pierre Duhem, Henri Poincaré and Albert Einstein; (ii) the implications of Carnap's principle of tolerance for the philosophy of mathematics, language and logic and the nature of the analytic/synthetic distinction or the a priori as viewed from outside of Carnap's logic of science; and (iii) both the cultural/socio-political features of the Vienna Circle (and logical empiricism) and what the legacy of logical empiricism is, or should be, for contemporary analytical metaphysics, epistemology and philosophy of mind.296 My focus has instead been on the historical relationships between philosophy and science – between Carnap's mature work on inductive logic as a kind of conceptual engineering and the social and statistical sciences. In this sense my dissertation fits the fields of philosophy of science and the history of philosophy of science (HOPOS) better than the history of (early) analytical philosophy.

296 In particular, I have not had a chance to say anything about the views of Carnap articulated by recent analytical philosophers working on "metametaphysics," like David Chalmers, Thomas Hofweber, Matti Eklund, Huw Price or Stephen Yablo; see the articles in Blatti and Lapointe (2016); Chalmers et al. (2009).
In this regard, I regret not having had a chance to say much about the views of Otto Neurath. Thomas Uebel has recently articulated a "bipartite conception of metatheory" which provides a framework to describe the project for the "science of science" jointly engaged in by both Carnap and Neurath: here there is a division of labor between those working on Carnapian logic of science and those working, along the lines of Neurath's "behavioristics of science," on the sociology of science, broadly construed.297 I think something like this view is basically right. I do not discuss many, if any, of the technical details about probability theory and inductive logic lurking in the background of my dissertation, including the mathematical relationships between Carnap's λ-system and de Finetti's representation theorems. I also do not talk at all about the various generalizations of Carnap's work on inductive logic by Jaakko Hintikka, Ilkka Niiniluoto and Theo Kuipers and the work on logical probability by Dana Scott, Peter Krauss and more recent logicians.298 (Nor, for that matter, do I talk at all about connections between inductive logic and contemporary Bayesian statistics and statistical inference.)299 If I had more space I would also have liked to provide a broader historical overview of the field of probability and induction in the 1950s, 1960s and 1970s, including not only work by Kyburg, Putnam, Salmon and Goodman but also various reactions to the formal nature of inductive logic and Hume's problem of induction by philosophers like Bertrand Russell, Mary Hesse, Imre Lakatos, Max Black, Peter Strawson, J. L. Austin, Gilbert Ryle and Stephen Toulmin. Nor do I say anything about earlier philosophical conversations about probability and induction by W. E. Johnson, C. D. Broad, A. J.
Ayer and Karl Pearson – and I say very little about either the actual views of Keynes, Jeffreys and Wrinch or the rise of probabilistic thinking in the empirical sciences.300 I would also have liked to discuss in more detail the views on probability and induction held by Karl Popper and Hans Reichenbach. Not enough work has been done to explore the intellectual and personal relationships between Reichenbach, Popper and Carnap – their differing views on the nature of probability and induction, spurred on, in part, as reactions to the probabilistic revolutions in the empirical sciences, especially quantum and statistical mechanics in physics, are central to the entire project of philosophy of science in the twentieth century.301 Much more attention should be paid to these three philosophers of science, in tandem, by the HOPOS community. My dissertation only takes a small step toward the completion of this larger and much more philosophically significant historical and conceptual project. My original plan was to include a chapter showing how contemporary formal philosophers – especially formal epistemologists – could adopt the conceptual engineering framework to describe their own formal work. Unfortunately, I only hint at how this could actually occur in the dissertation.302 Specifically, I planned to discuss in detail Carnap's correspondence with Richard Jeffrey from the spring of 1957 to the fall of 1958.

297 See, e.g., Uebel (2007; 2012b).

298 See, for example, Niiniluoto (2009) and Demey et al. (2014); Scott and Krauss (1966), respectively.

299 See, e.g., Festa (1993); Romeijn (2009); Skyrms (2012); Sprenger (2009); Zabell (2005; 2009).

300 A good starting point is both Krüger, Daston, and Heidelberger (1990) and Krüger, Gigerenzer, and Morgan (1990).
I then planned on talking about Jeffrey's later "radical probabilism" and how he also treated the probability calculus as an instrument.303 I then wanted to bring in the engineering analogy to explain the different ways in which Carnap and Jeffrey understood logic as an instrument. This was supposed to provide an example of how conceptual engineering could be done in formal epistemology – I plan to pursue this idea in future work.304 This "missing chapter" had two other components which I hope to further develop in the future. The first component was to discuss the similarities between Kuhn's and Carnap's mature views, and I wanted to draw on Carnap's work on inductive logic to suggest how their views did, in fact, differ in a novel way. The second component is what I could call a "forward-looking" approach to HOPOS. Instead of trying to explain how Carnap could have possibly written documents like the Aufbau or LSL by examining the philosophical, logical, mathematical and scientific developments in the late nineteenth and early twentieth centuries, I wanted to suggest that we examine Carnap's role, both historically and conceptually, in the more recent sciences of information theory, operations research, complexity theory, computability theory, cybernetics and behavioral economics from the 1940s onwards. Although the historical connections are loose and scattered, conceptually speaking I think John von Neumann's notion of an applied science (including his work on self-replicating automata) and Herbert Simon's notion of a "bounded rationality" are similar in many respects to the conception of Carnap as conceptual engineer articulated in this dissertation (and here the difference between conceptual engineering as an interpretive framework and as an accurate historical description gives rise to worries about self-fabricated research projects). The better-formed question would be this: after noticing that von Neumann, Carnap and Simon all treat logic and mathematics as instruments, how are their instrumental conceptions of logic similar or dissimilar? I would say more about that question here, but then my conclusion would quickly morph into another substantial chapter. I had also originally planned to discuss Carnap's work on inductive logic against the backdrop of "voluntarist" conceptions of empiricism, like that of van Fraassen (2002; 2011). Engineering design, after all, brings to the forefront the volitional nature of constructing an artifact for a particular purpose. Moreover, this topic soon broaches questions about the politics and values of science – especially when we begin to think about how the foundations of science interact with the policies and economics of science, or the impact of science on society.305 I would like to return to these topics in future work: What do we learn about the relationship between philosophy and science policy from the conception of philosophy as conceptual engineering? Lastly, I would like to incorporate the lessons learned from the present dissertation to clarify and examine broader, more general, philosophical questions.

301 Far too much attention has been paid to deductive logic and the developments of metalogic in the HOPOS and philosophical communities relative to the amount of work done in these communities on induction and probability when one considers that the central foundational issues in twentieth century science after the world wars have largely arisen from the advent of statistical and probabilistic thinking in biology, physics and the social sciences – and more recently with statistical learning theory and "big data" science.

302 But perhaps the work on causal modeling at places like Carnegie Mellon University could provide an example of how such conceptual engineering could take place; e.g., see Spirtes et al. (2000).

303 See Jeffrey (1992b).

304 Part of this work is already in print; see French (2015a).
Do we want to engage in conceptual engineering ourselves? Indeed, what is the purpose of formal work in philosophy and does such work belong in the humanities? What is the philosopher sympathetic to the engineering conception of philosophy to do if they themselves lack the technical and mathematical skills (or lack the motivation and will) to contribute to the "cutting-edge" topics in the philosophy of science? From a historical perspective, I would like to re-examine the work of Carnap's peers, like Imre Lakatos or Stephen Toulmin, who are skeptical of Carnap's technical projects but nonetheless remain sympathetic to a scientific conception of philosophy (and Toulmin himself discusses how the practical takes precedence over the theoretical, providing a potentially interesting relief against Carnap's understanding of the practical and theoretical).306 From a philosophical perspective, aside from perhaps turning to now traditional avenues like the philosophy of technology,307 I would compare the reception of Carnap's technical projects to more mainstream technical projects in philosophy including, for example, clarifications of counterfactuals using possible-world semantics by philosophers like David Lewis.308

305 See, e.g., Douglas (2009; 2010; 2014a;b); Levi (1960); Rudner (1953); Steele (2012).

306 See Lakatos (1968); Toulmin (1972).

307 E.g., see Mitcham (1994).

308 See, e.g., Lewis (1986).

Bibliography

Agresti, A. and X.-L. Meng (Eds.) (2013). Strength in Numbers: The Rising of Academic Statistics Departments in the U.S. Springer: New York.
Allais, M. (1953). Le Comportement de L'Homme Rationnel Devant le Risque: Critique des Postulats et Axiomes de L'École Américaine. Econometrica: Journal of the Econometric Society 21, 503–546.
Arrow, K. (1951). Social Choice and Individual Values. Wiley: New York.
Awodey, S. and A. W. Carus (2007). Carnap's Dream: Gödel, Wittgenstein, and Logical Syntax. Synthese 159 (1), 23–45.
Baird, D.
(2004). Thing Knowledge: A Philosophy of Scientific Instruments. University of California Press: Berkeley.
Bar-Hillel, Y. (1964). Language and Information: Selected Essays on Their Theory and Application. Addison-Wesley Publishing Company: Reading, MA.
Bar-Hillel, Y. and R. Carnap (1953). Semantic Information. The British Journal for the Philosophy of Science 4 (14), 147–157.
Bartha, P. (2010). By Parallel Reasoning. Oxford University Press: Oxford.
Bartha, P., J. Barker, and A. Hájek (2014). Satan, Saint Peter and Saint Petersburg. Synthese 191 (4), 629–660.
Billingsley, P. (1995). Probability and Measure (3rd ed.). John Wiley & Sons: New York.
Birnbaum, A. (1962). On the Foundations of Statistical Inference. Journal of the American Statistical Association 57 (298), 269–306.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer: New York.
Blatti, S. and S. Lapointe (Eds.) (2016). Ontology After Carnap. Oxford University Press: Oxford. Forthcoming.
Blockley, D. (2012). Engineering: A Very Short Introduction. Oxford University Press: Oxford.
Bloor, D. (2011). The Enigma of the Aerofoil: Rival Theories in Aerodynamics, 1909-1930. University of Chicago Press: Chicago.
Bogdan, R. J. (Ed.) (1976). Local Induction, Volume 93. D. Reidel: Dordrecht.
Broad, C. D. (1918). On the Relation between Induction and Probability (Part I.). Mind 27 (108), 389–404.
Burian, R. M. (1977). More than a Marriage of Convenience: On the Inextricability of History and Philosophy of Science. Philosophy of Science 44 (1), 1–42.
Carnap, R. (1922). Der Raum: Ein Beitrag zur Wissenschaftslehre. Kant-Studien (56).
Carnap, R. (1923). Über die Aufgabe der Physik und die Anwendung des Grundsatzes der Einfachstheit. Kant-Studien 28 (1/2), 90–107.
Carnap, R. (1926). Physikalische Begriffsbildung, Volume 174. Wissenschaftliche Buchgesellschaft.
Carnap, R. (1928). Der Logische Aufbau der Welt. Weltkreis Verlag: Berlin-Schlachtensee.
Carnap, R. (1934).
Logische Syntax der Sprache. Springer: Vienna.
Carnap, R. (1936a). Testability and Meaning. Philosophy of Science 3 (4), 419–471.
Carnap, R. (1936b). Wahrheit und Bewährung. Actes du Congrès International de Philosophie Scientifique, 18–23.
Carnap, R. (1937a). Testability and Meaning – Continued. Philosophy of Science 4 (1), 1–40.
Carnap, R. (1937b). The Logical Syntax of Language. Trans. by A. Smeaton. Kegan Paul, Trench, Trubner and Co. Ltd.: London.
Carnap, R. (1939). Foundations of Logic and Mathematics. University of Chicago Press: Chicago.
Carnap, R. (1942). Introduction to Semantics. Harvard University Press: Cambridge.
Carnap, R. (1943). Formalization of Logic. Harvard University Press: Cambridge.
Carnap, R. (1945a). On Inductive Logic. Philosophy of Science 12 (2), 72–97.
Carnap, R. (1945b). The Two Concepts of Probability: The Problem of Probability. Philosophy and Phenomenological Research 5 (4), 513–532.
Carnap, R. (1947a). On the Application of Inductive Logic. Philosophy and Phenomenological Research 8 (1), 133–148.
Carnap, R. (1947b). Probability as a Guide in Life. Journal of Philosophy 44 (6), 141–148.
Carnap, R. (1950). Empiricism, Semantics and Ontology. Revue Internationale de Philosophie 4, 20–40.
Carnap, R. (1951). The Problem of Relations in Inductive Logic. Philosophical Studies 2 (5), 75–80.
Carnap, R. (1952). The Continuum of Inductive Methods. University of Chicago Press: Chicago.
Carnap, R. (1953). Inductive Logic and Science. In Proceedings of the American Academy of Arts and Sciences, Volume 80, pp. 189–197.
Carnap, R. (1956). Meaning and Necessity: A Study in Semantics and Modal Logic (2nd ed.). University of Chicago Press: Chicago.
Carnap, R. (1962a). The Aim of Inductive Logic. In E. Nagel, P. Suppes, and A. Tarski (Eds.), Logic, Methodology and Philosophy of Science, pp. 303–318. Stanford University Press: Stanford.
Carnap, R. (1962b). The Logical Foundations of Probability (2nd ed.). University of Chicago Press: Chicago.
Carnap, R.
(1963a). Intellectual Autobiography. In P. A. Schilpp (Ed.), The Philosophy of Rudolf Carnap, pp. 3–84. Open Court: La Salle.
Carnap, R. (1963b). Replies and Systematic Expositions. In P. A. Schilpp (Ed.), The Philosophy of Rudolf Carnap, pp. 859–1013. Open Court: La Salle.
Carnap, R. (1966). Probability and Content Measure. In P. K. Feyerabend and G. Maxwell (Eds.), Mind, Matter and Method: Essays in Philosophy and Science in Honor of Herbert Feigl, pp. 248–260. University of Minnesota Press: Minneapolis.
Carnap, R. (1968a). Inductive Logic and Inductive Intuition. In I. Lakatos (Ed.), The Problem of Inductive Logic, pp. 258–267. North Holland: Amsterdam.
Carnap, R. (1968b). On Rules of Acceptance. In I. Lakatos (Ed.), The Problem of Inductive Logic, pp. 146–150. North Holland: Amsterdam.
Carnap, R. (1971a). A Basic System of Inductive Logic, Part I. In R. Jeffrey and R. Carnap (Eds.), Studies in Inductive Logic and Probability, Volume 1, pp. 34–165. University of California Press: Los Angeles.
Carnap, R. (1971b). Inductive Logic and Rational Decisions. In R. Jeffrey and R. Carnap (Eds.), Studies in Inductive Logic and Probability, Volume 1, pp. 5–31. University of California Press: Los Angeles.
Carnap, R. (1977). Two Essays on Entropy. University of California Press: Berkeley.
Carnap, R. (1980). A Basic System of Inductive Logic, Part II. In R. Jeffrey (Ed.), Studies in Inductive Logic and Probability, Volume 2, pp. 7–155. University of California Press: Los Angeles.
Carnap, R. (1995). An Introduction to the Philosophy of Science. Dover Publications Inc.: New York.
Carnap, R. and Y. Bar-Hillel (1952). An Outline of a Theory of Semantic Information. Technical report, MIT.
Carnap, R. and W. Stegmüller (1959). Induktive Logik und Wahrscheinlichkeit. Springer: Vienna.
Cartwright, N., J. Cat, L. Fleck, and T. Uebel (1996). Otto Neurath: Philosophy between Science and Politics. Cambridge University Press: Cambridge.
Carus, A. W. (2007). Carnap and Twentieth-Century Thought: Explication as Enlightenment.
Cambridge University Press: Cambridge.
Cassirer, E. (1910). Substanzbegriff und Funktionsbegriff: Untersuchungen über die Grundfragen der Erkenntniskritik. Verlag von Bruno Cassirer: Berlin.
Chaitin, G. J. (1990). Information, Randomness & Incompleteness: Papers on Algorithmic Information Theory (2nd ed.). World Scientific: Singapore.
Chalmers, D., D. Manley, and R. Wasserman (Eds.) (2009). Metametaphysics: New Essays on the Foundations of Ontology. Oxford University Press: Oxford.
Chalmers, D. J. (2012). Constructing the World. Oxford University Press: Oxford.
Chang, H. (2004). Inventing Temperature: Measurement and Scientific Progress. Oxford University Press: Oxford.
Church, A. (1940). On the Concept of a Random Sequence. Bulletin of the American Mathematical Society 46 (2), 130–135.
Coffa, J. A. (1991). The Semantic Tradition from Kant to Carnap: To the Vienna Station. Cambridge University Press: Cambridge.
Cohen, R. S. (Ed.) (1981). Inquiries and Provocations: Selected Writings of Herbert Feigl, 1929-1974. Kluwer Academic Publishers: Dordrecht.
Creath, R. (1987). The Initial Reception of Carnap's Doctrine of Analyticity. Noûs 21 (4), 477–499.
Creath, R. (Ed.) (1990a). Dear Carnap, Dear Van: The Quine-Carnap Correspondence and Related Work. University of California Press: Berkeley.
Creath, R. (1990b). The Unimportance of Semantics. In PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, pp. 405–416.
Creath, R. (1991). Every Dogma Has Its Day. Erkenntnis 35 (1-3), 347–389.
Creath, R. (1992). Carnap's Conventionalism. Synthese 93 (1), 141–165.
Creath, R. (1995). Are Dinosaurs Extinct? Foundations of Science 1 (2), 285–297.
Creath, R. (1996). Languages without Logic. In R. N. Giere and A. W. Richardson (Eds.), Origins of Logical Empiricism, pp. 251–265. University of Minnesota Press: Minneapolis.
Creath, R. (2003). The Linguistic Doctrine and Conventionality: The Main Argument in 'Carnap and Logical Truth'. In G. L.
Hardcastle and A. W. Richardson (Eds.), Logical Empiricism in North America, pp. 234–256. University of Minnesota Press: Minneapolis.
Creath, R. (2009). The Gentle Strength of Tolerance: The Logical Syntax of Language and Carnap's Philosophical Programme. In P. Wagner (Ed.), Carnap's Logical Syntax of Language, pp. 203–214. Palgrave-MacMillan: London.
De Finetti, B. (1931). Sul Significato Soggettivo della Probabilità. Fundamenta Mathematicae XVII, 298–329.
De Finetti, B. (1937). La Prévision: Ses Lois Logiques, ses Sources Subjectives. In Annales de l'Institut Henri Poincaré, Volume 7, pp. 1–68. Presses Universitaires de France.
Demey, L., B. Kooi, and J. Sack (2014). Logic and Probability. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2014 ed.).
Domski, M. and M. Dickson (2010). Discourse on a New Method. Open Court Publishing: Chicago.
Douglas, H. (2009). Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press: Pittsburgh.
Douglas, H. (2010). Engagement for Progress: Applied Philosophy of Science in Context. Synthese 177 (3), 317–335.
Douglas, H. (2014a). Pure Science and the Problem of Progress. Studies in History and Philosophy of Science 46, 55–63.
Douglas, H. (2014b). The Value of Cognitive Values. Philosophy of Science 80 (5), 796–806.
Dubs, H. H. and H. Feigl (1934). The Principle of Induction. Philosophy of Science 1 (4), 482–486.
Dupré, J. (2008). The Constituents of Life. Uitgeverij Van Gorcum.
Durrett, R. (2005). Probability: Theory and Examples (3rd ed.). Thomson.
Dutilh Novaes, C. and E. Reck (2015). Carnapian Explication, Formalisms as Cognitive Tools, and the Paradox of Adequate Formalization. Synthese, 1–21.
Dym, C. L. and D. C. Brown (2012). Engineering Design: Representation and Reasoning. Cambridge University Press: Cambridge.
Earman, J. (1992). Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. MIT Press: Cambridge.
Earman, J. (1993).
Carnap, Kuhn, and the Philosophy of Scientific Methodology. In P. Horwich (Ed.), World Changes: Thomas Kuhn and the Nature of Science, pp. 9–36. University of Pittsburgh Press: Pittsburgh.
Easwaran, K. (2014). Regularity and Hyperreal Credences. Philosophical Review 123 (1), 1–41.
Edwards, A. W. F. (1972). Likelihood. Cambridge University Press: Cambridge.
Eells, E. (1982). Rational Decision and Causality. Cambridge University Press: Cambridge.
Efron, B. (1975). Biased versus Unbiased Estimation. Advances in Mathematics 16 (3), 259–277.
Erickson, P., J. L. Klein, L. Daston, R. Lemov, T. Sturm, and M. D. Gordin (2013). How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality. University of Chicago Press: Chicago.
Feigl, H. (1930). Wahrscheinlichkeit und Erfahrung. Erkenntnis 1 (1), 249–259.
Feigl, H. (1934). The Logical Character of the Principle of Induction. Philosophy of Science 1 (1), 20–29.
Feigl, H. (1950). Existential Hypotheses: Realistic Versus Phenomenalistic Interpretations. Philosophy of Science 17 (1), 35–62.
Feller, W. (1968). An Introduction to Probability Theory and Its Applications: Volume 1 (3rd ed.). J. Wiley & Sons: New York.
Festa, R. (1993). Optimum Inductive Methods. Kluwer Academic Publishers: Dordrecht.
Fishburn, P. C. (1981). Subjective Expected Utility: A Review of Normative Theories. Theory and Decision 13 (2), 139–199.
Fisher, R. A. (1922). On the Mathematical Foundations of Theoretical Statistics. Philosophical Transactions of the Royal Society of London, Series A, 309–368.
Florman, S. C. (1996). The Existential Pleasures of Engineering (2nd ed.). Macmillan: London.
French, C. F. (2015a). Explicating Formal Epistemology: Carnap's Legacy as Jeffrey's Radical Probabilism. Studies in History and Philosophy of Science. Forthcoming.
French, C. F. (2015b). Rudolf Carnap: Philosophy of Science as Engineering Explications. In U. Mäki, S. Ruphy, G. Schurz, and I.
Votsis (Eds.), Recent Developments in the Philosophy of Science: EPSA13 Helsinki. Springer. Forthcoming.
Friedman, M. (1999). Reconsidering Logical Positivism. Cambridge University Press: Cambridge.
Friedman, M. (2001). Dynamics of Reason. CSLI Publications: Stanford.
Friedman, M. (2007). Introduction: Carnap's Revolution in Philosophy. In M. Friedman and R. Creath (Eds.), The Cambridge Companion to Carnap, pp. 1–18. Cambridge University Press: Cambridge.
Friedman, M. (2009). Tolerance, Intuition, and Empiricism. In P. Wagner (Ed.), Carnap's Logical Syntax of Language, pp. 236–249. Palgrave-MacMillan: London.
Friedman, M. and R. Creath (Eds.) (2007). The Cambridge Companion to Carnap. Cambridge University Press: Cambridge.
Frost-Arnold, G. (2013). Carnap, Tarski, and Quine at Harvard: Conversations on Logic, Mathematics, and Science. Open Court: La Salle.
Galavotti, M. C. (2005). Philosophical Introduction to Probability. CSLI Publications: Stanford.
Galavotti, M. C. (2011a). On Hans Reichenbach's Inductivism. Synthese 181 (1), 95–111.
Galavotti, M. C. (2011b). The Modern Epistemic Interpretations of Probability: Logicism and Subjectivism. In D. M. Gabbay and J. Woods (Eds.), Handbook of the History of Logic: Inductive Logic, Volume 10, pp. 153–203. North Holland: Amsterdam.
Galison, P. (1997). Image and Logic: A Material Culture of Microphysics. University of Chicago Press: Chicago.
Gibbs-Smith, C. H. (1960). The Aeroplane: An Historical Survey of Its Origins and Development. Her Majesty's Stationery Office: London.
Gibbs-Smith, C. H. (1966). The Invention of the Aeroplane, 1799-1909. Taplinger: New York.
Giere, R. and A. W. Richardson (Eds.) (1996). Origins of Logical Empiricism. University of Minnesota Press: Minneapolis.
Giere, R. N. (1973). History and Philosophy of Science: Intimate Relationship or Marriage of Convenience? The British Journal for the Philosophy of Science 24 (3), 282–297.
Giere, R. N. (1985). Philosophy of Science Naturalized.
Philosophy of Science 52 (3), 331–356.
Giere, R. N. (1988). Explaining Science: A Cognitive Approach. University of Chicago Press: Chicago.
Gillies, D. (2000). Philosophical Theories of Probability. Routledge: London.
Giocoli, N. (2013). From Wald to Savage: Homo Economicus Becomes a Bayesian Statistician. Journal of the History of the Behavioral Sciences 49 (1), 63–95.
Glymour, C. and F. Eberhardt (2014). Hans Reichenbach. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2014 ed.).
Glymour, C. and K. T. Kelly (1992). Thoroughly Modern Meno. In Inference, Explanation, and Other Frustrations: Essays in the Philosophy of Science, pp. 3–22. University of California Press: Berkeley.
Gödel, K. (1953). Is Mathematics Syntax of Language? In Kurt Gödel: Collected Works, Volume 3, pp. 334–355. Oxford University Press: Oxford.
Goldfarb, W. (1995). Introductory Note to *1953/9. In Kurt Gödel: Collected Works, Volume 3, pp. 324–333. Oxford University Press: Oxford.
Goldfarb, W. and T. Ricketts (1992). Carnap and the Philosophy of Mathematics. In Science and Subjectivity, pp. 61–78. Akademie Verlag: Berlin.
Good, I. J. (1950). Probability and the Weighing of Evidence. Charles Griffin & Company Limited: London.
Good, I. J. (1965). The Estimation of Probabilities: An Essay on Modern Bayesian Methods. MIT Press: Cambridge.
Goodman, N. (1946). A Query on Confirmation. The Journal of Philosophy 43 (14), 383–385.
Goodman, N. (1955). Fact, Fiction, and Forecast. Harvard University Press: Cambridge.
Hacking, I. (1965). Logic of Statistical Inference. Cambridge University Press: Cambridge.
Hacking, I. (1971). The Leibniz-Carnap Program for Inductive Logic. The Journal of Philosophy 68 (19), 597–610.
Hacking, I. (2006). The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference (2nd ed.). Cambridge University Press: Cambridge.
Hájek, A. (2003). What Conditional Probability Could Not Be.
Synthese 137 (3), 273–323.
Hájek, A. (2005). Scotching Dutch Books? Philosophical Perspectives 19 (1), 139–151.
Hájek, A. (2012). Interpretations of Probability. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2012 ed.).
Hardcastle, G. L. and A. W. Richardson (Eds.) (2003). Logical Empiricism in North America. University of Minnesota Press: Minneapolis.
Hastie, T., R. Tibshirani, and J. Friedman (2010). The Elements of Statistical Learning. Springer: New York.
Heidelberger, M. (2001). Origins of the Logical Theory of Probability: von Kries, Wittgenstein, Waismann. International Studies in the Philosophy of Science 15 (2), 177–188.
Heims, S. J. (1991). The Cybernetics Group 1946-1953: Constructing a Social Science for Postwar America. MIT Press: Cambridge.
Hesse, M. B. (1966). Models and Analogies in Science. University of Notre Dame Press: Notre Dame.
Heukelom, F. (2014). Behavioral Economics: A History. Cambridge University Press: Cambridge.
Hillier, S. (2007). Understanding Logical Empiricism: Language Engineering in Carnap's Logical Syntax of Language. Ph.D. thesis, University of California, Irvine.
Hilpinen, R. (1968). Rules of Acceptance and Inductive Logic. North Holland: Amsterdam.
Hintikka, J. (Ed.) (1975). Rudolf Carnap, Logical Empiricist. Springer: Netherlands.
Hodges, J. L. and E. L. Lehmann (1950). Minimax Point Estimation. Annals of Mathematical Statistics 21, 182–197.
Hosiasson-Lindenbaum, J. (1940). On Confirmation. Journal of Symbolic Logic 5 (4), 133–148.
Howie, D. (2002). Interpreting Probability: Controversies and Developments in the Early Twentieth Century. Cambridge University Press: Cambridge.
Howson, C. and P. Urbach (2006). Scientific Reasoning: The Bayesian Approach (3rd ed.). Open Court Publishing: Chicago.
Irzik, G. and T. Grünberg (1995). Carnap and Kuhn: Arch Enemies or Close Allies? British Journal for the Philosophy of Science 46 (3), 285–307.
Isaac, J. (2012).
Working Knowledge: Making the Human Sciences from Parsons to Kuhn. Harvard University Press: Cambridge.
Jaynes, E. T. (1957a). Information Theory and Statistical Mechanics. Physical Review 106 (4), 620–630.
Jaynes, E. T. (1957b). Information Theory and Statistical Mechanics. II. Physical Review 108 (2), 171–190.
Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press: Cambridge.
Jeffrey, R. C. (1957, June). Contributions to the Theory of Inductive Probability. Ph.D. thesis, Princeton University.
Jeffrey, R. C. (1966). Goodman's Query. The Journal of Philosophy 63 (11), 281–288.
Jeffrey, R. C. (1970). Dracula Meets Wolfman: Acceptance vs. Partial Belief. In M. Swain (Ed.), Induction, Acceptance, and Rational Belief, Volume 26, pp. 157–185. Reidel: Dordrecht.
Jeffrey, R. C. (1973). Carnap's Inductive Logic. Synthese 25, 299–306.
Jeffrey, R. C. (1974). Carnap's Empiricism. In G. Maxwell and R. M. Anderson, Jr. (Eds.), Minnesota Studies in the Philosophy of Science, Volume 6. University of Minnesota Press: Minneapolis.
Jeffrey, R. C. (Ed.) (1980). Studies in Inductive Logic and Probability, Volume 2. University of California Press: Los Angeles.
Jeffrey, R. C. (1989). Reading Probabilismo. Erkenntnis 31 (2-3), 225–237.
Jeffrey, R. C. (1990). The Logic of Decision (3rd, revised ed.). University of Chicago Press: Chicago.
Jeffrey, R. C. (1992a). Probability and the Art of Judgment. Cambridge University Press: Cambridge.
Jeffrey, R. C. (1992b). Radical Probabilism (Prospectus for a User's Manual). Philosophical Issues 2, 193–204.
Jeffrey, R. C. (1994). Carnap's Voluntarism. In D. Prawitz, B. Skyrms, and D. Westerståhl (Eds.), Logic, Methodology and Philosophy of Science, Volume IX, pp. 847–866. Elsevier: Amsterdam.
Jeffrey, R. C. (2004). Subjective Probability: The Real Thing. Cambridge University Press: Cambridge.
Jeffrey, R. C. and R. Carnap (Eds.) (1971). Studies in Inductive Logic and Probability, Volume 1.
University of California Press: Los Angeles.
Jeffreys, H. (1931). Scientific Inference. Cambridge University Press: Cambridge.
Jeffreys, H. (1939). Theory of Probability. Oxford University Press: Oxford.
Johnson, A. (2009). Hitting the Brakes: Engineering Design and the Production of Knowledge. Duke University Press: Durham.
Johnson, W. E. (1932). Probability: The Deductive and Inductive Problems. Mind 41, 409–423.
Justus, J. (2012). Carnap on Concept Determination: Methodology for Philosophy of Science. European Journal for Philosophy of Science 2 (2), 161–179.
Kahneman, D. and A. Tversky (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica: Journal of the Econometric Society 47, 263–291.
Kelly, K. T. (1996). The Logic of Reliable Inquiry. Oxford University Press: Oxford.
Kemeny, J. G. (1951). Carnap on Probability. The Review of Metaphysics, 145–156.
Kemeny, J. G. (1953). A Logical Measure Function. The Journal of Symbolic Logic 18 (4), 289–308.
Kemeny, J. G. (1955). Fair Bets and Inductive Probabilities. The Journal of Symbolic Logic 20 (3), 263–273.
Kemeny, J. G. (1956a). A New Approach to Semantics – Part I. The Journal of Symbolic Logic 21 (1), 1–27.
Kemeny, J. G. (1956b). A New Approach to Semantics – Part II. The Journal of Symbolic Logic 21 (2), 149–161.
Kemeny, J. G. (1963). Carnap's Theory of Probability and Induction. In P. A. Schilpp (Ed.), The Philosophy of Rudolf Carnap, pp. 711–738. Open Court: La Salle.
Kendall, M. G. (1943/1948). The Advanced Theory of Statistics, Two Volumes. Charles Griffin: London.
Keynes, J. M. (1921). A Treatise on Probability. Macmillan: London.
Kitcher, P. (1992). The Naturalists Return. The Philosophical Review 101 (1), 53–114.
Kitcher, P. (1993). The Advancement of Science: Science without Legend, Objectivity without Illusion. Oxford University Press: New York.
Kitcher, P. (2010). Carnap and the Caterpillar. Philosophical Topics 36 (1), 111–127.
Kjeldsen, T. H. (2001).
John von Neumann's Conception of the Minimax Theorem: A Journey Through Different Mathematical Contexts. Archive for History of Exact Sciences 56, 39–68.
Köhler, E. (2001). Why Von Neumann Rejected Carnap's Dualism of Information Concepts. In M. Rédei and M. Stöltzner (Eds.), John von Neumann and the Foundations of Quantum Physics, pp. 97–134. Kluwer Academic Publishers: Dordrecht.
Kolmogorov, A. N. (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung. Springer: Berlin.
Koopman, B. O. (1940). The Axioms and Algebra of Intuitive Probability. Annals of Mathematics 41 (2), 269–292.
Krüger, L., L. J. Daston, and M. Heidelberger (Eds.) (1990). The Probabilistic Revolution, Volume 1. MIT Press: Cambridge.
Krüger, L., G. Gigerenzer, and M. S. Morgan (Eds.) (1990). The Probabilistic Revolution, Volume 2. MIT Press: Cambridge.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press: Chicago.
Kuhn, T. S. (1977). Objectivity, Value Judgment, and Theory Choice. In The Essential Tension: Selected Studies in Scientific Tradition and Change, pp. 320–339. University of Chicago Press: Chicago.
Kuhn, T. S. (1983). Rationality and Theory Choice. The Journal of Philosophy 80 (10), 563–570.
Kuipers, T. A. F. (1978). Studies in Inductive Probability and Rational Expectation. D. Reidel: Dordrecht.
Kuipers, T. A. F. (2007). Explication in Philosophy of Science. In T. A. F. Kuipers (Ed.), Handbook of the Philosophy of Science: General Philosophy of Science – Focal Issues, pp. vii–xxiii. Elsevier: Amsterdam.
Kusch, M. (1995). Psychologism: A Case Study in the Sociology of Philosophical Knowledge. Psychology Press: New York.
Kyburg, H. E. J. (1964). Recent Work in Inductive Logic. American Philosophical Quarterly 1 (4), 249–287.
Kyburg, H. E. J. and H. E. Smokler (Eds.) (1964). Studies in Subjective Probability. Wiley: New York.
Lakatos, I. (1968). Changes in the Problem of Inductive Logic. In I. Lakatos (Ed.), The Problem of Inductive Logic, pp.
315–417. North Holland: Amsterdam.
Laudan, L. (1996). Beyond Positivism and Relativism. Westview: Boulder, CO.
Lehman, R. S. (1955). On Confirmation and Rational Betting. The Journal of Symbolic Logic 20 (3), 251–262.
Lenz, J. W. (1956). Carnap on Defining "Degree of Confirmation". Philosophy of Science 23 (3), 230–236.
Leonard, R. (2010). Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900–1960. Cambridge University Press: Cambridge.
Levi, I. (1960). Must the Scientist Make Value Judgments? Journal of Philosophy 57 (11), 345–357.
Levi, I. (1967). Gambling with Truth: An Essay on Induction and the Aims of Science. MIT Press: Cambridge.
Levi, I. (1980). The Enterprise of Knowledge: An Essay on Knowledge, Credal Probability, and Chance. MIT Press: Cambridge.
Lewis, D. K. (1986). On the Plurality of Worlds. Blackwell: Oxford.
Luce, R. D. and H. Raiffa (1957). Games and Decisions: Introduction and Critical Survey. Wiley: New York.
Maher, P. (2010). Explication of Inductive Probability. Journal of Philosophical Logic 39 (6), 593–616.
Mahoney, M. (2004). Finding a History for Software Engineering. IEEE Annals of the History of Computing 26, 8–19.
Meacham, C. J. and J. Weisberg (2011). Representation Theorems and the Foundations of Decision Theory. Australasian Journal of Philosophy 89 (4), 641–663.
Mellor, D. H. (Ed.) (1990). F. P. Ramsey: Philosophical Papers. Cambridge University Press: Cambridge.
Menger, K. (1979). Karl Menger: Selected Papers in Logic and Foundations, Didactics, Economics. D. Reidel: Dordrecht.
Mitcham, C. (1994). Thinking through Technology: The Path between Engineering and Philosophy. University of Chicago Press: Chicago.
Morgan, M. S. (2012). The World in the Model: How Economists Work and Think. Cambridge University Press: Cambridge.
Nagel, E. (1939). Principles of the Theory of Probability. In International Encyclopedia of Unified Science, Volume 1.
University of Chicago Press: Chicago.
Newell, A. and H. A. Simon (1956). The Logic Theory Machine – A Complex Information Processing System. IRE Transactions on Information Theory 2 (3), 61–79.
Neyman, J. (1938). L'Estimation Statistique, Traitée comme un Problème Classique de Probabilité. Actualités Scientifiques et Industrielles 739, 25–57.
Niiniluoto, I. (2009). The Development of the Hintikka Program. In D. M. Gabbay, S. Hartmann, and J. Woods (Eds.), Handbook of the History of Logic, Volume 10, pp. 311–356. Elsevier: Amsterdam.
Norton, J. (2003). A Material Theory of Induction. Philosophy of Science 70, 647–670.
Norton, J. (2010). There Are No Universal Rules for Induction. Philosophy of Science 77, 765–777.
Ortner, R. and H. Leitgeb (2009). Mechanizing Induction. In D. M. Gabbay, S. Hartmann, and J. Woods (Eds.), Handbook of the History of Logic: Inductive Logic, Volume 10, pp. 719–772. Elsevier: Amsterdam.
Padovani, F. (2008). Probability and Causality in the Early Works of Hans Reichenbach. Ph.D. thesis, University of Geneva.
Padovani, F. (2011). Relativizing the Relativized A Priori: Reichenbach's Axioms of Coordination Divided. Synthese 181 (1), 41–62.
Pakszys, E. (1998). Women's Contributions to the Achievements of the Lvov-Warsaw School: A Survey. In K. Kijania-Placek and J. Woleński (Eds.), The Lvov-Warsaw School and Contemporary Philosophy, Volume 273 of Synthese Library, pp. 55–71. Springer: Netherlands.
Petroski, H. (1992). To Engineer Is Human: The Role of Failure in Successful Design. Vintage Books: New York.
Petroski, H. (2012). To Forgive Design: Understanding Failure. Harvard University Press: Cambridge.
Polanyi, M. (1958). Personal Knowledge: Towards a Post-Critical Philosophy. Routledge & Kegan Paul: London.
Porter, T. M. (1986). The Rise of Statistical Thinking, 1820-1900. Princeton University Press: Princeton.
Putnam, H. (1963). 'Degree of Confirmation' and Inductive Logic. In P. A. Schilpp (Ed.), The Philosophy of Rudolf Carnap, pp.
761–783. Open Court: La Salle.
Quine, W. v. O. (1951). Main Trends in Recent Philosophy: Two Dogmas of Empiricism. The Philosophical Review 60, 20–43.
Quine, W. v. O. (1969). Ontological Relativity and Other Essays. Columbia University Press: New York.
Radder, H. (Ed.) (2003). The Philosophy of Scientific Experimentation. University of Pittsburgh Press: Pittsburgh.
Ramsey, F. P. (1926 [1990]). Truth and Probability. In D. H. Mellor (Ed.), F. P. Ramsey: Philosophical Papers, pp. 52–94. Cambridge University Press: Cambridge.
Reck, E. (2012). Carnapian Explication: A Case Study and Critique. In P. Wagner (Ed.), Carnap's Ideal of Explication and Naturalism, pp. 96–116. Palgrave-MacMillan: London.
Reck, E. H. (Ed.) (2013). The Historical Turn in Analytic Philosophy. Palgrave-MacMillan: London.
Redmond, K. and T. M. Smith (2000). From Whirlwind to MITRE: The R&D Story of the SAGE Air Defense Computer. MIT Press: Cambridge.
Reichenbach, H. (1920). Relativitätstheorie und Erkenntnis apriori. Verlag von Julius Springer: Berlin.
Reichenbach, H. (1930). Kausalität und Wahrscheinlichkeit. Erkenntnis 1 (1), 158–188.
Reichenbach, H. (1932). Axiomatik der Wahrscheinlichkeitsrechnung. Mathematische Zeitschrift 34 (4), 568–619.
Reichenbach, H. (1935). Wahrscheinlichkeitslehre: Eine Untersuchung über die logischen und mathematischen Grundlagen der Wahrscheinlichkeitsrechnung. A. W. Sijthoff's Uitgeversmij: Leyden.
Reichenbach, H. (1938). Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge. University of Chicago Press: Chicago.
Reichenbach, H. (1945). Reply to Donald C. Williams' Criticism of the Frequency Theory of Probability. Philosophy and Phenomenological Research 5 (4), 508–512.
Reichenbach, H. (1949). The Theory of Probability: An Inquiry into the Logical and Mathematical Foundations of the Calculus of Probability. University of California Press: Berkeley.
Reichenbach, H. (2008).
The Concept of Probability in the Mathematical Representation of Reality. Open Court: La Salle.
Reisch, G. A. (1991). Did Kuhn Kill Logical Empiricism? Philosophy of Science 58 (2), 264–277.
Reisch, G. A. (2005). How the Cold War Transformed Philosophy of Science: To the Icy Slopes of Logic. Cambridge University Press: Cambridge.
Richardson, A. W. (1994). The Limits of Tolerance: Carnap's Logico-Philosophical Project in Logical Syntax. Proceedings of the Aristotelian Society, 67–82.
Richardson, A. W. (1996). From Epistemology to the Logic of Science: Carnap's Philosophy of Empirical Knowledge in the 1930s. In R. Giere and A. W. Richardson (Eds.), Origins of Logical Empiricism, pp. 309–332. University of Minnesota Press: Minneapolis.
Richardson, A. W. (1997a). Toward a History of Scientific Philosophy. Perspectives on Science 5 (3), 418–451.
Richardson, A. W. (1997b). Two Dogmas about Logical Empiricism: Carnap and Quine on Logic, Epistemology, and Empiricism. Philosophical Topics 25, 145–168.
Richardson, A. W. (1998). Carnap's Construction of the World: The Aufbau and the Emergence of Logical Empiricism. Cambridge University Press: Cambridge.
Richardson, A. W. (2000). Science as Will and Representation: Carnap, Reichenbach, and the Sociology of Science. Philosophy of Science 67 (3), 162.
Richardson, A. W. (2002). Engineering Philosophy of Science: American Pragmatism and Logical Empiricism in the 1930s. Proceedings of the Philosophy of Science Association 2002 (3), 36–47.
Richardson, A. W. (2004). Tolerating Semantics: Carnap's Philosophical Point of View. In S. Awodey and C. Klein (Eds.), Carnap Brought Home: The View from Jena, pp. 63–78. The Open Court: Chicago.
Richardson, A. W. (2005). 'The Tenacious, Malleable, Indefatigable, and Yet, Eternally Modifiable Will': Hans Reichenbach's Knowing Subject. Proceedings of the Aristotelian Society 79, 73–87.
Richardson, A. W. (2007).
'That Sort of Everyday Image of Logical Positivism': Thomas Kuhn and the Decline of Logical Empiricist Philosophy of Science. In A. W. Richardson and T. Uebel (Eds.), The Cambridge Companion to Logical Empiricism, pp. 346–370. Cambridge University Press: Cambridge.
Richardson, A. W. (2011). But What Then Am I, This Inexhaustible, Unfathomable Historical Self? Or, Upon What Ground May One Commit Empiricism? Synthese 178 (1), 143–154.
Richardson, A. W. (2013). Taking the Measure of Carnap's Philosophical Engineering: Metalogic as Metrology. In E. H. Reck (Ed.), The Historical Turn in Analytic Philosophy, pp. 60–77. Palgrave-MacMillan: London.
Ricketts, T. (1994). Carnap's Principle of Tolerance, Empiricism, and Conventionalism. In B. Hale (Ed.), Reading Putnam, pp. 176–200. Blackwell: Oxford.
Ricketts, T. (1996). Carnap: From Logical Syntax to Semantics. In R. N. Giere and A. W. Richardson (Eds.), Origins of Logical Empiricism, pp. 231–250. University of Minnesota Press: Minneapolis.
Ricketts, T. (2003). Languages and Calculi. In G. L. Hardcastle and A. W. Richardson (Eds.), Logical Empiricism in North America, pp. 257–280. University of Minnesota Press: Minneapolis.
Romeijn, J. W. (2009). Inductive Logic and Statistics. In D. M. Gabbay, S. Hartmann, and J. Woods (Eds.), Handbook of the History of Logic: Inductive Logic, Volume 10, pp. 625–650. Elsevier: Amsterdam.
Rosenkrantz, R. D. (1981). Foundations and Applications of Inductive Probability. Ridgeview Press: Atascadero, CA.
Rudner, R. (1953). The Scientist Qua Scientist Makes Value Judgments. Philosophy of Science 20 (1), 1–6.
Salmon, W. C. (1957). Should We Attempt to Justify Induction? Philosophical Studies 8 (3), 33–48.
Salmon, W. C. (1963). On Vindicating Induction. Philosophy of Science 30 (3), 252–261.
Salmon, W. C. (1967). The Foundations of Scientific Inference. University of Pittsburgh Press: Pittsburgh.
Salmon, W. C. (1988). Dynamic Rationality: Propensity, Probability, and Credence. In J.
H. Fetzer (Ed.), Probability and Causality: Essays in Honor of Wesley C. Salmon, pp. 3–40. Springer: Netherlands.
Sarkar, S. (2013). Carnap and the Compulsions of Interpretation: Reining in the Liberalization of Empiricism. European Journal for Philosophy of Science 3 (3), 353–372.
Savage, L. J. (1951). The Theory of Statistical Decision. Journal of the American Statistical Association 46, 55–67.
Savage, L. J. (1954 [1972]). The Foundations of Statistics. John Wiley & Sons: New York.
Savage, L. J. (1964). The Foundations of Statistics Reconsidered. In H. E. J. Kyburg and H. E. Smokler (Eds.), Studies in Subjective Probability, pp. 173–188. Wiley: New York.
Schickore, J. (2014). Scientific Discovery. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 ed.).
Schickore, J. and F. Steinle (Eds.) (2006). Revisiting Discovery and Justification. Springer: Netherlands.
Schilpp, P. A. (Ed.) (1963). The Philosophy of Rudolf Carnap. Open Court: La Salle.
Scott, D. and P. Krauss (1966). Assigning Probabilities to Logical Formulas. In J. Hintikka and P. Suppes (Eds.), Aspects of Inductive Logic, Volume 43, pp. 219–264. Elsevier: Amsterdam.
Shepard, J. and J. Justus (2014). X-Phi and Carnapian Explication. Erkenntnis 80 (2), 381–402.
Shimony, A. (1955). Coherence and the Axioms of Confirmation. The Journal of Symbolic Logic 20 (1), 1–28.
Shimony, A. (1992). On Carnap: Reflections of a Metaphysical Student. Synthese 93 (1), 261–274.
Simon, H. A. (1957). Models of Man: Social and Rational; Mathematical Essays on Rational Human Behavior in a Social Setting. Wiley: New York.
Simon, H. A. (1973). The Structure of Ill-Structured Problems. Artificial Intelligence 4, 181–201.
Simon, H. A. (1996). The Sciences of the Artificial (3rd ed.). MIT Press: Cambridge.
Skidelsky, R. J. A. (2003). John Maynard Keynes, 1883-1946: Economist, Philosopher, Statesman. Macmillan: London and Basingstoke.
Skyrms, B. (1975). Choice and Chance: An Introduction to Inductive Logic.
Dickenson Pub. Co. Skyrms, B. (1984). Pragmatics and Empiricism. Yale University Press: New Haven. Skyrms, B. (2012). From Zeno to Arbitrage: Essays on Quantity, Coherence, and Induction. Oxford University Press: Oxford. Soames, S. (2003). Philosophical Analysis in the Twentieth Century, volumes 1 & 2. Princeton University Press: Princeton. Sober, E. (2008). Evidence and Evolution: The Logic Behind the Science. Cambridge University Press: Cambridge. Sober, E. (2011). Reichenbach's Cubical Universe and the Problem of the External World. Synthese 181 (1), 3–21. Spirtes, P., C. N. Glymour, and R. Scheines (2000). Causation, Prediction, and Search (2nd ed.). MIT Press: Cambridge. Sprenger, J. (2009). Statistics between inductive logic and empirical science. Journal of Applied Logic 7 (2), 239–250. Stadler, F. (2001). The Vienna Circle: Studies in the Origins, Development, and Influences of Logical Empiricism. Springer: Vienna. Steele, K. (2012). The Scientist Qua Policy Advisor Makes Value Judgments. Philosophy of 202 Bibliography Science 79 (5), 893–904. Stein, H. (1992). Was Carnap Entirely Wrong, After All? Synthese 1 (2), 275–295. Stein, H. (2004). The Enterprise of Understanding and the Enterprise of Knowledge. Synthese 140 (1), 135–176. Sterrett, S. G. (2002). Physical pictures: Engineering models circa 1914 and in Wittgenstein's Tractatus. In History of Philosophy of Science, pp. 121–135. Springer. Stigler, S. M. (1986). The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press: Cambridge. Stigler, S. M. (2002). Statistics on the Table: The History of Statistical Concepts and Methods. Harvard University Press: Cambridge. Stigler, S. M. (2013). University of Chicago Department of Statistics. In A. Agresti and X.-L. Meng (Eds.), Strength in Numbers: The Rising of Academic 339 Statistics Departments in the U.S., pp. 339 – 351. Springer: New York. Strawson, P. F. (1963). 
Carnap's Views on Conceptual Systems versus Natural Languages in Analytic Philosophy. In P. A. Schilpp (Ed.), The Philosophy of Rudolf Carnap, pp. 503–518. Open Court: La Salle. Suppe, F. (Ed.) (1977). The Structure of Scientific Theories. University of Illinois Press: Urbana. Swain, M. (Ed.) (1970). Induction, Acceptance, and Rational Belief. Reidel: Dordrecht. Todhunter, I. (1865). A History of the Mathematical Theory of Probability: From the Time of Pascal to that of Laplace. Macmillan and Company: London. Toulmin, S. (1972). Human Understanding, Vol. I: The Collective Use and Evolution of Concepts. Princeton University Press: Princeton. Uebel, T. (2007). Empiricism at the Crossroads: The Vienna Circle's Protocol-Sentence Debate. Open Court: La Salle. Uebel, T. (2012a). Carnap's Logic of Science and Personal Probability. In D. Dieks, W. J. Gonzalez, S. Hartmann, M. Stöltzner, and M. Weber (Eds.), Probabilities, Laws, and Structures, Volume 3 of The Philosophy of Science in a European Perspective, pp. 469–479. Springer: Netherlands. Uebel, T. (2012b). The Bipartite Conception of Metatheory and the Dialectical Conception of Explication. In P. Wagner (Ed.), Carnap's Ideal of Explication and Naturalism, pp. 117–130. Palgrave-MacMillan: London. Uebel, T. and A. W. Richardson (Eds.) (2007). The Cambridge Companion to Logical Empiricism. Cambridge University Press: Cambridge. van Fraassen, B. C. (1980). The Scientific Image. Oxford University Press: Oxford. van Fraassen, B. C. (1984). Belief and the Will. The Journal of Philosophy , 235–256. van Fraassen, B. C. (1989). Laws and Symmetry. Oxford University Press: Oxford. van Fraassen, B. C. (2000). The False Hopes of Traditional Epistemology. Philosophical and Phenomenological Research, 253–280. van Fraassen, B. C. (2002). The Empirical Stance. Yale University Press. van Fraassen, B. C. (2011). On Stance and Rationality. Synthese 178 (1), 155–169. 203 Bibliography Vapnik, V. (2000). 
The Nature of Statistical Learning Theory (2nd ed.). Springer: New York. Vincenti, W. G. (1990). What Engineers Know and How They Know It: Analytical studies from Aeronautical History. John Hopkins University: Baltimore. von Kries, J. (1886). Die Principien der Wahrscheinlichkeitsrechnung: eine logische Untersuchung. Akademische Verlagsbuchhandlung von J. C. B. Mohr: Freiburg. von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen 100, 295–320. von Neumann, J. and O. Morgenstern (1947). Theory of Games and Economic Behavior (2nd ed.). Princeton University: Princeton. Von Plato, J. (1998). Creating Modern Probability: Its Mathematics, Physics and Philosophy in Historical Perspective. Cambridge University Press: Cambridge. Wagner, P. (Ed.) (2009). Carnap's Logical Syntax of Language. Palgrave-MacMillan: London. Wagner, P. (2011). Carnap's Theories of Confirmation. In M. C. Galavotti (Ed.), Explanation, Prediction, and Confirmation, The Philosophy of Science in a European Perspective, pp. 477–486. Springer: Netherlands. Wagner, P. (Ed.) (2012). Carnap's Ideal of Explication and Naturalism. Palgrave-MacMillan: London. Waismann, F. (1930). Logische Analyse des Wahrscheinlichkeitsbegriffs. Erkenntnis 1 (1), 228– 248. Wald, A. (1939). Contributions to the Theory of Statistical Estimation and Testing Hypotheses. Annals of Mathematical Statistics 10, 299–326. Wald, A. (1945a). Sequential Tests of Statistical Hypotheses. Annals of Mathematical Statistics 16, 117–186. Wald, A. (1945b). Statistical Decision Functions which Minimize the Maximum Risk. Annals of Mathematics 46, 265–280. Wald, A. (1950). Statistical Decision Functions. Wiley: New York. Wallis, A. W. (1980). The Statistical Research Group, 1942-1945. Journal of the American Statistical Association 75, 320–330. Wiener, N. (1948). Cybernetics; Or Control and Communication in the Animal and the Machine. John Wiley: New York. Williamson, T. (2007). The Philosophy of Philosophy. 
John Wiley & Sons: New York. Wimsatt, W. C. (2007). Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality. Harvard University Press: Cambridge. Wittgenstein, L. (1921). Tractatus Logico Philosophicus. Annalen der Naturphilosophie 14 (1). Zabell, S. (2005). Symmetry and Its Discontents. Cambridge University Press: Cambridge. Zabell, S. (2009). Carnap and the Logic of Inductive Inference. In D. M. Gabbay, S. Hartmann, and J. Woods (Eds.), Handbook of the History of Logic: Inductive Logic, Volume 10, pp. 265–309. Elsevier: Amsterdam. Zilsel, E. (1916). Das Anwendungsproblem: Ein philosophischer Versuch über das Gesetz der grossen Zahlen und die Induktion. Johann Ambrosius Barth: Leipzig.