Testability and Ockham's Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction
Daniel Steel
Journal of Philosophical Logic 38 (5):471 - 489 (2009)
Nelson Goodman’s new riddle of induction forcefully illustrates a challenge that must be confronted by any adequate theory of inductive inference: provide some basis for choosing among alternative hypotheses that fit past data but make divergent predictions. One response to this challenge is to distinguish among alternatives by means of some epistemically significant characteristic beyond fit with the data. Statistical learning theory takes this approach by showing how a concept similar to Popper’s notion of degrees of testability is linked to minimizing expected predictive error. In contrast, formal learning theory appeals to Ockham’s razor, which it justifies by reference to the goal of enhancing efficient convergence to the truth. In this essay, I show that, despite their differences, statistical and formal learning theory yield precisely the same result for a class of inductive problems that I call strongly VC ordered, of which Goodman’s riddle is just one example.
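The abstract's notion of "VC ordering" ranks hypothesis classes by their Vapnik–Chervonenkis dimension: the largest set of observations that the class can label in every possible way ("shatter"). As a toy illustration (not drawn from the paper itself), the sketch below compares the rigid hypothesis "all emeralds are green" with a grue-style family that predicts green up to some threshold time and blue afterward; the class names and brute-force computation are illustrative assumptions, not the paper's formalism.

```python
from itertools import combinations

def shatters(hypotheses, points):
    """True if every possible True/False labeling of `points`
    is realized by some hypothesis in the class."""
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

def vc_dimension(hypotheses, domain, max_size=3):
    """Largest size (up to max_size) of a subset of `domain`
    that the class shatters, found by brute-force search."""
    d = 0
    for k in range(1, max_size + 1):
        if any(shatters(hypotheses, pts) for pts in combinations(domain, k)):
            d = k
    return d

# Observation times 0..9 for sampled emeralds.
domain = range(10)

# "All emeralds are green": a single hypothesis, so it admits only
# one labeling of any sample and shatters nothing.
green_only = [lambda t: True]

# Grue-style family: "green before time n, blue thereafter", for each
# candidate switch time n (n=10 recovers "always green" on this domain).
# Note the n=n default argument, which freezes the loop variable.
grue_class = [lambda t, n=n: t < n for n in range(11)]

print(vc_dimension(green_only, domain))  # 0
print(vc_dimension(grue_class, domain))  # 1
```

The grue family can shatter any single observation time (some member calls it green, another blue) but no pair, since its members are monotone in time; hence VC dimension 1 versus 0 for the lone green hypothesis. On this way of measuring, richer grue-style classes are less testable in roughly Popper's sense, which is the kind of ordering the abstract's argument exploits.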
Keywords: Goodman; new riddle of induction; Ockham's razor; simplicity; testability; formal learning theory; statistical learning theory
Similar books and articles
Kevin T. Kelly, Oliver Schulte & Cory Juhl (1997). Learning Theory and the Philosophy of Science. Philosophy of Science 64 (2):245-267.
Colin Howson (2011). No Answer to Hume. International Studies in the Philosophy of Science 25 (3):279 - 284.
Daniel Steel & S. Kedzie Hall (2011). What If the Principle of Induction Is Normative? Formal Learning Theory and Hume's Problem. International Studies in the Philosophy of Science 24 (2):171-185.
David Corfield, Bernhard Schölkopf & Vladimir Vapnik (2009). Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions. [REVIEW] Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 40 (1):51 - 58.
Daniel Steel (2011). On Not Changing the Problem: A Reply to Howson. International Studies in the Philosophy of Science 25 (3):285 - 291.
Oliver Schulte (1999). The Logic of Reliable and Efficient Inquiry. Journal of Philosophical Logic 28 (4):399-438.
Gilbert Harman & Sanjeev Kulkarni, Statistical Learning Theory as a Framework for the Philosophy of Induction.