Mind Changes and Testability: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction
This essay demonstrates a previously unnoticed connection between formal and statistical learning theory with regard to Nelson Goodman's new riddle of induction. Discussions of Goodman's riddle in formal learning theory explain how conjecturing "all green" before "all grue" can enhance efficient convergence to the truth, where efficiency is understood as minimizing the maximum number of retractions, or "mind changes." Vapnik-Chervonenkis (VC) dimension, a central concept in statistical learning theory, is similar to Popper's notion of degrees of testability. I show that, for a class of inductive problems of which Goodman's riddle is one example, a reliable inductive method minimizes the maximum number of mind changes exactly when it always conjectures the hypothesis from the set with the lowest VC dimension consistent with the data. I also discuss the relevance of these results to language invariance and curve fitting.
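The mind-change comparison in the abstract can be made concrete with a toy simulation. The sketch below is my own illustration, not the paper's formal framework: hypothesis names (`green_first`, `grue_first`), the encoding of hypotheses as switch times, and the particular learner strategies are all assumptions introduced here. It shows that a learner who conjectures "all green" first retracts at most once no matter when (if ever) the emeralds turn blue, whereas a learner who dogmatically conjectures a grue hypothesis first can be forced into more retractions.

```python
# Toy simulation of Goodman's riddle as a mind-change problem
# (an illustrative sketch, not the paper's formal framework).
# Hypotheses are "grue-style" predicates identified by a switch time:
# emeralds look green before the switch and blue from it onward;
# float("inf") encodes "all green" (never switches).

def observe(switch, t):
    """Color observed at time t if the true hypothesis switches at `switch`."""
    return "blue" if t >= switch else "green"

def consistent(switch, history):
    """Is the hypothesis with this switch time consistent with the data so far?"""
    return all(observe(switch, t) == color for t, color in enumerate(history))

def run(learner, truth, horizon):
    """Feed the learner a data stream generated by `truth`; count mind changes."""
    history, current, changes = [], None, 0
    for t in range(horizon):
        history.append(observe(truth, t))
        conjecture = learner(history)
        if current is not None and conjecture != current:
            changes += 1
        current = conjecture
    return changes

def green_first(history):
    """Conjecture "all green" while it fits; once refuted, the refuting
    observation pins down the unique grue hypothesis still in play."""
    if consistent(float("inf"), history):
        return float("inf")
    return history.index("blue")

def grue_first(history, guess=3):
    """Conjecture grue-with-switch-at-`guess` until refuted, then defer to data."""
    if consistent(guess, history):
        return guess
    return green_first(history)

# "All green" first: at most one retraction, whichever hypothesis is true.
print(max(run(green_first, s, 10) for s in [float("inf"), 2, 5, 8]))  # 1
# Grue first: refuted by green data at t=3, then forced to retract again
# when the world turns out to be grue with switch time 8.
print(run(grue_first, 8, 10))  # 2
```

In this miniature setting, "all green" plays the role of the low-VC-dimension conjecture: it is the single hypothesis whose refutation uniquely determines the remaining consistent alternative, which is why the green-first strategy achieves the minimal worst-case mind-change bound.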
Similar books and articles
Colin Howson (2011). No Answer to Hume. International Studies in the Philosophy of Science 25 (3):279-284.
Daniel Steel (2011). On Not Changing the Problem: A Reply to Howson. International Studies in the Philosophy of Science 25 (3):285-291.
Robert Kowalenko (2012). Reply to Israel on the New Riddle of Induction. Philosophia 40 (3):549-552.
John D. Norton, The Formal Equivalence of Grue and Green and How It Undoes the New Riddle of Induction.
Gilbert Harman & Sanjeev Kulkarni, Statistical Learning Theory as a Framework for the Philosophy of Induction.
Oliver Schulte (1999). The Logic of Reliable and Efficient Inquiry. Journal of Philosophical Logic 28 (4):399-438.
David Corfield, Bernhard Schölkopf & Vladimir Vapnik (2009). Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions. Journal for General Philosophy of Science 40 (1):51-58.
Daniel Steel (2009). Testability and Ockham's Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. Journal of Philosophical Logic 38 (5):471-489.