Statistical learning theory as a framework for the philosophy of induction
Abstract: Statistical Learning Theory (e.g., Hastie et al., 2001; Vapnik, 1998, 2000, 2006) is the basic theory behind contemporary machine learning and data mining. We suggest that the theory provides an excellent framework for philosophical thinking about inductive inference.
Similar books and articles
Oliver Schulte, Formal Learning Theory. Stanford Encyclopedia of Philosophy.
S. Russell (1991). Inductive Learning by Machines. Philosophical Studies 64 (October):37-64.
Kevin T. Kelly, Oliver Schulte & Cory Juhl (1997). Learning Theory and the Philosophy of Science. Philosophy of Science 64 (2):245-267.
Fei Xu & Joshua B. Tenenbaum (2001). Rational Statistical Inference: A Critical Component for Word Learning. Behavioral and Brain Sciences 24 (6):1123-1124.
Kevin Kelly (2008). Review of Gilbert Harman, Sanjeev Kulkarni, Reliable Reasoning: Induction and Statistical Learning Theory. [REVIEW] Notre Dame Philosophical Reviews 2008 (3).
Daniel Steel (2009). Testability and Ockham's Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. [REVIEW] Journal of Philosophical Logic 38 (5):471-489.
Daniel Steel, Mind Changes and Testability: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction.
David Corfield, Bernhard Schölkopf & Vladimir Vapnik (2009). Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions. [REVIEW] Journal for General Philosophy of Science 40 (1):51-58.
Added to index: 2009-01-28