Abstract: Statistical Learning Theory (e.g., Hastie et al. 2001; Vapnik 1998, 2000, 2006; Devroye, Györfi & Lugosi 1996) is the basic theory behind contemporary machine learning and pattern recognition. We suggest that the theory provides an excellent framework for the philosophy of induction (see also Harman and Kulkarni 2007).
Similar books and articles
John L. Pollock (1992). The Theory of Nomic Probability. Synthese 90 (2):263-299.
Fei Xu & Joshua B. Tenenbaum (2001). Rational Statistical Inference: A Critical Component for Word Learning. Behavioral and Brain Sciences 24 (6):1123-1124.
S. Russell (1991). Inductive Learning by Machines. Philosophical Studies 64 (October):37-64.
Kevin T. Kelly, Oliver Schulte & Cory Juhl (1997). Learning Theory and the Philosophy of Science. Philosophy of Science 64 (2):245-267.
Daniel Steel (2009). Testability and Ockham's Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. Journal of Philosophical Logic 38 (5):471-489.
Kevin Kelly (2008). Review of Gilbert Harman, Sanjeev Kulkarni, Reliable Reasoning: Induction and Statistical Learning Theory. [REVIEW] Notre Dame Philosophical Reviews 2008 (3).
Daniel Steel, Mind Changes and Testability: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction.
David Corfield, Bernhard Schölkopf & Vladimir Vapnik (2009). Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions. Journal for General Philosophy of Science 40 (1):51-58.
Added to index: 2009-01-28