In this article, we provide a tutorial overview of some aspects of statistical learning theory, which also goes by other names such as statistical pattern recognition, nonparametric classification and estimation, and supervised learning. We focus on the problem of two-class pattern classification for several reasons. This problem is rich enough to capture many of the interesting aspects present in the cases of more than two classes and in the problem of estimation, and many of the results can be extended to those cases. Focusing on two-class pattern classification simplifies our discussion, yet it is directly applicable to a wide range of practical settings.
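The two-class setting described above can be made concrete with a minimal sketch (not from the article itself): the 1-nearest-neighbor rule, one of the classical classifiers studied in statistical learning theory. A new point is assigned the label of the closest training example. The toy data and function names below are illustrative assumptions, not the article's own example.

```python
import math

def nearest_neighbor_classify(train, point):
    """Assign `point` the label of its nearest training example (1-NN rule)."""
    best_label, best_dist = None, math.inf
    for x, label in train:
        d = math.dist(x, point)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label

# Toy two-class training set: points in the plane labeled 0 or 1.
train = [((0.0, 0.0), 0), ((0.2, 0.1), 0),
         ((1.0, 1.0), 1), ((0.9, 1.2), 1)]

print(nearest_neighbor_classify(train, (0.1, 0.0)))  # falls near class 0
print(nearest_neighbor_classify(train, (1.1, 0.9)))  # falls near class 1
```

Much of statistical learning theory asks when and how fast rules like this one converge to the best possible error rate as the number of training examples grows.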
Similar books and articles
Gilbert Harman & Sanjeev Kulkarni, Statistical Learning Theory as a Framework for the Philosophy of Induction.
Erik D. Thiessen & Philip I. Pavlik (2013). iMinerva: A Mathematical Model of Distributional Statistical Learning. Cognitive Science 37 (2):310-343.
David Corfield, Bernhard Schölkopf & Vladimir Vapnik (2009). Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions. Journal for General Philosophy of Science 40 (1):51-58.
Richard M. Golden (1997). Model-Based Learning Problem Taxonomies. Behavioral and Brain Sciences 20 (1):73-74.
Daniel Steel, Mind Changes and Testability: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction.
Katherine Yoshida, Mijke Rhemtulla & Athena Vouloumanos (2012). Exclusion Constraints Facilitate Statistical Word Learning. Cognitive Science 36 (5):933-947.
Kevin Kelly (2008). Review of Gilbert Harman & Sanjeev Kulkarni, Reliable Reasoning: Induction and Statistical Learning Theory. Notre Dame Philosophical Reviews 2008 (3).
Daniel Steel (2009). Testability and Ockham's Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. Journal of Philosophical Logic 38 (5):471-489.
A. Vinter & P. Perruchet (1997). Relational Problems Are Not Fully Solved by a Temporal Sequence of Statistical Learning Episodes. Behavioral and Brain Sciences 20 (1):82-82.
Shimon Edelman, Unsupervised Statistical Learning in Vision: Computational Principles, Biological Evidence.
Sean Fulop & Nick Chater (2013). Editors' Introduction: Why Formal Learning Theory Matters for Cognitive Science. Topics in Cognitive Science 5 (1):3-12.
Stellan Ohlsson (1997). Old Ideas, New Mistakes: All Learning is Relational. Behavioral and Brain Sciences 20 (1):79-80.
Deborah G. Mayo (1997). Error Statistics and Learning From Error: Making a Virtue of Necessity. Philosophy of Science 64 (4):212.