
Varieties of Justification in Machine Learning


Abstract

Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to bringing some of these techniques, and their justificatory guarantees, to the attention of philosophers, and to initiating a discussion as to whether they must be treated separately or can instead be viewed consistently within a single framework.



Acknowledgments

The author thanks the Max Planck Society for supporting his research for this paper.

Author information

Correspondence to David Corfield.


Cite this article

Corfield, D. Varieties of Justification in Machine Learning. Minds & Machines 20, 291–301 (2010). https://doi.org/10.1007/s11023-010-9191-1
