Information, learning and falsification

Author
David Balduzzi
University of Zürich
Abstract
There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. The second, Shannon information, takes events as belonging to ensembles and quantifies the information gained by observing a given event in terms of the number of alternative events that it rules out [2]. The third, statistical learning theory, has introduced measures of capacity that control (in part) the expected risk of classifiers [3]. These capacities quantify the expectations regarding future data that learning algorithms embed into classifiers. Solomonoff and Hutter have applied algorithmic information to prove remarkable results on universal induction. Shannon information provides the mathematical foundation for communication and coding theory. However, both approaches have shortcomings. Algorithmic information is not computable, which severely limits its practical usefulness. Shannon information refers to ensembles rather than actual events: it makes no sense to compute the Shannon information of a single string; or rather, there are many answers to the question, depending on how a related ensemble is constructed. Although there are asymptotic results linking algorithmic and Shannon information, it is unsatisfying that such a large gap, a difference in kind, separates the two measures. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and links both to the capacities arising in statistical learning theory [4, 5]. After introducing the measure, we show that it provides a non-universal analog of Kolmogorov complexity. We then apply it to derive two basic capacities from statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of our approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies [6], counted in two different ways for the two capacities. We also discuss how effective information relates to information gain, and to Shannon and mutual information.
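For orientation, the three measures the abstract names have standard textbook forms; the notation below is generic background and is not taken from the paper itself (the paper's own effective information measure is defined only in the full text). Algorithmic information, relative to a universal Turing machine $U$, is

\[ K_U(x) = \min \{\, |p| : U(p) = x \,\}, \]

the length of the shortest program $p$ that makes $U$ output the string $x$. Shannon information assigns an event $x$ drawn from an ensemble with distribution $P$ the surprisal

\[ I(x) = -\log_2 P(x), \]

so that, for a uniform ensemble of $N$ equally likely outcomes, observing one outcome rules out the other $N - 1$ alternatives and yields $\log_2 N$ bits. The empirical Rademacher complexity of a hypothesis class $\mathcal{H}$ on a sample $S = (x_1, \dots, x_n)$ is

\[ \hat{\mathfrak{R}}_S(\mathcal{H}) = \mathbb{E}_{\sigma}\Big[ \sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, h(x_i) \Big], \]

where the $\sigma_i$ are i.i.d. uniform $\pm 1$ signs, and the empirical VC-entropy is $\log N(\mathcal{H}, S)$, the logarithm of the number of distinct labelings of $S$ realized by hypotheses in $\mathcal{H}$. How effective information recovers these two capacities, and in what sense it parallels $K_U(x)$, is the subject of the paper.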
Keywords: falsification, information theory, statistical learning theory, Kolmogorov complexity, induction, falsifiability, Popper

Similar books and articles

Quantum Information Does Not Exist. Armond Duwell - 2003 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 34 (3): 479-499.
Intrinsic Information. John D. Collier - 1990 - In Philip P. Hanson (ed.), Information, Language and Cognition. University of British Columbia Press. pp. 1-390.
The Transmission Sense of Information. Carl T. Bergstrom & Martin Rosvall - 2011 - Biology and Philosophy 26 (2): 159-176.
The Informational Turn in Philosophy. Frederick R. Adams - 2003 - Minds and Machines 13 (4): 471-501.
On a Supposed Conceptual Inadequacy of the Shannon Information in Quantum Mechanics. Christopher G. Timpson - 2003 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 34 (3): 441-468.
