7 found
  1.
    Solomonoff Prediction and Occam’s Razor. Tom F. Sterkenburg - 2016 - Philosophy of Science 83 (4):459-479.
    Algorithmic information theory gives an idealized notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam’s razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. (...)
    4 citations. (A toy sketch of the Bayesian mixture behind Solomonoff prediction follows this list.)
  2.
    Putnam's Diagonal Argument and the Impossibility of a Universal Learning Machine. Tom F. Sterkenburg - unknown
    The diagonalization argument of Putnam denies the possibility of a universal learning machine. Yet the proposal of Solomonoff and Levin promises precisely such a thing. In this paper I discuss how their proposed measure function manages to evade Putnam's diagonalization in one respect, only to fatally fall prey to it in another.
    3 citations.
  3.
    Universal Prediction. Tom F. Sterkenburg - 2018 - Dissertation, University of Groningen.
    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized (...)
    1 citation.
  4.
    Deborah G. Mayo: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars. Tom F. Sterkenburg - 2020 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 51 (3):507-510.
  5.
    On the Truth-Convergence of Open-Minded Bayesianism. Tom F. Sterkenburg & Rianne de Heide - forthcoming - Review of Symbolic Logic:1-37.
    Wenmackers and Romeijn [38] formalize ideas going back to Shimony [33] and Putnam [28] into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
    (A minimal sketch of a dynamically extended Bayesian hypothesis set follows this list.)
  6.
    The Meta-Inductive Justification of Induction. Tom F. Sterkenburg - 2020 - Episteme 17 (4):519-541.
    I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that rests on results from the machine learning branch of prediction with expert advice. My conclusion is that the argument, suitably explicated, comes remarkably close to its grand aim: an actual justification of induction. This finding, however, is subject to two main qualifications, and still disregards one important challenge. The first qualification concerns the empirical success of induction. Even though, I argue, Schurz's argument does not need to spell (...)
    (A sketch of the exponential-weights forecaster from prediction with expert advice follows this list.)
  7.
    The no-free-lunch theorems of supervised learning. Tom F. Sterkenburg & Peter D. Grünwald - forthcoming - Synthese:1-37.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
    (A numerical check of the basic no-free-lunch averaging argument follows this list.)
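
As a toy illustration of the Bayesian-mixture form at issue in entry 1: Solomonoff's universal predictor is, in effect, a Bayesian mixture over all effective hypotheses, weighted by a simplicity prior. The genuine mixture is incomputable, so the sketch below substitutes a small finite class of periodic bit sources with 2^-length prior weights; the hypothesis class, the prior, and all function names are illustrative assumptions, not anything taken from the paper.

```python
# Toy stand-in for Solomonoff's universal mixture (entry 1).
# The real mixture runs over all lower-semicomputable semimeasures and is
# incomputable; this finite hypothesis class only illustrates the
# Bayesian-mixture form that the paper's argument turns on.

from fractions import Fraction

# Assumed hypothesis class: deterministic periodic bit sources,
# weighted by a 2^-length "simplicity" prior (illustrative choice).
patterns = ["0", "1", "01", "10", "001", "110"]
prior = {p: Fraction(1, 2 ** len(p)) for p in patterns}

def predicts(pattern: str, t: int) -> str:
    """Bit that the periodic source with this pattern emits at step t."""
    return pattern[t % len(pattern)]

def mixture_prob_next_is_1(data: str) -> Fraction:
    """Posterior-weighted probability that the next bit is 1."""
    # Likelihood of the data is 1 for consistent hypotheses, 0 otherwise.
    alive = {p: w for p, w in prior.items()
             if all(predicts(p, t) == b for t, b in enumerate(data))}
    total = sum(alive.values())
    if total == 0:
        return Fraction(1, 2)  # every hypothesis refuted; fall back to 1/2
    mass_on_1 = sum(w for p, w in alive.items()
                    if predicts(p, len(data)) == "1")
    return mass_on_1 / total

# Prints 0: all surviving posterior mass sits on the "01" source,
# which predicts bit 0 at the next step.
print(mixture_prob_next_is_1("0101"))
```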
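For entry 5, a minimal sketch of the general mechanism under discussion: a Bayesian learner that reserves prior mass for hypotheses proposed only later, and scores a late arrival only on data observed after its adoption (the "forward-looking" idea). This is neither Wenmackers and Romeijn's actual open-minded scheme nor the authors' repaired version; the catch-all funding rule and the Bernoulli models are assumptions made for illustration.

```python
# Sketch of dynamically extending a Bayesian hypothesis set (entry 5).
# NOT Wenmackers and Romeijn's actual construction; it only illustrates
# reserving prior mass for hypotheses proposed mid-stream, which is what
# the truth-convergence (consistency) question is about.

class OpenMindedBayes:
    def __init__(self, catch_all_mass: float = 0.5):
        self.weights = {}                 # hypothesis name -> unnormalized weight
        self.models = {}                  # hypothesis name -> P(bit = 1)
        self.catch_all = catch_all_mass   # mass reserved for future proposals

    def propose(self, name: str, p_one: float, share: float = 0.5):
        """Adopt a newly proposed Bernoulli(p_one) hypothesis, funding it
        from the reserved catch-all mass (an assumed funding rule)."""
        grant = self.catch_all * share
        self.catch_all -= grant
        self.weights[name] = grant
        self.models[name] = p_one

    def update(self, bit: int):
        """Condition every adopted hypothesis on one observed bit.
        A hypothesis adopted later is only ever scored on later data:
        this is the forward-looking element."""
        for name, p in self.models.items():
            self.weights[name] *= p if bit == 1 else 1 - p

    def posterior(self):
        total = sum(self.weights.values()) + self.catch_all
        return {n: w / total for n, w in self.weights.items()}

bayes = OpenMindedBayes()
bayes.propose("fair", 0.5)
for bit in [1, 1, 1, 1]:
    bayes.update(bit)
bayes.propose("biased-0.9", 0.9)   # proposed only after seeing the data
for bit in [1, 1, 1, 1]:
    bayes.update(bit)
print(bayes.posterior())           # the late, better hypothesis dominates
```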
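Entry 6 rests on results from prediction with expert advice. The standard algorithm in that framework is the exponentially weighted average forecaster, sketched below; the experts, the squared-loss update, and the learning rate eta are illustrative choices rather than Schurz's exact setup.

```python
# Sketch of the exponentially weighted average forecaster (entry 6), the
# core algorithm from prediction with expert advice behind meta-induction.

import math

def hedge(expert_preds, outcomes, eta=0.5):
    """Sequentially mix expert predictions by exponential weights.

    expert_preds[t][i]: expert i's forecast (in [0, 1]) at round t
    outcomes[t]:        realized outcome (0 or 1) at round t
    Returns the learner's per-round forecasts.
    """
    n = len(expert_preds[0])
    log_w = [0.0] * n                       # log-weights, for stability
    forecasts = []
    for preds, y in zip(expert_preds, outcomes):
        z = max(log_w)
        w = [math.exp(lw - z) for lw in log_w]
        total = sum(w)
        forecasts.append(sum(wi * p for wi, p in zip(w, preds)) / total)
        # squared loss drives the multiplicative weight update
        log_w = [lw - eta * (p - y) ** 2 for lw, p in zip(log_w, preds)]
    return forecasts

# Two toy experts: one always says 1 (and is right), one always says 0.
rounds = 20
preds = [[1.0, 0.0]] * rounds
outcomes = [1] * rounds
print(hedge(preds, outcomes)[-1])  # near 1: weight migrates to the good expert
```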
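Finally, the basic calculation behind the no-free-lunch theorems of entry 7 can be checked numerically: averaged uniformly over all target functions on a finite domain, every learner has the same off-training-set accuracy. The domain, the training split, and the two toy learners below are arbitrary assumptions; the 0.5 average is the theorems' point.

```python
# Numerical check of the no-free-lunch averaging argument (entry 7):
# uniformly over ALL 2^4 target functions on a 4-point domain, any fixed
# learner's off-training-set accuracy is exactly 1/2.

from itertools import product

domain = [0, 1, 2, 3]
train_x = [0, 1]                 # points whose labels the learner sees
test_x = [x for x in domain if x not in train_x]

def majority_learner(train):
    """Predict the majority training label everywhere (ties -> 0)."""
    ones = sum(train.values())
    label = 1 if ones > len(train) / 2 else 0
    return lambda x: label

def anti_majority_learner(train):
    """The opposite rule, equally good (or bad) on average."""
    maj = majority_learner(train)
    return lambda x: 1 - maj(x)

for name, learner in [("majority", majority_learner),
                      ("anti-majority", anti_majority_learner)]:
    correct = total = 0
    # uniform average over every possible labeling of the domain
    for labels in product([0, 1], repeat=len(domain)):
        target = dict(zip(domain, labels))
        h = learner({x: target[x] for x in train_x})
        for x in test_x:
            correct += (h(x) == target[x])
            total += 1
    print(name, correct / total)   # both print 0.5
```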