Model theory and machine learning

Bulletin of Symbolic Logic 25 (3):319-332 (2019)

Abstract

About 25 years ago, it came to light that a single combinatorial property determines both an important dividing line in model theory and a fundamental notion in machine learning. The following years saw a fruitful exchange of ideas between PAC-learning and the model theory of NIP structures. In this article, we point out a new and similar connection between model theory and machine learning, this time developing a correspondence between stability and learnability in various settings of online learning. In particular, this gives many new examples of mathematically interesting classes which are learnable in the online setting.
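The online-learning setting the abstract refers to can be illustrated with a minimal sketch of the mistake-bound model, where learnability is governed by Littlestone dimension, the combinatorial counterpart of model-theoretic stability. The halving algorithm and the finite threshold class below are illustrative assumptions for this sketch, not constructions taken from the paper.

```python
# A minimal sketch of online learning in the mistake-bound model.
# The hypothesis class (thresholds on {0, ..., n}) and the halving
# algorithm are illustrative assumptions, not the paper's method.

import math

def halving_learner(hypotheses, stream):
    """Run the halving algorithm: predict by majority vote of the
    surviving hypotheses, then discard every hypothesis that erred.
    Returns the number of mistakes made on the stream."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, label in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != label:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h(x) == label]
    return mistakes

# Illustrative finite class: thresholds t -> int(x >= t) on {0, ..., 8}.
n = 8
hypotheses = [lambda x, t=t: int(x >= t) for t in range(n + 1)]
target = hypotheses[5]  # adversary labels with an unknown threshold
stream = [(x, target(x)) for x in [0, 7, 3, 5, 4, 6, 2, 1]]

mistakes = halving_learner(hypotheses, stream)
# On any realizable stream the halving algorithm makes at most
# floor(log2(|H|)) mistakes; here floor(log2(9)) = 3.
assert mistakes <= math.floor(math.log2(len(hypotheses)))
```

For finite classes the log2(|H|) bound is crude; the point of the stability/online-learnability correspondence is that the optimal mistake bound is controlled by Littlestone dimension, which can be finite even for infinite classes arising from stable structures.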

Links

PhilArchive





Similar books and articles

Inductive logic, verisimilitude, and machine learning.Ilkka Niiniluoto - 2005 - In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, methodology and philosophy of science. London: College Publications. pp. 295-314.
Inductive learning by machines.Stuart Russell - 1991 - Philosophical Studies 64 (October):37-64.
Philosophy and machine learning.Paul Thagard - 1990 - Canadian Journal of Philosophy 20 (2):261-76.

Analytics

Added to PP
2019-02-16


Citations of this work

Model theory and combinatorics of banned sequences.Hunter Chase & James Freitag - 2022 - Journal of Symbolic Logic 87 (1):1-20.
Thicket density.Siddharth Bhaskar - 2021 - Journal of Symbolic Logic 86 (1):110-127.

