Grounding the Vector Space of an Octopus: Word Meaning from Raw Text

Minds and Machines 33 (1):33-54 (2023)

Abstract

Most, if not all, philosophers agree that computers cannot learn what words refer to from raw text alone. While many attacked Searle's Chinese Room thought experiment, no one seemed to question this most basic assumption. For how can computers learn something that is not in the data? Emily Bender and Alexander Koller (2020) recently presented a related thought experiment, the so-called Octopus thought experiment, which replaces the rule-based interlocutor of Searle's thought experiment with a neural language model. The Octopus thought experiment was awarded a best paper prize and was widely debated in the AI community. Again, however, even its fiercest opponents accepted the premise that what a word refers to cannot be induced in the absence of direct supervision. I will argue that what a word refers to _is_ probably learnable from raw text alone. Here's why: higher-order concept co-occurrence statistics are stable across languages and across modalities, because language use (universally) reflects the world we live in (which is relatively stable). Such statistics are sufficient to establish what words refer to. My conjecture is supported by a literature survey, a thought experiment, and an actual experiment.
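
To make the conjecture concrete, the following is a minimal, self-contained sketch (in Python, and emphatically not the paper's actual experiment) of how higher-order co-occurrence statistics could match words across two corpora without any direct supervision. The toy corpora, the foreign word forms, and the helper functions `cooccurrence` and `profiles` are all invented for illustration; the only assumption is the one the abstract makes, namely that both corpora reflect the same world, so their co-occurrence geometry coincides.

```python
# Minimal sketch (illustrative only): can two vocabularies be matched
# purely from the internal geometry of their co-occurrence statistics?
import numpy as np

def cooccurrence(tokens, vocab, window=2):
    """Symmetric co-occurrence counts within a +/- `window` token context."""
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                M[idx[w], idx[tokens[j]]] += 1.0
    return M

def profiles(M):
    """Second-order statistics: each word's sorted cosine similarities to
    every word in its own space -- a signature that does not depend on how
    the vocabulary happens to be indexed."""
    norms = np.clip(np.linalg.norm(M, axis=1, keepdims=True), 1e-9, None)
    U = M / norms
    return np.sort(U @ U.T, axis=1)

# Two toy corpora describing the same events. The "foreign" word forms
# are made up for the example; no bilingual dictionary is used anywhere.
english = ("the dog chased the cat the cat chased the mouse "
           "the dog ate meat the cat ate fish").split()
foreign = ("le chien poursuit le chat le chat poursuit le souris "
           "le chien mange viande le chat mange poisson").split()

vocab_en, vocab_fr = sorted(set(english)), sorted(set(foreign))
P = profiles(cooccurrence(english, vocab_en))
Q = profiles(cooccurrence(foreign, vocab_fr))

# Match each English word to the foreign word with the nearest profile.
for i, w in enumerate(vocab_en):
    j = int(np.argmin(np.linalg.norm(Q - P[i], axis=1)))
    print(f"{w:7s} -> {vocab_fr[j]}")  # e.g. dog -> chien, cat -> chat
```

Because each word's sorted similarity profile is a coordinate-free signature of its position in the co-occurrence geometry, matching profiles across the two spaces recovers the word-to-word correspondence in this idealized parallel setting. Real corpora would only approximate this, which is why the abstract hedges the claim as "probably learnable".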

Links

PhilArchive




Similar books and articles

Errata. [author unknown] - 1999 - Minds and Machines 9 (3):457-457.
Erratum. [author unknown] - 2004 - Minds and Machines 14 (2):279-279.
Call for papers. [author unknown] - 1999 - Minds and Machines 9 (3):459-459.
Editor's Note. [author unknown] - 2003 - Minds and Machines 13 (3):337-337.
Instructions for authors. [author unknown] - 1998 - Minds and Machines 8 (4):587-590.
Volume contents. [author unknown] - 1998 - Minds and Machines 8 (4):591-594.
Editor's Note. [author unknown] - 2001 - Minds and Machines 11 (1):1-1.
Book Reviews. [REVIEW] [author unknown] - 1997 - Minds and Machines 7 (2):289-320.
Book Reviews. [REVIEW] [author unknown] - 2004 - Minds and Machines 14 (2):241-278.
Book Reviews. [REVIEW] [author unknown] - 1997 - Minds and Machines 7 (1):115-155.
Erratum. [author unknown] - 1997 - Journal of Applied Non-Classical Logics 7 (3):473-473.
Correction to: What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):339-339.

Analytics

Added to PP
2023-01-30

Downloads (total)
34 (#443,903)

Downloads (last 6 months)
14 (#151,397)


Author's Profile

Anders Søgaard
University of Copenhagen

Citations of this work

Assessing the Strengths and Weaknesses of Large Language Models.Shalom Lappin - 2023 - Journal of Logic, Language and Information 33 (1):9-20.


References found in this work

Minds, brains, and programs. John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-457.
The Rediscovery of the Mind. John Searle - 1992 - Philosophy and Phenomenological Research 55 (1):201-207.
Could a machine think? Paul M. Churchland & Patricia S. Churchland - 1990 - Scientific American 262 (1):32-37.
Matter and Memory. Henri Bergson, Nancy Margaret Paul & W. Scott Palmer - 1911 - International Journal of Ethics 22 (1):101-107.
