LLMs don't know anything: reply to Yildirim and Paul

Trends in Cognitive Sciences (forthcoming)

Abstract

In their recent Opinion in TiCS, Yildirim and Paul propose that large language models (LLMs) have ‘instrumental knowledge’ and possibly the kind of ‘worldly’ knowledge that humans do. They suggest that the production of appropriate outputs by LLMs is evidence that LLMs infer ‘task structure’ that may reflect ‘causal abstractions of... entities and processes in the real world’. While we agree that LLMs are impressive and potentially interesting for cognitive science, we resist this project on two grounds. First, it casts LLMs as agents rather than as models. Second, it suggests that causal understanding could be acquired from the capacity for mere prediction.

Links

PhilArchive

Similar books and articles

LLMs are not just next token predictors. Alex Grzankowski, Stephen M. Downes & Patrick Forber - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
Large Language Models and the Reverse Turing Test. Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.

Analytics

Added to PP: 2024-10-10
Total downloads: 448 (#64,386)
Downloads (past 6 months): 448 (#3,480)

Author Profiles

Mariel Goddu
Stanford University
Evan Thompson
University of British Columbia
Alva Noë
University of California, Berkeley
