Assessing the Strengths and Weaknesses of Large Language Models

Journal of Logic, Language and Information 33 (1):9-20 (2023)

Abstract

The transformers that drive chatbots and other AI systems constitute large language models (LLMs). These are currently the focus of a lively discussion in both the scientific literature and the popular media. This discussion ranges from hyperbolic claims that attribute general intelligence and sentience to LLMs, to the skeptical view that these devices are no more than “stochastic parrots”. I present an overview of some of the weak arguments that have been presented against LLMs, and I consider several of the more compelling criticisms of these devices. The former significantly underestimate the capacity of transformers to achieve subtle inductive inferences required for high levels of performance on complex, cognitively significant tasks. In some instances, these arguments misconstrue the nature of deep learning. The latter criticisms identify significant limitations in the way in which transformers learn and represent patterns in data. They also point out important differences between the procedures through which deep neural networks and humans acquire knowledge of natural language. It is necessary to look carefully at both sets of arguments in order to achieve a balanced assessment of the potential and the limitations of LLMs.

Links

PhilArchive



Similar books and articles

25th Workshop on Logic, Language, Information and Computation: WoLLIC 2018. Lawrence Moss & Ruy de Queiroz - 2022 - Journal of Logic, Language and Information 31 (4):525-527.
Instructions for Authors. [author unknown] - 2003 - Journal of Logic, Language and Information 12 (1):119-125.
Call for Papers. [author unknown] - 1999 - Journal of Logic, Language and Information 8 (1):135-136.
Instructions for Authors. [author unknown] - 2004 - Journal of Logic, Language and Information 13 (4):541-546.
Call for Papers. [author unknown] - 1999 - Journal of Logic, Language and Information 8 (3):399-400.
Instructions for Authors. [author unknown] - 2001 - Journal of Logic, Language and Information 10 (4):531-537.
Instructions for Authors. [author unknown] - 2002 - Journal of Logic, Language and Information 11 (4):523-529.
Call for Papers. [author unknown] - 1998 - Journal of Logic, Language and Information 7 (4):519-520.
Contents of Volume 10. [author unknown] - 2001 - Journal of Logic, Language and Information 10 (4):527-529.
Contents of Volume 11. [author unknown] - 2004 - Journal of Logic, Language and Information 11 (4):521-522.
Contents of Volume 7. [author unknown] - 2004 - Journal of Logic, Language and Information 7 (4):509-511.
Instructions for Authors. [author unknown] - 1998 - Journal of Logic, Language and Information 7 (4):513-518.
Note from the Editor. [author unknown] - 1999 - Journal of Logic, Language and Information 8 (2):3-3.
Instructions for Authors. [author unknown] - 2000 - Journal of Logic, Language and Information 9 (4):525-531.
Instructions for Authors. [author unknown] - 2004 - Journal of Logic, Language and Information 13 (1):111-116.

Analytics

Added to PP
2023-11-12

Downloads
32 (#495,286)

6 months
32 (#103,580)

