Artificial understanding: a step toward robust AI

AI and Society:1-13 (forthcoming)

Abstract

In recent years, state-of-the-art artificial intelligence systems have started to show signs of what might be seen as human-level intelligence. More specifically, large language models such as OpenAI’s GPT-3 and, more recently, Google’s PaLM and DeepMind’s Gato perform impressive feats of text generation. However, many researchers acknowledge that contemporary language models, and learning systems more generally, still lack important capabilities, such as understanding, reasoning, and the ability to employ knowledge of the world and common sense in order to reach, or at least advance toward, general intelligence. Some believe that scaling will eventually bring about these capabilities; others think that a different architecture is needed. In this paper, we focus on the latter view, with the purpose of integrating a theoretical–philosophical conception of understanding as knowledge of dependence relations with the high-level requirements and engineering design of a robust AI system that integrates machine learning and symbolic components.

Links

PhilArchive




Analytics

Added to PP
2023-03-17


Citations of this work

Calibrating machine behavior: a challenge for AI alignment. Erez Firt - 2023 - Ethics and Information Technology 25 (3):1-8.

