Can AI become an Expert?

Journal of AI Humanities 16 (4): 113–136 (2024)

Abstract

With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become essential to mitigating both unfounded anxiety and unwarranted optimism. As part of this endeavor, this study examines the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even when its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human experts. First, I will narrow the scope by proposing a definition of an expert; along the way, three normative components of expertise (trust, explainability, and responsibility) will be presented. Subsequently, I will argue that AI cannot become a trustee, successfully transmit knowledge, or take responsibility. Specifically, the arguments focus on how these factors regulate expert judgments, which are made not in isolation but within complex social connections and spontaneous dialogue. Finally, I will defend the plausibility of the proposed criteria against a potential objection: the claim that some machine-learning-based algorithms, such as AlphaGo, have already been recognized as experts.


Links

PhilArchive


Analytics

Added to PP
2024-10-20


Author's Profile

Hyeongyun Kim
University of Iowa
