Abstract
Simion and Kelp develop the obligation-based account of trustworthiness as a compelling general account of trustworthiness and then apply this account to various instances of AI. By doing so, they explain in what way any AI can be considered trustworthy, as per the general account. Simion and Kelp reject any account of trustworthiness that relies on overly anthropocentric assumptions about agency, such as the assumption that being trustworthy must involve goodwill. I argue that goodwill is a necessary condition for being trustworthy and further suggest a network account of trustworthy AI, which retains goodwill as the essential requirement for the concept of trustworthiness while still predicting that current AI can be trustworthy. On this alternative account, the focus of trustworthy AI is not merely the AI technology but the whole network of AI, involving the AI technology, AI designers, AI companies, and other social and legal institutions. A trustworthy AI requires that the AI technology is reliable and that the other involved agents are trustworthy.
Notes
Rather than using an alarm clock, I may trust the family member to wake me in time to prepare for the job interview.
Thanks to the anonymous reviewer’s comment on this point.
It is worth noting that according to ANT, every element in the network is equally important (Latour 2007, 104). However, it might not always be the case that every element in a network of AI is equally important for a trustworthy AI; this might depend on the nature of the particular AI. For example, for some supporting AI (such as dose AI), a trustworthy physician and a reliable AI algorithm are equally important in determining whether we should trust that AI. If the AI algorithm is reliable, but the physicians are in general not well trained to interpret or evaluate its results, then we should not trust that AI. In contrast, for a recommendation AI, whether the users are well trained to interpret the results might be less important than a reliable algorithm.
Durán and Jongsma argue that, under computational reliabilism, we can rely on an opaque algorithm that offers correct results most of the time (see Durán and Jongsma 2021).
Alfano et al. argue that YouTube’s recommendation AI risks transforming users into conspiracy theorists (see Alfano et al. 2021). Carter (2023) also comments that Simion and Kelp’s account fails to address AI, such as YouTube’s recommendation system, that has a reliable algorithm yet plays a role in recommending conspiratorial content.
Thanks to the anonymous reviewer’s comment.
References
Alfano, M., Fard, A. E., Carter, J. A., Clutton, P., & Klein, C. (2021). Technologically scaffolded atypical cognition: the case of YouTube’s recommender system. Synthese, 199(1), 835–858. https://doi.org/10.1007/s11229-020-02724-x
Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.
Braun, M., Bleher, H., & Hummel, P. (2021). A leap of faith: is there a formula for “trustworthy” AI? Hastings Center Report, 51(3), 17–22. https://doi.org/10.1002/hast.1207
Carter, J. A. (2023). Simion and Kelp on trustworthy AI. Asian Journal of Philosophy, 2, 18. https://doi.org/10.1007/s44204-023-00067-1
Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329–335. https://doi.org/10.1136/medethics-2020-106820
Goldberg, S. C. (2020). Trust and reliance. In The Routledge handbook of trust and philosophy. Routledge.
IHEGAI (Independent High-Level Expert Group on Artificial Intelligence). (2019). Ethics guidelines for trustworthy AI. Brussels: European Commission. Retrieved May 21, 2023, from https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4–25. http://www.jstor.org/stable/2382241
Johnson, D., Goodman, R., Patrinely, J., Stone, C., Zimmerman, E., Donald, R., Chang, S., Berkowitz, S., Finn, A., Jahangir, E., Scoville, E., Reese, T., Friedman, D., Bastarache, J., van der Heijden, Y., Wright, J., Carter, N., Alexander, M., Choe, J., & Wheless, L. (2023). Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the chat-GPT model. Research Square, 3, 2566942. https://doi.org/10.21203/rs.3.rs-2566942/v1
Kelp, C., & Simion, M. (2023). What is trustworthiness? Noûs. https://doi.org/10.1111/nous.12448
Latour, B. (2007). Reassembling the social: an introduction to actor-network-theory. Oxford University Press.
McLeod, C. (2020). Trust. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy.
Potter, N. N. (2002). How can I be trusted? A virtue theory of trustworthiness. Rowman & Littlefield Publishers.
Ryan, M. (2020). In AI we trust: ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y
Ryan, S. (2018). Trust: a recipe. Think, 17(50), 113–125.
Shen, X., Chen, Z., Backes, M., & Zhang, Y. (2023). In ChatGPT we trust? Measuring and characterizing the reliability of ChatGPT (arXiv:2304.08979). arXiv. https://doi.org/10.48550/arXiv.2304.08979
Simion, M., & Kelp, C. (2023). Trustworthy artificial intelligence. Asian Journal of Philosophy, 2(1), 1–12.
Ethics declarations
Competing interests
The author declares no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Song, F. Network of AI and trustworthy: response to Simion and Kelp’s account of trustworthy AI. AJPH 2, 58 (2023). https://doi.org/10.1007/s44204-023-00108-9