
Network of AI and trustworthy: response to Simion and Kelp’s account of trustworthy AI

  • Book Symposium
  • Published:
Asian Journal of Philosophy

Abstract

Simion and Kelp develop the obligation-based account of trustworthiness as a compelling general account and then apply it to various instances of AI. In doing so, they explain in what way any AI can be considered trustworthy according to the general account. Simion and Kelp reject any account of trustworthiness that relies on assumptions of agency they take to be too anthropocentric, such as the assumption that being trustworthy must involve goodwill. I argue that goodwill is a necessary condition for being trustworthy and further suggest a network account of trustworthy AI, which retains goodwill as the essential requirement of the concept of trustworthiness while still predicting that current AI can be trustworthy. On this alternative account, the focus of trustworthy AI is not merely the AI technology itself, but the whole network of AI, involving the AI technology, AI designers, AI companies, and other social and legal institutions. A trustworthy AI requires that the AI technology be reliable and that the other involved agents be trustworthy.


Notes

  1. Rather than using an alarm clock, I may trust the family member to wake me in time to prepare for the job interview.

  2. Thanks to the anonymous reviewer’s comment on this point.

  3. Confusingly, Simion and Kelp (2023, 8) use reliance and reliability interchangeably, including in their discussion of the work of Baier (1986), who never uses the word reliability, and Potter (2002), who rarely uses it.

  4. It is worth noting that, according to ANT, every element in the network is equally important (Latour 2007, 104). However, it might not always be the case that every element in a network of AI is equally important for a trustworthy AI; it might depend on the nature of the particular AI. For example, for some supporting AI (such as a dosing AI), a trustworthy physician and a reliable AI algorithm are equally important in determining whether we should trust that AI. If the AI algorithm is reliable, but the physicians are in general not well trained to interpret or evaluate its results, then we should not trust that AI. In contrast, for a recommendation AI, whether the users are well trained to interpret the results might be less important than a reliable algorithm.

  5. Durán and Jongsma argue that, under computational reliabilism, we can rely on an opaque algorithm that offers correct results most of the time (see Durán and Jongsma 2021).

  6. Alfano et al. (2021) argue that YouTube's recommendation AI can have the effect of turning users into conspiracy theorists. Carter (2023) likewise comments that Simion and Kelp's account fails to address AI, such as YouTube's recommendation system, that has a reliable algorithm but plays a role in recommending conspiratorial content.

  7. Thanks to the anonymous reviewer's comment.

References


Author information

Corresponding author

Correspondence to Fei Song.

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Song, F. Network of AI and trustworthy: response to Simion and Kelp’s account of trustworthy AI. AJPH 2, 58 (2023). https://doi.org/10.1007/s44204-023-00108-9

