Something AI Should Tell You – The Case for Labelling Synthetic Content

Journal of Applied Philosophy 42 (1):272-286 (2025)

Abstract

Synthetic content, produced by generative artificial intelligence, is beginning to spread through the public sphere. Increasingly, we find ourselves exposed to convincing ‘deepfakes’ and powerful chatbots in our online environments. How should we mitigate the emerging risks to individuals and society? This article argues that labelling synthetic content in public forums is an essential first step. While calls for labelling have been growing in volume, no principled argument has yet been offered to justify this measure (which inevitably comes with some additional costs). Rectifying that deficit, I conduct a close examination of our epistemic and expressive interests in identifying synthetic content as such. In so doing, I develop a cumulative case for social media platforms to enforce a labelling duty. I argue that this represents an important element of good platform governance, helping to shore up the integrity of our contemporary public discourse, which increasingly takes place online.


Links

PhilArchive
Analytics

Added to PP: 2024-08-25
Downloads: 41 (#654,116)
Downloads (last 6 months): 28 (#135,585)


Author's Profile

Sarah A. Fisher
Cardiff University