Fracking our humanity

Journal of Medical Ethics 49 (3):181-182 (2023)

Abstract

Nietzsche claimed that once we know why to live, we’ll suffer almost any how.1 Artificial intelligence (AI) is used widely for the how, but Ferrario et al now advocate using AI for the why.2 Here, I offer my doubts on practical grounds but foremost on ethical ones. Practically, individuals already vacillate over the why, wavering with time and circumstance. That AI could provide prosthetics (or orthotics) for human agency feels unrealistic here, not least because ‘answers’ would be largely unverifiable. Ethically, the concern is that AI stands to frack our humanity. We form a fragile ecosystem of ethical subjects, our responsiveness to others’ suffering enabled by our own. To deliberate together for incapacitated others is among those solemn privileges that verify our humanity. Having AI mine these delicate pain-forests risks treating our suffering as the new oil—to be extracted and exploited, but beyond our vision and at our cost. Let’s briefly develop each idea, starting with the how/why distinction. This is palpable, even for more prosaic questions like how or why to drive. The former admits of increasingly sophisticated technological fixes and nudges; the latter often remains very particular and personal. How much greater then, the difference between …
