Abstract
Nietzsche claimed that once we know why to live, we can bear almost any how.1 Artificial intelligence (AI) is used widely for the how, but Ferrario et al now advocate using AI for the why.2 Here, I offer my doubts on practical grounds but foremost on ethical ones. Practically, individuals already vacillate over the why, wavering with time and circumstance. That AI could provide prosthetics (or orthotics) for human agency feels unrealistic here, not least because ‘answers’ would be largely unverifiable. Ethically, the concern is that AI stands to frack our humanity. We form a fragile ecosystem of ethical subjects, our responsiveness to others’ suffering enabled by our own. To deliberate together for incapacitated others is among those solemn privileges that verify our humanity. Having AI mine these delicate pain-forests risks treating our suffering as the new oil: to be extracted and exploited, but beyond our vision and at our cost. Let us briefly develop each idea, starting with the how/why distinction. This distinction is palpable even for more prosaic questions, such as how or why to drive. The former admits of increasingly sophisticated technological fixes and nudges; the latter often remains very particular and personal. How much greater, then, the difference between …