
Citations of:

Moral Status and Agent-Centred Options

Utilitas 31 (1):83-105 (2019)

  1. Moral Uncertainty, Pure Justifiers, and Agent-Centred Options. Patrick Kaczmarek & Harry R. Lloyd - forthcoming - Australasian Journal of Philosophy.
    Moral latitude is only ever a matter of coincidence on the most popular decision procedure in the literature on moral uncertainty. In all possible choice situations other than those in which two or more options happen to be tied for maximal expected choiceworthiness, Maximize Expected Choiceworthiness implies that only one possible option is uniquely appropriate. A better theory of appropriateness would be more sensitive to the decision maker’s credence in theories that endorse agent-centred prerogatives. In this paper, we will develop (...)
  2. Accommodating Options. Seth Lazar - 2018 - Pacific Philosophical Quarterly 100 (1):233-255.
    Many of us think we have agent-centred options to act suboptimally. Some of these involve favouring our own interests. Others involve sacrificing them. In this paper, I explore three different ways to accommodate agent-centred options in a criterion of objective permissibility. I argue against satisficing and rational pluralism, and in favour of a principle built around sensitivity to personal cost.
  3. What’s Wrong with Automated Influence. Claire Benn & Seth Lazar - 2022 - Canadian Journal of Philosophy 52 (1):125-148.
    Automated Influence is the use of Artificial Intelligence to collect, integrate, and analyse people’s data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of “AI Ethics” in favour of a more structural, political philosophy of AI, we show that the real problem (...)