A growing amount of media is paid for by its consumers through their very consumption of it. Typically, this new media is web-based and paid for by advertising. It includes the services offered by Facebook, Instagram, Snapchat, and YouTube. We offer an ethical assessment of the attention economy, the market where attention is exchanged for new media. We argue that the assessment has ethical implications for how the attention economy should be regulated. To conduct the assessment, we employ two heuristics for evaluating markets. One is the “harm” criterion, which relates to whether the market tends to engender extremely harmful outcomes for individuals or society as a whole. The other is the “agency” criterion, which relates not to the outcomes of the market, but rather, to whether it somehow reflects or has its source in weakened agency. We argue that the attention economy animates concerns with respect to both criteria and that new media should be subject to the same sort of regulation as other harmful, addictive products.
When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core.
ABSTRACT: So far in this book, we have examined algorithmic decision systems from three autonomy-based perspectives: in terms of what we owe autonomous agents (chapters 3 and 4), in terms of the conditions required for people to act autonomously (chapters 5 and 6), and in terms of the responsibilities of agents (chapter 7).

In this chapter we turn to the ways in which autonomy underwrites democratic governance. Political authority, which is to say the ability of a government to exercise power, may be justifiable or not. Whether it is justified and how it can come to be justified is a question of political legitimacy. Political legitimacy is another way in which autonomy and responsibility are linked. This relationship is the basis of the current chapter, and it is important in understanding the moral salience of algorithmic systems. We draw the connection as follows. We begin, in section 8.1, by describing two uses of technology: crime-predicting technology used to drive policing practices and social media technology used to influence elections (including by Cambridge Analytica and by the Internet Research Agency). In section 8.2 we consider several views of legitimacy and argue for a hybrid version of normative legitimacy based on one recently offered by Fabienne Peter. In section 8.3 we explain that the connection between political legitimacy and autonomy is that legitimacy is grounded in legitimating processes, which are in turn based on autonomy. Algorithmic systems—among them PredPol and the Cambridge Analytica-Facebook-Internet Research Agency amalgam—can hinder that legitimation process and conflict with democratic legitimacy, as we argue in section 8.4. We conclude by returning to several cases that serve as through-lines to the book: Loomis, Wagner, and Houston Schools.

The link below is to an open-access copy of the chapter.
Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making. Three of these involve failures to treat individual agents with the respect they deserve. The fourth involves distancing oneself from a morally suspect action by attributing one’s decision to take that action to an algorithm, thereby laundering one’s agency.
New media (highly interactive digital technology for creating, sharing, and consuming information) affords users a great deal of control over their informational diets. As a result, many users of new media unwittingly encapsulate themselves in epistemic bubbles (epistemic structures, such as highly personalized news feeds, that leave relevant sources of information out (Nguyen forthcoming)). Epistemically paternalistic alterations to new media technologies could be made to pop at least some epistemic bubbles. We examine one such alteration that Facebook has made in an effort to fight fake news and conclude that it is morally permissible. We further argue that many epistemically paternalistic policies can (and should) be a perennial part of the internet information environment.
Psychometrics firms such as Cambridge Analytica (CA) and troll factories such as the Internet Research Agency (IRA) have had a significant effect on democratic politics, through narrow targeting of political advertising (CA) and concerted disinformation campaigns on social media (IRA) (U.S. Department of Justice 2019; Select Committee on Intelligence, United States Senate 2019; DiResta et al. 2019). It is natural to think that such activities manipulate individuals and, hence, are wrong. Yet, as some recent cases illustrate, the moral concerns with these activities cannot be reduced simply to the effects they have on individuals. Rather, we will argue, the wrongness of these activities relates to the threats they present to the legitimacy of political orders. This occurs primarily through a mechanism we call “emergent manipulation,” rather than through the sort of manipulation that involves specific individuals.
We offer an ethical assessment of the market for data used to generate what are sometimes called “consumer scores” (i.e., numerical expressions that are used to describe or predict people’s dispositions and behavior), and we argue that the assessment has ethical implications for how the market for consumer scoring data should be regulated. To conduct the assessment, we employ two heuristics for evaluating markets. One is the “harm” criterion, which relates to whether the market produces serious harms, either for participants in the market, for third parties, or for society as a whole. The other is the “agency” criterion, which relates to whether participants understand the nature and significance of the exchanges they are making, whether they can be guaranteed fair representation, and whether there is differential need for the market’s good. We argue that consumer scoring data should be subject to the same sort of regulation as the older FICO credit scores. Although the 1990s movement to regulate FICO scores was not aimed at restraining a market per se, we argue that the reforms were underwritten by concerns about the same sorts of problems as those outlined by our heuristics. Therefore, consumer data should be subject to the same sort of regulation.
In this paper, I compare the methodology of the Austrian school to two alternative methodologies from the economic mainstream: the ‘orthodox’ and revealed preference methodologies. I argue that Austrian school theorists should stop describing themselves as ‘extreme apriorists’ (or writing suggestively to that effect), and should start giving greater acknowledgement to the importance of empirical work within their research program. The motivation for this dialectical shift is threefold: the approach is more faithful to their actual practices, it better illustrates the underlying similarities between the mainstream and Austrian research paradigms, and it provides a philosophical foundation that is much more plausible in itself.
This paper offers some refinements to a particular objection to act consequentialism, the “causal impotence” objection. According to proponents of the objection, when we find circumstances in which severe, unnecessary harms result entirely from voluntary acts, it seems as if we should be able to indict at least one act among those acts, but act consequentialism appears to lack the resources to offer this indictment. Our aim is to show that the most promising response on behalf of act consequentialism, the threshold argument, cannot offer a fully general prescription about what to do in cases of collective action.
This paper has two aims. The first is to explain a type of wrong that arises when agents obscure responsibility for their actions. Call it “agency laundering.” The second is to use the concept of agency laundering to understand the underlying moral issues in a number of recent cases involving algorithmic decision systems. From the Proceedings of the 14th International Conference, iConference 2019, Washington D.C., March 31-April 3, 2019.
ABSTRACT: One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integral to respecting autonomy, and transparency policies should be tailored to advance autonomy.

To make this argument we distinguish two facets of agency (i.e., capacity to act). The first is practical agency, or the ability to act effectively according to one’s values. The second is what we call cognitive agency, which is the ability to exercise what Pamela Hieronymi calls “evaluative control” (i.e., the ability to control our affective states, such as beliefs, desires, and attitudes). We argue that respecting autonomy requires providing persons sufficient information to exercise evaluative control and properly interpret the world and one’s place in it. We draw this distinction out by considering algorithmic systems used in background checks, and we apply the view to key cases involving risk assessment in criminal justice decisions and K-12 teacher evaluation.

The link below is to an open access version of the chapter.
We argue that an essential element of understanding the moral salience of algorithmic systems requires an analysis of the relation between algorithms and agency. We outline six key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making.