Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—have not been explored thoroughly enough in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices.

We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.
Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers — first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.
This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
In February 2012, the Obama White House endorsed a Privacy Bill of Rights, comprising seven principles. The third, “Respect for Context,” is explained as the expectation that “companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.” One can anticipate the contested interpretations of this principle as parties representing diverse interests vie to make theirs the authoritative one. In the paper I will discuss three possibilities and explain why each does not take us far beyond the status quo, which regulators in the United States, Europe, and beyond have found problematic. I will argue that contextual integrity offers the best way forward for protecting privacy in a world where information increasingly mediates our significant activities and relationships. Although an important goal is to influence policy, this paper aims less to stipulate explicit rules than to present an underlying justificatory, or normative, rationale. Along the way, it will review key ideas in the theory of contextual integrity, its differences from existing approaches, and its harmony with basic intuition about information sharing practices and norms.
This article highlights a contemporary privacy problem that falls outside the scope of dominant theoretical approaches. Although these approaches emphasize the connection between privacy and a protected personal (or intimate) sphere, many individuals perceive a threat to privacy in the widespread collection of information even in realms normally considered “public”. In identifying and describing the problem of privacy in public, this article does preliminary work in a larger effort to map out future theoretical directions.
According to the theory of contextual integrity (CI), privacy norms prescribe information flows with reference to five parameters — sender, recipient, subject, information type, and transmission principle. Because privacy is grasped contextually (e.g., health, education, civic life), the values of these parameters range over contextually meaningful ontologies — of information types (or topics) and actors (subjects, senders, and recipients), in contextually defined capacities. As an alternative to predominant approaches to privacy, which were ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and to provide grounds for either accepting or rejecting them. Mounting challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-ravenous machine learning systems, similar in form though magnified in scope, call for renewed attention to theory. This Article introduces the metaphor of a data (food) chain to capture the nature of these challenges. With motion up the chain, where data of higher order is inferred from lower-order data, the crucial question is whether privacy norms governing lower-order data are sufficient for the inferred higher-order data. While CI has a response to this question, a greater challenge comes from data primitives, such as digital impulses of mouse clicks, motion detectors, and bare GPS coordinates, because they appear to have no meaning. Absent a semantics, they escape CI’s privacy norms entirely.
This paper identifies two conceptions of security in contemporary concerns over the vulnerability of computers and networks to hostile attack. One is derived from individual-focused conceptions of computer security developed in computer science and engineering. The other is informed by the concerns of national security agencies of government as well as those of corporate intellectual property owners. A comparative evaluation of these two conceptions utilizes the theoretical construct of “securitization,” developed by the Copenhagen School of International Relations.
The spread of new information and communications technologies during the past two decades has helped reshape civic associations, political communities, and global relations. In the midst of the information revolution, we find that the speed of this technology-driven change has outpaced our understanding of its social and ethical effects. The moral dimensions of this new technology and its effects on social bonds need to be questioned and scrutinized: Should the Internet be understood as a new form of public space and a source of public good? What are we to make of hackers? Does the Internet strengthen or weaken community? In The Internet in Public Life, essayists confront these and other important questions. This timely and necessary volume makes clear the need for a broader conversation about the effects of the Internet, and the questions raised by these seven essays highlight some of the most pressing issues at hand.