Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the case for computer moral agency: whether computers can have the kinds of intentional states that cause their decisions and actions. I give an account that builds on traditional action theory and allows us to conceive of computers as genuine moral agents in virtue of their own causally efficacious intentional states. These states can cause harm or benefit to moral patients, but do not depend on computer consciousness or intelligence.
Thomas M. Powers (2009). Preface. In Jinfen Yan & David E. Schrader (eds.), Creating a Global Dialogue on Value Inquiry: Papers From the XXII Congress of Philosophy (Rethinking Philosophy Today). Edwin Mellen Press.
In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart, and by Hart and Honoré. While these accounts are helpful, they have misled philosophers and others by presupposing that responsibility can be ascribed in all cases of action simply by paying attention to the free and intended actions of human beings. Such accounts neglect the part played by technology in ascriptions of responsibility in cases of moral action with technology. We argue that ascriptions of both moral and role responsibility depend on seeing action as complex in the sense described by TMA. We conclude by showing how our analysis enriches moral discourse about responsibility for TMA.
Beginning with the well-known cyber-rape in LambdaMOO, I argue that it is possible to have real moral wrongs in virtual communities. I then generalize the account to show how it applies to interactions in gaming and discussion communities. My account is supported by a view of moral realism that acknowledges entities like intentions and causal properties of actions. Austin's speech act theory is used to show that real people can act in virtual communities in ways that both establish practices and moral expectations, and warrant strong identifications between themselves and their online identities. Rawls' conception of a social practice is used to analyze the nature of the wrong and the stage-setting aspect of engaging in a practice.