A new view of the functional role of the left anterior cortex in language use is proposed. The experimental record indicates that most human linguistic abilities are not localized in this region. In particular, most of syntax (long thought to be there) is not located in Broca's area and its vicinity (operculum, insula, and subjacent white matter). This cerebral region, implicated in Broca's aphasia, does have a role in syntactic processing, but a highly specific one: It is the neural home to receptive mechanisms involved in the computation of the relation between transformationally moved phrasal constituents and their extraction sites (in line with the Trace-Deletion Hypothesis). It is also involved in the construction of higher parts of the syntactic tree in speech production. By contrast, basic combinatorial capacities necessary for language processing – for example, structure-building operations, lexical insertion – are not supported by the neural tissue of this cerebral region, nor is lexical or combinatorial semantics. The dense body of empirical evidence supporting this restrictive view comes mainly from several angles on lesion studies of syntax in agrammatic Broca's aphasia. Five empirical arguments are presented: experiments in sentence comprehension, cross-linguistic considerations (where aphasia findings from several language types are pooled and scrutinized comparatively), grammaticality and plausibility judgments, real-time processing of complex sentences, and rehabilitation. Also discussed are recent results from functional neuroimaging and from structured observations on speech production of Broca's aphasics. Syntactic abilities are nonetheless distinct from other cognitive skills and are represented entirely and exclusively in the left cerebral hemisphere. Although more widespread in the left hemisphere than previously thought, they are clearly distinct from other human combinatorial and intellectual abilities.
The neurological record (based on functional imaging, split-brain and right-hemisphere-damaged patients, as well as patients suffering from a breakdown of mathematical skills) indicates that language is a distinct, modularly organized neurological entity. Combinatorial aspects of the language faculty reside in the human left cerebral hemisphere, but only the transformational component (or algorithms that implement it in use) is located in and around Broca's area. Key Words: agrammatism; aphasia; Broca's area; cerebral localization; dyscalculia; functional neuroanatomy; grammatical transformation; modularity; neuroimaging; syntax; trace deletion.
There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust and face-to-face trust.
In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
As software developers design artificial agents, they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.
In this paper, we examine some ethical implications of a controversial court decision in the United States involving Verizon (an Internet Service Provider or ISP) and the Recording Industry Association of America (RIAA). In particular, we analyze the impacts this decision has for personal privacy and intellectual property. We begin with a brief description of the controversies and rulings in this case. This is followed by a look at some of the challenges that peer-to-peer (P2P) systems, used to share digital information, pose for our legal and moral systems. We then examine the concept of privacy to better understand how the privacy of Internet users participating in P2P file-sharing practices is threatened under certain interpretations of the Digital Millennium Copyright Act (DMCA) in the United States. In particular, we examine the implications of this act for a new form of “panoptic surveillance” that can be carried out by organizations such as the RIAA. We next consider the tension between privacy and property-right interests that emerges in the Verizon case, and we examine a model proposed by Jessica Litman for distributing information over the Internet in a way that respects both privacy and property rights. We conclude by arguing that in the Verizon case, we should presume in favor of privacy as the default position, and we defend the view that a presumption should be made in favor of sharing (rather than hoarding) digital information. We also conclude that in the Verizon case, a presumption in favor of property would have undesirable effects and would further legitimize the commodification of digital information – a recent trend that is reinforced by certain interpretations of the DMCA on the part of lawmakers and by aggressive tactics used by the RIAA.
This essay examines some ethical aspects of stalking incidents in cyberspace. Particular attention is focused on the Amy Boyer/Liam Youens case of cyberstalking, which has raised a number of controversial ethical questions. We limit our analysis to three issues involving this particular case. First, we suggest that the privacy of stalking victims is threatened because of the unrestricted access to on-line personal information, including on-line public records, currently available to stalkers. Second, we consider issues involving moral responsibility and legal liability for Internet service providers (ISPs) when stalking crimes occur in their ‘space’ on the Internet. Finally, we examine issues of moral responsibility for ordinary Internet users to determine whether they are obligated to inform persons whom they discover to be the targets of cyberstalkers.
To many who develop and use free software, the GNU General Public License represents an embodiment of the meaning of free software. In this paper we examine the definition and meaning of free software in the context of three events surrounding the GNU General Public License. We use a case involving the GPU software project to establish the importance of Freedom 0 in the meaning of free software. We analyze version 3 of the GNU General Public License and conclude that although a credible case can be made that the added restrictions are consistent with the definition of free software, the case requires subtle arguments. Strong arguments against the added restrictions are less subtle, and may therefore be more convincing to many users and developers. We also analyze the Affero General Public License and conclude that it is inconsistent with the definition of free software.
I present the scope and characteristics of Marx's interest in Russia and review its evolution. Initially, Marx's attitudes were marked by russophobia, pronounced anti-panslavism, assessments of Russia as an outpost of European reaction and counterrevolution, and even as the head of a conspiracy to block the world revolution. With time, however, Marx came to consider Russia as the country in which the outbreak of the Revolution was most likely. In his research for successive volumes of Capital, he read Russian theoretical works by, among others, V. Bervi-Flerovskij and A. Koshelev. Marx's attitudes to the anticipated peasant revolution in Russia remained ambivalent; to a certain degree he feared its occurrence, suspecting that it could take on an ‘asiatic’ hue.
In this age of information technology, it is morally imperative that equal access to information via computer systems be afforded to people with disabilities. This paper addresses the problems that computer technology poses for students with disabilities and discusses what is needed to ensure equity of access, particularly in a university environment.
I begin with a characterization of neurolinguistic theories, trying to pinpoint some general properties that an account of brain/language relations should have. I then address specific criticisms made in the commentaries regarding the syntactic theory assumed in the target article, properties of the Trace Deletion Hypothesis (TDH) and the Tree-Pruning Hypothesis (TPH), other experimental results from aphasia, and findings from functional neuroimaging. Despite the criticism, the picture of the limited role of Broca's area remains unchanged.
This paper applies social-relational models of moral standing of robots to cases where the encounters between the robot and humans are relatively brief. Our analysis spans the spectrum of non-social robots to fully-social robots. We consider cases where the encounters are between a stranger and the robot and do not include its owner or operator. We conclude that the developers of robots that might be encountered by other people when the owner is not present cannot wash their hands of responsibility. They must take care with how they develop the robot’s interface with people and take into account how that interface influences the social relationship between it and people, and, thus, the moral standing of the robot with each person it encounters. Furthermore, we claim that developers have responsibility for the impact social robots have on the quality of human social relationships.