In this paper we review some problems with traditional approaches for acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer. We show that in a first step it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only if designers have achieved some understanding of these representational mechanisms can user interfaces be designed that enable individual experiences and skill development. We review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.
In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart and by Hart and Honoré. While these accounts are helpful, they have misled philosophers and others by presupposing that responsibility can be ascribed in all cases of action simply by paying attention to the free and intended actions of human beings. Such accounts neglect the part played by technology in ascriptions of responsibility in cases of moral action with technology. For both moral and role responsibility, we argue that ascriptions of both causal and role responsibility depend on seeing action as complex in the sense described by TMA. We conclude by showing how our analysis enriches moral discourse about responsibility for TMA.
Just as AI has moved away from classical AI, human-computer interaction (HCI) must move away from what I call ‘good old fashioned HCI’ to ‘new HCI’ – it must become a part of cognitive systems research where HCI is one case of the interaction of intelligent agents (we now know that interaction is essential for intelligent agents anyway). For such interaction, we cannot just ‘analyze the data’, but we must assume intentions in the other, and I suggest these are largely recognized through resistance to carrying out one’s own intentions. This does not require fully cognitive agents but can start at a very basic level. New HCI integrates into cognitive systems research and designs intentional systems that provide resistance to the human agent.
The essential difficulty about Computer Ethics' (CE) philosophical status is a methodological problem: standard ethical theories cannot easily be adapted to deal with CE-problems, which appear to strain their conceptual resources, and CE requires a conceptual foundation as an ethical theory. Information Ethics (IE), the philosophical foundational counterpart of CE, can be seen as a particular case of environmental ethics or ethics of the infosphere. What is good for an information entity and the infosphere in general? This is the ethical question asked by IE. The answer is provided by a minimalist theory of deserts: IE argues that there is something more elementary and fundamental than life and pain, namely being, understood as information, and entropy, and that any information entity is to be recognised as the centre of a minimal moral claim, which deserves recognition and should help to regulate the implementation of any information process involving it. IE can provide a valuable perspective from which to approach, with insight and adequate discernment, not only moral problems in CE, but also the whole range of conceptual and moral phenomena that form the ethical discourse.
Human-computer interaction today has got a touch of magic: without understanding the causal coherence, using a computer seems to become the art of using the right spell, with the mouse as the magic wand, the sorcerer's staff. Goethe's poem admits an allegoric interpretation. We explicate the analogy between using a computer and casting a spell, with emphasis on teaching magic skills. The art of creating an ergonomic user interface has to take care of various levels of skills for the human operators. The problem of logical discontinuities, as opposed to continuous control, is the most serious obstacle for human-computer interaction.
This essay considers methodological aspects of computer ethics and argues for a multi-level interdisciplinary approach with a central role for what is called disclosive computer ethics. Disclosive computer ethics is concerned with the moral deciphering of embedded values and norms in computer systems, applications and practices. In the methodology for computer ethics research proposed in the essay, research takes place at three levels: the disclosure level, in which ideally philosophers, computer scientists and social scientists collaborate to disclose embedded normativity in computer systems and practices; the theoretical level, in which philosophers develop and modify moral theory; and the application level, which draws from research performed at the other two levels, and at which normative evaluations of computer systems and practices take place.
This paper examines the study of computer-based performance monitoring (CBPM) in the workplace as an issue dominated by questions of ethics. Its central contention is that any investigation of ethical monitoring practice is inadequate if it simply applies best practice guidelines to any one context to indicate whether practice is, on balance, ethical or not. The broader social dynamics of access to procedural and distributive justice, examined through a fine-grained approach to the study of workplace social relations and workplace identity construction, are also important here. This has three implications, which are examined in the paper, and are as follows: first, that it is vital for any empirical investigation of the ethics of CBPM practice to take into account not only its compliance with preexisting best practice guidelines, but also the social relations which pervade the context of its application. Second, that this necessitates a particular epistemological treatment of CBPM as something whose effects are measurable and identifiable, as well as something which has a socially constructed meaning and is tropic in nature. Third, that existing debates against which this argument is set, which regard contrasting epistemologies and ontologies as incompatible, should be addressed, and an alternative introduced. Introducing situated knowledges (Haraway 1991) and material semiotic ontologies as such an alternative, the paper proceeds to analyse the ethics of a particular case of monitoring practice, Norco. Drawing on Marx (1998), the paper concludes that a fine-grained analysis of the social is vital if we are to understand fully the ethics of monitoring in the workplace.
A number of different uniqueness claims have been made about computer ethics in order to justify characterizing it as a distinct subdiscipline of applied ethics. I consider several different interpretations of these claims and argue, first, that none are plausible and, second, that none provide adequate justification for characterizing computer ethics as a distinct subdiscipline of applied ethics. Even so, I argue that computer ethics shares certain important characteristics with medical ethics that justify treating both as separate subdisciplines of applied ethics.
To what extent should humans transfer, or abdicate, responsibility to computers? In this paper, I distinguish six different senses of 'responsible' and then consider in which of these senses computers can, and in which they cannot, be said to be responsible for deciding various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater control of our lives: (i) as finite and fallible human beings, there is a limit to how far we can achieve increased reliability through complex devices of our own design; (ii) even when computers are more reliable than humans, certain tasks (e.g., selecting an appropriate gift for a friend, solving the daily crossword puzzle) are inappropriately performed by anyone (or anything) other than oneself. In critically evaluating these claims, I arrive at three main conclusions: (1) While we ought to correct for many of our shortcomings by availing ourselves of the computer's larger memory, faster processing speed and greater stamina, we are limited by our own finiteness and fallibility (rather than by whatever limitations may be inherent in silicon and metal) in the ability to transcend our own unreliability. Moreover, if we rely on programmed computers to such an extent that we lose touch with the human experience and insight that formed the basis for their programming design, our fallibility is magnified rather than mitigated. (2) Autonomous moral agents can reasonably defer to greater expertise, whether human or cybernetic. But they cannot reasonably relinquish background-oversight responsibility. They must be prepared, at least periodically, to review whether the expertise to which they defer is indeed functioning as he/she/it was authorized to do, and to take steps to revoke that authority, if necessary. (3) Though outcomes matter, it can also matter how they are brought about, and by whom.
Thus, reflecting on how much of our lives should be directed and implemented by computer may be another way of testing any thoroughly end-state or consequentialist conception of the good and decent life. To live with meaning and purpose, we need to actively engage our own faculties and empathetically connect up with, and resonate to, others. Thus there is some limit to how much of life can be appropriately lived by anyone (or anything) other than ourselves.
The issue of the role of users in knowledge-based systems can be investigated from two aspects: the design aspect and the functionality aspect. Participatory design is an important approach for the first aspect, while system adaptability supported by user modelling is crucial to the second. In this article, we discuss the second aspect. We view a knowledge-based computer system as a partner in users' problem-solving processes, and we argue that the system's functionality can be enhanced by adapting the behaviour of the system to fit the needs of users with different profiles. We emphasise that the notion of user modelling is crucial to realising this kind of flexibility. User modelling will be beneficial to the user not only through adaptive interfaces, but also through enhanced system adaptability. In a knowledge-based system, by incorporating user models, searching can be restricted to a smaller portion of the knowledge base, thus enhancing system functionality. In other words, user modelling is incorporated to realise flexible inference control to achieve system adaptability. An example is provided, and a general conceptual model is sketched. We conclude this paper by emphasising that the design aspect and the functionality aspect are complementary. Achieving enhanced functionality through the joint efforts of computers and human users indicates a kind of participatory execution of computerised problem-solving, or participatory problem-solving.
It has become quite common nowadays for people to develop 'personal' relationships exclusively via extensive correspondence across the Net. Friendships, even romantic love relationships, are, apparently, flourishing. But what kind of relations really are possible in this way? In this paper, we focus on the case of close friendship. There are various important markers that identify a relationship as one of close friendship. One will have, for instance, strong affection for the other, a disposition to act for their well-being and a desire for shared experiences. Now obviously, while all these features of friendship can gain some expression through extensive correspondence on the Net, such expression is necessarily limited – you cannot, e.g., physically embrace the other, or go on a picnic together. The issue we want to address here, however, is whether there might be distinctive and important influences on the structure of interaction undertaken on the Net that affect the kind of identity 'Net-friends' can develop in relation to one another. In the normal case, one develops a close friendship, and in doing so, one's identity, in part, is shaped by the friendship. To some extent, through extensive shared experience, one comes to see aspects of the world (and of oneself) through the eyes of one's friend and so, in part, one's identity develops in an importantly relational way, i.e., as the product of one's relation with the close friend. In our view, however, on account of the limits of, and/or the kind of, shared contact and experience one can have with another via correspondence on the Net, there are significant structural barriers to developing the sort of relational identity that is a feature of close friendship. In arguing our case here, and by using the case of Net 'friendship' as our foil, we aim to shed light on the nature and importance of certain sorts of self-expression and relational interaction found in close friendship.
This essay addresses ethical aspects of the design and use of virtual reality (VR) systems, focusing on the behavioral options made available in such systems and the manner in which reality is represented or simulated in them. An assessment is made of the morality of immoral behavior in virtual reality, and of the virtual modeling of such behavior. Thereafter, the ethical aspects of misrepresentation and biased representation in VR applications are discussed.
Many people have a strong intuition that there is something morally objectionable about playing violent video games, particularly with increases in the number of people who are playing them and the games' alleged contribution to some highly publicized crimes. In this paper, I use the framework of utilitarian, deontological, and virtue ethical theories to analyze the possibility that there might be some philosophical foundation for these intuitions. I raise the broader question of whether or not participating in authentic simulations of immoral acts in general is wrong. I argue that neither the utilitarian nor the Kantian has substantial objections to violent game playing, although they offer some important insights into playing games in general and what it means, morally, to be a 'good sport'. The Aristotelian, however, has a plausible and intuitive way to protest participation in authentic simulations of violent acts in terms of character: engaging in simulated immoral acts erodes one's character and makes it more difficult for one to live a fulfilled eudaimonic life.
Computer and information ethics, as well as other fields of applied ethics, need ethical theories which coherently unify deontological and consequentialist aspects of ethical analysis. The proposed theory of just consequentialism emphasizes consequences of policies within the constraints of justice. This makes just consequentialism a practical and theoretically sound approach to ethical problems of computer and information ethics.
Privacy concerns involving data mining are examined in terms of four questions: What exactly is data mining? How does data mining raise concerns for personal privacy? How do privacy concerns raised by data mining differ from those concerns introduced by traditional information-retrieval techniques in computer databases? How do privacy concerns raised by mining personal data from the Internet differ from those concerns introduced by mining such data from data warehouses? It is argued that the practice of using data-mining techniques, whether on the Internet or in data warehouses, to gain information about persons raises privacy concerns that go beyond concerns introduced in traditional information-retrieval techniques in computer databases and are not covered by present data-protection guidelines and privacy laws.
My aim in this paper is to go some way towards showing that the maintenance of hard and fast dichotomies, like those between mind and body, and the real and the virtual, is untenable, and that technological advance cannot occur without being cognisant of its reciprocal ethical implications. In their place I will present a softer enactivist ontology through which I examine the nature of our engagement with technology in general and with virtual realities in particular. This softer ontology is one to which I will commit Kant, and from which, I will show, certain critical moral and emotional consequences arise. It is my contention that Kant's logical subject is necessarily embedded in the world and that Kant, himself, would be content with this view as an expression of his inspired response to the "scandal to philosophy... that the existence of things outside us... must be accepted merely on faith" [Bxl]. In keeping with his arguments for the a priori framing of intuition, the a priori structuring of experience through the spontaneous application of the categories, the synthesis of the experiential manifold, and the necessity of a unity of apperception, I will present an enactivist account of agency in the world, and argue that it is our embodied and embedded kinaesthetic engagement in our world which makes possible the syntheses of apprehension, reproduction and recognition, and which, in turn, make possible the activity of the reproductive or creative imagination.
Computing plays an important role in genetics (and vice versa). Theoretically, computing provides a conceptual model for the function and malfunction of our genetic machinery. Practically, contemporary computers and robots equipped with advanced algorithms make the revelation of the complete human genome imminent: computers are about to reveal our genetic souls for the first time. Ethically, computers help protect privacy by restricting access in sophisticated ways to genetic information. But the inexorable fact that computers will increasingly collect, analyze, and disseminate abundant amounts of genetic information made available through the genetic revolution, not to mention that inexpensive computing devices will make genetic information gathering easier, underscores the need for strong and immediate privacy legislation.
This paper considers the moral responsibility of computer scientists with respect to weapons development in post-9/11 America. It does so by looking at the doctrine of jus in bello as exemplified in four scenarios. It argues that the traditional doctrine should be augmented by a number of principles, including the Principle of a Morally Obligatory Smart Arms Race, the Principle of Assistance to One's Enemies, the Principle of Public Debate on Weapons of Mass Disruption, and the Principle of the Moral Unjustifiability of Private Wars.
The infrastructure is becoming a network of computerized machines regulated by societies of self-directing software agents. Complexity encourages the emergence of novel values in software agent societies. Interdependent human and software political orders cohabitate and coevolve in a symbiosis of freedoms.
Psychotherapy and counselling services are now available on-line, and expanding rapidly. Yet there appears almost no ethical analysis of this on-line mode of delivery of such professional services. In this paper I present such an analysis by considering the limitations on-line contact imposes on the nature of the professional–client relationship. The analysis proceeds via the contrast between the face-to-face case and the on-line case. At the core of the problem must be the recognition that on-line interaction imposes a physical barrier largely permitting only those disclosures of self we choose to make available, and greatly restricting the range of involuntary features and behaviours. I show why this is problematic, first, for the development of a close professional–client relationship, with particular emphasis on failures of diagnosis and monitoring of the patient. Second, I describe the importance of the development of professional character, and of how the on-line environment fails to provide a context for such character traits to emerge and develop.
This is a review of Hans Moravec's book, Robot: Mere Machine to Transcendent Mind. This review raises three categories of questions relating to Moravec's vision of the future. First, there are the ethical and social issues implicit in robotics research. Second, there are the soul issues, which especially relate to the prospect of the demoralization of human beings. Third, there is the issue of whether a robot could ever be a sentient being.
The era of the cyborg is now upon us. This has enormous implications for ethical values, for both humans and cyborgs. In this paper the state of play is discussed. Routes to cyborgisation are introduced and different types of cyborg are considered. The author's own self-experimentation projects are described as central to the theme. The presentation involves ethical aspects of cyborgisation both as it stands now and those which need to be investigated in the near future as the effects of increased technological power have a more dramatic influence. An important feature is the potential for cyborgs to act against, rather than for, the interests of humanity.
This article develops a critical theory of human–computer interaction intended to test some of the assumptions and omissions made in the field as it transitions from a cognitive theoretical frame to a phenomenological understanding of user experience, described by Harrison et al. as a third research paradigm and similarly by Bødker (Interactions 22:24–31, 2015) as third-wave HCI. Although this particular focus on experience has provided some novel avenues of academic enquiry, this article draws attention to a distinct bridge between the conventional HCI disciplinary concerns with predominantly task-based digital work and use context and a growing business interest in consumer experiences in digital environments. Critical HCI addresses the problem of experience in two interrelated ways. On one hand, it explores the role market logic plays in putting user experiences to work. On the other hand, it engages with ontological understandings of experience hitherto realized in HCI by way of a phenomenological matrix. The article concludes by bringing in an old thinker to consider experience in novel ways that relate ontological concerns to a broader political concept of experience capitalism.
Anonymity is a form of nonidentifiability which I define as noncoordinatability of traits in a given respect. This definition broadens the concept, freeing it from its primary association with naming. I analyze different ways anonymity can be realized. I also discuss some ethical issues, such as privacy, accountability and other values which anonymity may serve or undermine. My theory can also conceptualize anonymity in information systems where, for example, privacy and accountability are at issue.
Traditional human–computer interaction (HCI) allowed researchers and practitioners to share and rely on the ‘five E’s’ of usability, the principle that interactive systems should be designed to be effective, efficient, engaging, error tolerant, and easy to learn. A recent trend in HCI, however, is that academic researchers as well as practitioners are becoming increasingly interested in user experiences, i.e., understanding and designing for relationships between users and artifacts that are for instance affective, engaging, fun, playable, sociable, creative, involving, meaningful, exciting, ambiguous, and curious. In this paper, it is argued that built into this shift in perspective there is a concurrent shift in accountability that is drawing attention to a number of ethical, moral, social, cultural, and political issues that have been traditionally de-emphasized in a field of research guided by usability concerns. Not surprisingly, this shift in accountability has also received scarce attention in HCI. To be able to find any answers to the question of what makes a good user experience, the field of HCI needs to develop a philosophy of technology. One building block for such a philosophy of technology in HCI is presented. Albert Borgmann argues that we need to be cautious and rethink the relationship, as well as the often-assumed correspondence, between what we consider useful and what we think of as good in technology. This junction – that some technologies may be both useful and good, while some technologies that are useful for some purposes might also be harmful, less good, in a broader context – is at the heart of Borgmann’s understanding of technology.
Borgmann’s notion of the device paradigm is a valuable contribution to HCI as it points out that we are increasingly experiencing the world with, through, and by information technologies and that most of these technologies tend to be designed to provide commodities that effortlessly grant our wishes without demanding anything in return, such as patience, skills, or effort. This paper argues that Borgmann’s work is relevant and makes a valuable contribution to HCI in at least two ways: first, as a different way of seeing that raises important social, cultural, ethical, and moral issues from which contemporary HCI cannot escape; and second, as providing guidance as to how specific values might be incorporated into the design of interactive systems that foster engagement with reality.
The information revolution has fostered the rise of new ways of waging war, generally by means of cyberspace-based attacks on the infrastructures upon which modern societies increasingly depend. This new way of war is primarily disruptive, rather than destructive; and its low barriers to entry make it possible for individuals and groups (not just nation-states) easily to acquire very serious war-making capabilities. The less lethal appearance of information warfare and the possibility of cloaking the attacker's true identity put serious pressure on traditional just war doctrines that call for adherence to the principles of right purpose, duly constituted authority, and last resort. Age-old strictures about noncombatant immunity are also attenuated by the varied means of attack enabled by advanced information technologies. Therefore, the nations and societies leading the information revolution have a primary ethical obligation to constrain the circumstances under which information warfare may be used: principally by means of a pledge of no first use of such means against noncombatants.
In this contribution, we identify and clarify some distinctions we believe are useful in establishing the reliability of information on the Internet. We begin by examining some of the salient features of information that go into the determination of reliability. In so doing, we argue that we need to distinguish content and pedigree criteria of reliability and that we need to separate issues of reliability of information from the issues of the accessibility and the usability of information. We then turn to an analysis of some common failures to recognize reliability or unreliability.
Biometrics is often described as 'the next big thing in information technology'. Rather than IT rendering the body irrelevant to identity – a mistaken idea to begin with – the coupling of biometrics with IT unequivocally puts the body center stage. The questions to be raised about biometrics are how bodies will become related to identity, and what the normative and political ramifications of this coupling will be. Unlike the body rendered knowable in the biomedical sciences, biometrics generates a readable body: it transforms the body's surfaces and characteristics into digital codes and ciphers to be 'read' by a machine. "Your iris is read, in the same way that your voice can be printed, and your fingerprint can be read", by computers that, in turn, have become "touch-sensitive", and endowed with seeing and hearing capacities. Thus transformed into readable "text", the meaning and significance of the biometric body will be contingent upon "context", and the relations established with other "texts". These metaphors open up ways to investigate the different meanings that will become attached to the biometric body and the ways in which it will be tied to identity. This paper reports on an analysis of plans and practices surrounding the 'Eurodac' project, a European Union initiative to use biometrics (specifically, fingerprinting) in controlling illegal immigration and border crossings by asylum seekers.
In discussions on the ethics of surveillance, and consequently surveillance policy, the public/private distinction is often implicitly or explicitly invoked as a way to structure the discussion and the arguments. In these discussions, the distinction between public and private is often treated as a uni-dimensional, rigidly dichotomous and absolute, fixed and universal concept, whose meaning could be determined by the objective content of the behavior. Nevertheless, if we take a closer look at the distinction in diverse empirical contexts we find it to be more subtle, diffused and ambiguous than suggested. Thus, the paper argues for the treatment of these distinctions as multi-dimensional, continuous and relative, fluid and situational or contextual, whose meaning lies in how they are interpreted and framed. However, the aim of this paper is not to finally sort things out. The objective is rather to demonstrate the complexities of the distinction in various contexts and to suggest that those using the distinction, when considering the ethics and politics of surveillance technologies, would benefit from more clearly specifying which dimensions they have in mind and how they relate.
This article shows how common morality can be helpful in clarifying the discussion of ethical issues that arise in computing. Since common morality does not always provide unique answers to moral questions, not all such issues can be resolved. However, common morality does provide a clear answer to the question of whether one can illegally copy software for a friend.
The present study examines certain challenges that KDD (Knowledge Discovery in Databases) in general and data mining in particular pose for normative privacy and public policy. In an earlier work (see Tavani, 1999), I argued that certain applications of data-mining technology involving the manipulation of personal data raise special privacy concerns. Whereas the main purpose of the earlier essay was to show what those specific privacy concerns are and to describe how exactly those concerns have been introduced by the use of certain KDD and data-mining techniques, the present study questions whether the use of those techniques necessarily violates the privacy of individuals. This question is considered vis-à-vis a recent theory of privacy advanced by James Moor (1997). The implications of that privacy theory for a data-mining policy are also considered.
The paper has three parts. First, a survey and analysis is given of the structure of individual rights in the recent EU Directive on data protection. It is argued that at the core of this structure is an unexplicated notion of what the data subject can 'reasonably expect' concerning the further processing of information about him or herself. In the second part of the paper it is argued that theories of privacy popular among philosophers are not able to shed much light on the issues treated in the Directive, which are, arguably, among the central problems pertaining to the protection of individual rights in the information society. In the third part of the paper, some suggestions are made for a richer philosophical theory of data protection and privacy. It is argued that this account is better suited to the task of characterizing the central issues raised by the Directive.
This article presents an overview of significant issues facing contemporary information professionals. As the world of information continues to grow at unprecedented speed and in unprecedented volume, questions must be faced by information professionals. Will we participate in the worldwide mythology of equal access for all, or will we truly work towards this debatable goal? Will we accept the narrowing of choice for our increasingly diverse clientele? Such questions must be considered in a holistic context, and an understanding of the many levels of information inequities is requisite. Beginning with an historical perspective, Buchanan presents Mustapha Masmoudi's seminal review of forms of information inequities. She then describes qualitative forms of inequities, such as information imperialism and cultural bias embedded in such practices as cataloging and classification. Following, a review of quantitative inequities is presented. Such issues as the growing commoditization of information and information services demand attention from the ethical perspective. And, finally, the Internet and the implications surrounding the world-wide dissemination of information are discussed.
This article addresses the question of whether personal surveillance on the world wide web is different in nature and intensity from that in the offline world. The article presents a profile of the ways in which privacy problems were framed and addressed in the 1970s and 1990s. Based on an analysis of privacy news stories from 1999–2000, it then presents a typology of the kinds of surveillance practices that have emerged as a result of Internet communications. Five practices are discussed and illustrated: surveillance by glitch, surveillance by default, surveillance by design, surveillance by possession, and surveillance by subject. The article offers some tentative conclusions about the progressive latency of tracking devices, about the complexity created by multi-sourcing, about the robustness of clickstream data, and about the erosion of the distinction between the monitor and the monitored. These trends emphasize the need to reject analysis that frames our understanding of Internet surveillance in terms of its impact on society. Rather, the Internet should be regarded as a form of life whose evolving structure becomes embedded in human consciousness and social practice, and whose architecture embodies an inherent valence that is gradually shifting away from the assumptions of anonymity upon which the Internet was originally designed.