The development of ever smaller integrated circuits at the sub-micron and nanoscale—in accordance with Moore’s Law—drives the production of very small tags, smart cards, smart labels and sensors. Nanoelectronics and submicron technology support surveillance technology that is practically invisible. I argue that one of the most urgent and immediate concerns associated with nanotechnology is privacy. Computing in the twenty-first century will not only be pervasive and ubiquitous, but also inconspicuous. If these features are not counteracted in design, they will facilitate ubiquitous surveillance practices that are widely available, cheap, and intrusive. RFID technology is an instructive example of what nanotechnology has in store for privacy.
Paul van den Hoven (Utrecht University, Utrecht, The Netherlands): "Marcin Lewinski: Internet Political Discussion Forums as an Argumentative Activity Type. A Pragma-dialectical Analysis of Online Forms of Strategic Manoeuvring in Reacting Critically." Argumentation 25(2): 255–259. DOI: 10.1007/s10503-011-9201-3.
Van den Belt recently examined the notion that synthetic biology and the creation of ‘artificial’ organisms are examples of scientists ‘playing God’. Here I respond to some of the issues he raises, including some of his comments on my previous discussions of the value of the term ‘life’ as a scientific concept.
The present discussion of sociobiological approaches to ethnic nepotism takes Pierre van den Berghe's theory as a starting point. Two points, which have not been addressed in former analyses, are considered to be of particular importance. It is argued that the behavioral mechanism of ethnic nepotism—as understood by van den Berghe—cannot explain ethnic boundaries and attitudes. In addition, I show that van den Berghe's central premise concerning ethnic nepotism is in contradiction to Hamilton's formula, the essential principle of kin selection theory. It is further discussed how other approaches that make reference to ethnic nepotism are related to van den Berghe's account and its problems. I conclude with remarks on the evolutionary explanation of ethnic phenomena.
We revisit the characterization of the Shapley value by van den Brink (Int J Game Theory, 2001, 30:309–319) via efficiency, the Null player axiom, and a fairness axiom. In particular, we show that this characterization also works within certain classes of TU games, including the classes of superadditive and of convex games. Further, we advocate a differential version of the marginality axiom (Young, Int J Game Theory, 1985, 14:65–72), which turns out to be equivalent to the van den Brink fairness axiom on large classes of games.
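For reference, a sketch of the machinery at issue: the Shapley value formula below is the standard one, while the fairness statement is my paraphrase of van den Brink's axiom and should be checked against the 2001 paper.

```latex
% Shapley value of player i in a TU game (N, v) with |N| = n:
\[
  \varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,(n - |S| - 1)!}{n!}\,
    \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]
% Fairness (paraphrase of van den Brink 2001): if players i and j are
% symmetric in the game w, then adding w to any game v changes their
% payoffs by the same amount:
\[
  \varphi_i(v + w) - \varphi_i(v) = \varphi_j(v + w) - \varphi_j(v)
\]
```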
The impact of the Internet on democracy is a widely discussed subject. Many writers view the Internet, potentially at least, as a boon to democracy and democratic practices. According to one popular theme, both e-mail and web pages give ordinary people powers of communication that have hitherto been the preserve of the relatively wealthy (Graham 1999, p. 79). So the Internet can be expected to close the influence gap between wealthy citizens and ordinary citizens, a weakness of many procedural democracies.
Ethical issues of information and communication technologies (ICTs) are important because they can have significant effects on human liberty, happiness, and people’s ability to lead a good life. They are also of functional interest because they can determine whether technologies are used and whether their positive potential can unfold. For these reasons, policy makers are interested in finding out what these issues are and how they can be addressed. The best way of creating ICT policy that is sensitive to ethical issues is to be proactive in addressing such issues at an early stage of the technology life cycle. The present paper uses this position as a starting point and discusses how knowledge of ethical aspects of emerging ICTs can be gained. It develops a methodology that goes beyond established futures methodologies to cater for the difficult nature of ethical issues. The authors outline how the description of emerging ICTs can be used for an ethical analysis.
Neither the apparently cold-blooded murder of a complete stranger, the central event in The Stranger, nor Hugo's murder of Hoederer in Dirty Hands—a political assassination or crime of passion, depending on how one views it—can be considered unusual acts, in literature or in life. The topic of murder has itself created an extremely popular genre: the detective novel or "whodunit," which has become a huge industry and has aficionados everywhere, Sartre being one. In French theater, the topic of political assassination has resulted in such famous plays as de Musset's Lorenzaccio (1834), which ostensibly deals with Florence in the sixteenth century and the tyrannical Alexandre de Médicis, who is assassinated by his young cousin, but is in fact "a limpid transposition of the failed revolution of July 1830." It is well known that Sartre was an admirer of Musset and Romantic theater. In 1946, Jean Cocteau, who helped with the staging of Les Mains sales (Dirty Hands), wrote L'Aigle à deux têtes (The Two-Headed Eagle), which was inspired "by the sad life of Empress Elisabeth of Austria and her tragic death by the hand of the Franco-Italian assassin, Luigi Lucheni." Sartre himself, in Nausea, has Anny use the engraving in Michelet's Histoire de France depicting the assassination of the Duke de Guise as a perfect illustration of "privileged situations."
Ethical issues of information and communication technologies (ICTs) are important because they can have significant effects on human liberty, happiness, and people's ability to lead a good life. They are also of functional interest because they can determine whether technologies are used and whether their positive potential can unfold. For these reasons, policy makers are interested in finding out what these issues are and how they can be addressed. The best way of creating ICT policy that is sensitive to ethical issues would be to be proactive and address such issues at early stages of the technology life cycle. The present paper uses this position as a starting point and discusses how knowledge of ethical aspects of emerging ICTs can be gained. It develops a methodology that goes beyond established futures methodologies to cater for the difficult nature of ethical issues. The paper goes on to outline some of the preliminary findings of a European research project that has applied this method.
We argue that nanotechnology in the form of invisible tags, sensors, and Radio Frequency Identity Chips (RFIDs) will give rise to privacy issues that are in two ways different from the traditional privacy issues of the last decades. One, they will not exclusively revolve around the idea of centralization of surveillance and concentration of power, as the metaphor of the Panopticon suggests, but will be about constant observation at decentralized levels. Two, privacy concerns may not exclusively be about constraining information flows but also about the design of materials and nano-artifacts such as chips and tags. We begin by presenting a framework for structuring the current debates on privacy, and then present our arguments.
Computer ethicists have for some years been troubled by the issue of how to assign moral responsibility for disastrous events involving erroneous information generated by expert information systems. Recently, Jeroen van den Hoven has argued that agents working with expert information systems satisfy the conditions for what he calls epistemic enslavement. Epistemically enslaved agents do not, he argues, have moral responsibility for accidents for which they bear causal responsibility. In this article, I develop two objections to van den Hoven's argument for epistemic enslavement of agents working with expert information systems.
It is argued that Pettit's conception of "contestatory democracy" is superior to deliberative, direct and epistemic democracy. The strong and weak points of these conceptions are discussed, drawing upon the work of, among others, Bruce Bimber. It is further argued that 'contestation' and 'information' are highly relevant notions in thinking about just, viable and sustainable design for E-democracy.
J. van den Hoven suggested analysing privacy from the perspective of informational justice, whereby he referred to the concept of distributive justice presented by M. Walzer in "Spheres of Justice". In "Privacy as Contextual Integrity", Helen Nissenbaum also pointed to Walzer's approach of complex equality as well as to van den Hoven's concept. In this article I will analyse the challenges of applying Walzer's concept to issues of informational privacy. I will also discuss the possibilities of framing privacy from the point of view of the "art of separation" by looking at the intersection of information infrastructures and institutions.
When thinking about ethics, technology is often only mentioned as the source of our problems, not as a potential solution to our moral dilemmas. When thinking about technology, ethics is often only mentioned as a constraint on developments, not as a source and spring of innovation. In this paper, we argue that ethics can be the source of technological development rather than just a constraint, and that technological progress can create moral progress rather than just moral problems. We show this by an analysis of how technology can contribute to the solution of so-called moral overload or moral dilemmas. Such dilemmas typically create a moral residue that is the basis of a second-order principle that tells us to reshape the world so that we can meet all our moral obligations. We can do so, among other things, through guided technological innovation.
An analysis of a broad sample of Dutch judicial and semi-judicial decisions shows structures similar to the ones Bhatia and Mazzi found earlier. The question is raised as to what explains this seemingly unchangeable judicial format. From a perspective of argumentative and communicative efficacy and comprehensibility, the format is certainly not the optimal choice. The explanation is that the format is the sign of an ideology: the format suggests an objectivity of the decision taken, which is actually a myth. This makes a decision to change the format an ideological one.
Thanks to the kind cooperation of Mrs. Elise Harding-Davis, director of the North American Black Historical Museum and Cultural Centre, we are able to reproduce the score of this famous melody which features so prominently in Sartre's Nausea. The museum is located in Amherstburg, Ontario, some thirty kilometers southwest of the Ambassador Bridge which links Detroit, Michigan with Windsor, Ontario. Shelton Brooks, who composed the melody in 1910, was a descendant of black slaves who made their way to freedom by way of "the underground railway" and settled in Southwestern Ontario. He was born in Amherstburg, toured widely in Canada, the United States and Europe, and finally settled in Fontana, California, where he died in 1975 at age 86. In the conclusion of Nausea, Roquentin incorrectly identifies him as a New York Jew and refers to the singer as black. In fact the composer was an Afro-Canadian, while the singer was New Yorker Sophie Tucker, who was Jewish.
In this paper we present an executable approach to model interactions between agents that involve sensitive, privacy-related information. The approach is formal and based on deontic, epistemic and action logic. It is conceptually related to the Belief-Desire-Intention model of Bratman. Our approach uses the concept of sphere as developed by Walzer to capture the notion that information is provided mostly with restrictions regarding its application. We use software agent technology to create an executable approach. Our agents hold beliefs about the world, have goals and commitments to those goals. They have the capacity to reason about different courses of action, and communicate with one another. The main new ingredient of our approach is the idea to model information itself as an intentional agent whose main goal is to preserve the integrity of the information and regulate its dissemination. We demonstrate our approach by applying it to an important process in the insurance industry: applying for life insurance. In this paper we will: (1) describe the challenge organizational complexity poses in moral reasoning about informational relationships; (2) propose an executable approach, using software agents with reasoning capacities grounded in modal logic, in which moral constraints on informational relationships can be modeled and investigated; (3) describe the details of our approach, in which information itself is modeled as an intentional agent in its own right; (4) test and validate it by applying it to a concrete 'hard case' from the insurance industry; and (5) conclude that our approach holds up and offers potential for both research and practical application.
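To make the paper's central modelling move concrete, here is a minimal sketch in Python rather than in the modal-logic agent framework the authors describe; the class name, sphere labels, and the single disclosure rule are illustrative assumptions of mine, not the authors' design:

```python
# Minimal sketch (illustrative, not the authors' implementation): a piece of
# information modelled as an intentional agent whose goal is to preserve its
# own integrity and regulate its dissemination across Walzerian spheres.
from dataclasses import dataclass, field

@dataclass
class InformationAgent:
    content: str
    origin_sphere: str                      # sphere of original disclosure
    allowed_spheres: set = field(default_factory=set)
    disclosure_log: list = field(default_factory=list)

    def may_disclose(self, target_sphere):
        # Illustrative deontic rule: information may only travel within the
        # sphere of its original disclosure or explicitly permitted spheres.
        return (target_sphere == self.origin_sphere
                or target_sphere in self.allowed_spheres)

    def request(self, requester, target_sphere):
        # The information agent itself decides whether to grant access,
        # logging every attempt so the dissemination trail stays auditable.
        if self.may_disclose(target_sphere):
            self.disclosure_log.append(f"granted to {requester} ({target_sphere})")
            return self.content
        self.disclosure_log.append(f"denied to {requester} ({target_sphere})")
        return None

# Hypothetical life-insurance scenario, loosely after the paper's case study:
datum = InformationAgent(
    content="applicant medical record",
    origin_sphere="medical",
    allowed_spheres={"insurance-underwriting"},  # assume applicant consented
)
print(datum.request("underwriter", "insurance-underwriting"))  # -> content
print(datum.request("marketing", "commercial"))                # -> None
print(datum.disclosure_log)
```

A fuller implementation would add epistemic operators (what each agent believes) and BDI-style commitments; the point here is only the inversion the paper proposes: access control attached to the information itself rather than to the database holding it.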
An argumentative text can be reconstructed as an argumentative discussion between a protagonist and an antagonist. However, such a text is usually not a literal report of a discussion. It is the author of the text who determines how issues are presented, how claims are modeled, how the development of the discussion is presented. Especially when a text has embedded discourse voices that can fulfill the roles of protagonist or antagonist, the author of the text can strongly suggest a specific assignment, suppressing alternatives. In this article examples are presented that show how an author exploits linguistic means—a strategic choice of causal connectives—to suggest a specific reconstruction. The question is raised whether a derailment of this behavior of the author should be characterized as committing the fallacy of the straw man.
Vaccination programmes against infectious diseases aim to protect individuals from serious illness but also offer collective protection once a sufficient number of people have been immunized. This so-called 'herd immunity' is important for individuals who, for health reasons, cannot be immunized or who respond less well to vaccines. For these individuals, it is pivotal that others establish group protection. However, herd immunity can be compromised when people deliberately decide not to be immunized and benefit from the herd's protection. These agents are often referred to as free riders: their omissions are deemed to be unfair to those who do contribute to the collective's health. This article addresses the unfairness of such 'free riding'. An argument by Garrett Cullity is examined, which asserts that the unfairness of moral free riding lies neither in one's intentions, nor in one's reluctance to embrace a public good. This argument offers a strong basis for justifiably arguing that free riding is unfair. However, it is then argued that other considerations also need to be taken into account before simply holding free riding against non-compliers.
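As quantitative background for the "sufficient number" mentioned above (a textbook epidemiological result added here for context, not part of the paper's argument): under the simplest homogeneous-mixing model, the immunized fraction p of the population must satisfy

```latex
% Classic herd-immunity threshold, with R_0 the basic reproduction number:
\[
  p \;\geq\; 1 - \frac{1}{R_0}
\]
```

For a highly contagious disease such as measles, with R_0 around 15, roughly 93% of the population must be immune before non-immunized individuals enjoy herd protection, which is why even modest free riding can be consequential.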
Although applications are being developed and have reached the market, nanopharmacy to date is generally still conceived as an emerging technology, and its concept is ill-defined. Nanopharmacy can also be construed as a converging technology, which combines features of multiple technologies, ranging from nanotechnology to medicine and ICT. It is still debated whether its features give rise to new ethical issues or whether issues associated with nanopharmacy are merely an extension of existing issues in the underlying fields. We argue here that, regardless of the alleged newness of the ethical issues involved, developments occasioned by technological advances affect the roles played by stakeholders in the field of nanopharmacy to such an extent that this calls for a different approach to responsible innovation in this field. Specific features associated with nanopharmacy itself, and features introduced by the associated converging technologies, bring about a shift in the roles of stakeholders that calls for a different approach to responsibility. We suggest that Value Sensitive Design is a suitable framework to involve stakeholders in addressing moral issues responsibly at an early stage of development of new nanopharmaceuticals.
A systematic rhetorical analysis may reveal elements of multimodal argumentative discourse that would otherwise remain hidden. In this article, we present both the basics of the method we have developed to integrate theories about different modalities into one parallel processing framework for rhetorical analysis, and the results of its application to an intriguing ad.
This article analyzes articles and interviews published in Sartre on Theater and focuses on five plays (Bariona, The Flies, No Exit and The Condemned of Altona) in order to arrive at a coherent conception of Sartre's theater. Sartre views the stage as "belonging to a different imaginary realm" in which the characters' language, gestures and the props function in a synecdochical relationship in respect to the spectators. It is their task to grasp these "signs" and bundle them into a coherent and meaningful whole. Because Sartre views the theater as an imaginary realm, he can free himself from the strictures of his philosophy: 1) the irreversibility of time; 2) the fact that life does not give us a second chance; and 3) that death means that our life falls into the public domain. This freedom allows Sartre to deal with temporality in a novel way and to deal with "life after death" as life simply continued. Conversely, he can scramble temporality for psychological reasons in order to bring out deep-rooted personal conflicts, as he does in The Condemned of Altona.
In this paper, we consider the meaning, roles, and uses of trust in the economic and public domain, focusing on the task of designing systems for trust in information technology. We analyze this task by means of a survey of what trust means in the economic and public domain, using the model proposed by Lewicki and Bunker, and using the emerging paradigm of value-sensitive design. We explore the difficulties developers face when designing information technology for trust and show how our analysis in conjunction with existing engineering design methods provides means to address these difficulties. Our main case concerns a concrete problem in the economic domain, namely the transfer of control from customs agencies to companies. Control of individual items is increasingly untenable and is replaced by control on the level of companies aimed at determining whether companies can be trusted to be in control of their business and to be in compliance with applicable regulations. This transfer sets the task for companies to establish this trust by means of information technology systems. We argue that this trust can be achieved by taking into account philosophical analyses of trust and by including both parties in the trust relationship as clients for whom the information technology systems are to be designed.
In this article we argue that discourse structure constrains the set of possible constituents in a discourse that can provide the relevant context for structuring information in a target sentence, while information structure critically constrains discourse structure ambiguity. For the speaker, the discourse structure provides a set of possible contexts for continuation, while information structure assignment is independent of discourse structure. For the hearer, the information structure of a sentence together with discourse structure instructs dynamic semantics how rhematic information should be used to update the meaning representation of the discourse (Polanyi and van den Berg, 1996).
Kristin Hagen (Europäische Akademie Bad Neuenahr-Ahrweiler), Ruud van den Bos (Faculty of Veterinary Medicine, Utrecht University) and Tjard de Cock Buning (ATHENA Institute, Vrije Universiteit Amsterdam): Editorial: Concepts of Animal Welfare. Acta Biotheoretica 59(2): 93–103. DOI: 10.1007/s10441-011-9134-0.
In this paper, I argue against Peter van Inwagen's claim (in "Free Will Remains a Mystery") that agent-causal views of free will could do nothing to solve the problem of free will (specifically, the problem of chanciness). After explaining van Inwagen's argument, I argue that he does not consider all possible manifestations of the agent-causal position. More importantly, I claim that, in any case, van Inwagen appears to have mischaracterized the problem in some crucial ways. Once we are clear on the true nature of the problem of chanciness, agent-causal views do much to eradicate it.
I. Introduction. "We can and do see the truth about many things: ourselves, others, trees and animals, clouds and rivers—in the immediacy of experience." Absent from Bas van Fraassen's list of those things we see are paramecia and mitochondria. We do not see such things, van Fraassen has long maintained, because they are unobservable, that is, they are undetectable by means of the unaided senses. But notice that these two notions—what we can see in the "immediacy" of experience and what is detectable by means of the unaided senses—are not the same. There is no incoherence in maintaining that the immediacy of experience is capable of disclosing to us truths concerning entities that are not detectable by the naked eye. And so, I claim, it does; science and technology provide us with the means to see things we have never seen before. Some of those things are van Fraassen's unobservables. That suggestion is nothing new. Grover Maxwell long ago emphasized the continuity between seeing with and without instrumentation. Van Fraassen originally provided two responses to Maxwell's arguments: some things that you can see with instruments you can also see without instruments (and those are the observables); and…
The anti-reductionist who wants to preserve the causal efficacy of mental phenomena faces several problems in regard to mental causation, i.e. mental events which cause other events, arising from her desire to accept the ontological primacy of the physical and at the same time save the special character of the mental. Psychology tries to persuade us of the former, appealing thereby to the results of experiments carried out in neurology; the latter is, however, deeply rooted in our everyday actions and beliefs, and despite the constant opposition of science still very much alive. Difficulties, however, arise from a combination of two claims that are widely accepted in philosophy of mind, namely physical monism and mental realism, the acceptance of which leads us to the greatest problem of mental causation: the problem of causal exclusion. Since physical causes alone are always sufficient for physical effects, mental properties are excluded from causal explanations of our behaviour, which makes them "epiphenomenal". The article introduces Van Gulick's solution to the exclusion problem, which tries to prove that physical properties, in contrast to mental properties, do not have as privileged a status with respect to event causation as is usually ascribed to them. Therefore, it makes no sense to say that physical properties are causally relevant whereas mental properties are not. This is followed by my objection to his argument for levelling mental and physical properties with respect to causation of events. I try to show that Van Gulick's argument rests on a premise that no serious physicalist can accept.
Van Heijenoort's main contribution to history and philosophy of modern logic was his distinction between two basic views of logic: first, the absolutist, or universalist, view of the founding fathers, Frege, Peano, and Russell, which dominated the first, classical period of history of modern logic, and, second, the relativist, or model-theoretic, view, inherited from Boole, Schröder, and Löwenheim, which has dominated the second, contemporary period of that history. In my paper, I present the man Jean van Heijenoort (Sect. 1); then I describe his way of arguing for the second view (Sect. 2); and finally I come down in favor of the first view (Sect. 3). There, I specify the version of universalism for which I am prepared to argue (Sect. 3, introduction). Choosing ZFC to play the part of universal, logical (in a nowadays forgotten sense) system, I show, through an example, how the usual model theory can be naturally given its proper place, from the universalist point of view, in the logical framework of ZFC; I outline another, not rival but complementary, semantics for admissible extensions of ZFC in the very same logical framework; I propose a way to get universalism out of the predicaments in which universalists themselves believed it to be (Sect. 3.1). Thus, if universalists of the classical period did not, in fact, construct these semantics, it was not that their universalism forbade them, in principle, to do so. The historical defeat of universalism was not technical in character. Neither was it philosophical. Indeed, it was hardly more than the victory of technicism over the very possibility of a philosophical dispute (Sect. 3.2).